PyEmir Documentation
Release 0.11
Sergio Pascual, Nicolás Cardiel
Dec 11, 2018


Contents

1 PyEmir User Guide
2 PyEmir MOS Tutorial
3 PyEmir Reference
Glossary
Python Module Index


Welcome. This is the documentation for PyEmir (version 0.11). EMIR is a wide-field, near-infrared, multi-object spectrograph proposed for the Nasmyth focus of GTC. It will allow observers to obtain from tens to hundreds of intermediate-resolution spectra simultaneously in the NIR bands Z, J, H and K. A multi-slit mask unit will be used for target acquisition. EMIR is designed to address the science goals of the proposing team and of the Spanish community at large.

PyEmir user guide: PyEmir User Guide
PyEmir MOS tutorial: PyEmir MOS Tutorial
PyEmir reference guide: PyEmir Reference


CHAPTER 1

PyEmir User Guide

This guide is intended as an introductory overview of PyEmir and explains how to install and make use of its most important features. For detailed reference documentation of the functions and classes contained in the package, see the PyEmir Reference.

Warning: This User Guide is still a work in progress; some of the material is not organized, and several aspects of PyEmir are not yet covered in sufficient detail.

1.1 PyEmir Installation

This is PyEmir, the data reduction pipeline for EMIR. PyEmir is distributed under the GNU GPL, either version 3 of the License, or (at your option) any later version. See the file COPYING for details.

PyEmir requires the following packages to be installed in order to build and work properly:

- setuptools
- numpy
- scipy
- astropy >= 1.1
- matplotlib
- six
- numina >= 0.15
- photutils
- sep
- scikit-image

Additional packages are optionally required:

- pytest, to run the tests
- sphinx, to build the documentation

Webpage:
Maintainer:

1.1.1 Stable version

The latest stable version of PyEmir can be downloaded from the project webpage. To install PyEmir, use the standard installation procedure:

$ tar zxvf pyemir-x.y.z.tar.gz
$ cd pyemir-x.y.z
$ python setup.py install

The install command provides options to change the target directory. By default, installation requires administrative privileges. The different installation options can be checked with:

$ python setup.py install --help

1.1.2 Development version

The development version can be checked out with:

$ git clone

and then installed following the standard procedure:

$ cd pyemir
$ python setup.py install

Using conda

Install and configure conda. Then install the dependencies (you can create an environment):

$ conda create --name emir python=3
$ source activate emir
(emir) $ conda install numpy scipy astropy matplotlib six scikit-image
(emir) $ conda install -c astropy photutils
(emir) $ conda install cython pyyaml

The latest development version of the emirdrp source code can be retrieved using git. In addition, we will need the latest version of numina:

$ git clone
$ git clone

Then, to build and install emirdrp:

(emir) $ cd numina
(emir) $ python setup.py build
(emir) $ python setup.py install
(emir) $ cd ../emirdrp
(emir) $ python setup.py build
(emir) $ python setup.py install

1.1.3 Building the documentation

The PyEmir documentation is based on Sphinx. With the package installed, the HTML documentation can be built from the doc directory:

$ cd doc
$ make html

The documentation will be copied to a directory under build/sphinx. The documentation can be built in different formats; the complete list will appear if you type make.

1.2 PyEmir Deployment with Virtualenv

Virtualenv is a tool to build isolated Python environments. It's a great way to quickly test new libraries without cluttering your global site-packages, or to run multiple projects on the same machine that depend on a particular library but not on the same version of it.

1.2.1 Install virtualenv

I install it with the package system of my OS, so that it ends up in my global site-packages. With Fedora/EL it is just:

$ sudo yum install python-virtualenv

1.2.2 Create virtual environment

Create the virtual environment enabling the packages already installed in the global site-packages via the OS package system. Some requirements (in particular numpy and scipy) are difficult to build: they require compilation and external C and FORTRAN libraries to be installed. So the command is:

$ virtualenv --system-site-packages myenv

If you need to create the virtualenv without the global packages, drop the --system-site-packages flag.

1.2.3 Activate the environment

Once the environment is created, you need to activate it. Just change directory into it and load the script bin/activate with your command line interpreter. With bash:

$ cd myenv
$ . bin/activate
(myenv) $

With csh/tcsh:

$ cd myenv
$ source bin/activate
(myenv) $

Notice that the prompt changes once you have activated the environment. To deactivate it just type deactivate:

(myenv) $ deactivate
$

1.2.4 Install PyEmir

PyEmir is registered in the Python Package Index. That means (among other things) that it can be installed inside the environment with one command:

(myenv) $ pip install pyemir

The requirements of pyemir will be downloaded and installed inside the virtual environment. Once the installation is finished, you can check it by listing the installed recipes:

(myenv) $ ./bin/numina show

Bias Image: Recipe to process bias images
 Instrument: EMIR
 Recipe: emirdrp.recipes.biasrecipe
 Key: bias_image
 UUID: a7ea9c0c-76a ec

Dark current Image: Summary of Dark current Image
 Instrument: EMIR
 Recipe: emirdrp.recipes.darkrecipe
 Key: dark_image
 UUID: 5b15db e8ca27a866af

1.3 PyEmir Deployment with Conda

1.3.1 Install conda

Install conda/Anaconda. Both versions (2 and 3) are supported. If you have conda installed already, you don't need to do it again.

1.3.2 Create an environment

With the command:

$ conda create --name emir python=3

Then, activate the environment:

$ source activate emir

1.3.3 Install dependencies

Most of the dependencies can be grabbed from the conda repositories:

$ conda install numpy scipy astropy matplotlib six scikit-image
$ conda install cython pyyaml pytest
$ conda install -c astropy photutils lmfit
$ pip install sep

1.3.4 Download and install the source code

The development version is hosted at GitHub. Choose a top-level directory for keeping the pyemir source code, then:

$ git clone
$ git clone

Then, you can install the packages:

$ cd numina
$ python setup.py build && python setup.py install # lots of output
$ cd ../pyemir
$ python setup.py build && python setup.py install # lots of output
$ cd ..

To check that the pipeline is installed, run:

$ numina show-instruments

The expected output is:

DEBUG: Numina simple recipe runner version 0.15.dev5
Instrument: EMIR
 has configuration 'Default configuration' uuid=225fcaf2-7f6f-49cc-972a-70fd0aee8e96
 default is 'Default configuration'
 has datamodel 'emirdrp.datamodel.emirdatamodel'
 has pipeline 'default', version

1.4 Running the pipeline

The EMIR DRP is run through a command line interface provided by numina. The run mode of numina requires:

- an observation result file in YAML format
- a requirements file in YAML format
- the raw images obtained in the observing block
- the calibrations required by the recipe

The observation result file and the requirements file are created by the user; their formats are described in the following sections.

1.4.1 Format of the observation result

The content of the file is a serialized dictionary with the following keys:

id: not required, string, defaults to 1. Unique identifier of the observing block.
instrument: required, string. Name of the instrument, as returned by numina show-instruments.
mode: required, string. Name of the observing mode, as returned by numina show-modes.
frames: required, list of strings. List of image names.
children: not required, list of integers, defaults to an empty list. Identifications of nested observing blocks.

This is an example of the observation result file:

id: dark-test-21
instrument: EMIR
mode: TEST6
images:
 - r0121.fits
 - r0122.fits
 - r0123.fits
 - r0124.fits
 - r0125.fits
 - r0126.fits
 - r0127.fits
 - r0128.fits
 - r0129.fits
 - r0130.fits
 - r0131.fits
 - r0132.fits

1.4.2 Format of the requirements file

This file contains calibrations obtained by running recipes (called products) and other parameters (numeric or otherwise) required by the recipes (named requirements). The file is serialized using YAML.

Example requirements file:

version: 1 (1)
products: (2)
 EMIR:
 - {id: 1, content: 'file1.fits', type: 'MasterFlat', tags: {'filter': 'J'}, ob: 200} (3)
 - {id: 4, content: 'file4.fits', type: 'MasterBias', tags: {'readmode': 'cds'}, ob: 400} (3)
 MEGARA:
 - {id: 1, content: 'file1.fits', type: 'MasterFiberFlat', tags: {'vph': 'LR-U'}, ob: 1200} (3)
 - {id: 2, content: 'file2.yml', type: 'TraceMap', tags: {'vph': 'LR2', 'readmode': 'fast'}, ob: 1203} (3)
requirements: (4)
 MEGARA:
  default:
   MegaraArcImage: (5)
    polynomial_degree: 5 (6)
    nlines: [5, 5] (6)

1. Mandatory entry; version must be 1.
2. Products of other recipes are listed, by instrument.
3. The products of the reduction recipes are listed. Each result must contain:
   - a type, one of the types of the products of the DRP, in string format
   - a tags field, used to select the correct calibration based on the keywords of the input
   - a content field, a pointer to the serialized version of the calibration
   - an id field, a unique integer
   - an ob field, an optional integer, used to store the observation id of the images that created the calibration
4. Numerical parameters of the recipes are stored in requirements, with different sections per instrument.
5. The name of the observing mode.
6. Different parameters for the recipe corresponding to the observing mode in (5).
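For convenience, an observation result file with the keys described in the previous section can also be generated programmatically. The following snippet is only an illustrative sketch (it is not part of the official tooling); the identifier, observing mode and frame names are taken from the example above, and PyYAML is assumed to be available.

# Illustrative sketch: build an observation result file with the keys
# described above. Identifier, mode and frame names are placeholders taken
# from the example in the text.
import yaml

obsresult = {
    'id': 'dark-test-21',          # optional unique identifier
    'instrument': 'EMIR',          # as reported by `numina show-instruments`
    'mode': 'TEST6',               # as reported by `numina show-modes`
    'frames': ['r%04d.fits' % i for i in range(121, 133)],
    'children': [],                # no nested observing blocks
}

with open('obsresult.yaml', 'w') as fd:
    yaml.safe_dump(obsresult, fd, default_flow_style=False)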

1.4.3 Running the pipeline

numina copies the images (calibrations and raw data) from the directory datadir to the directory workdir, where the processing happens. The result is stored in the directory resultsdir. The default values for each directory are data, obsid<id_of_obs>_work and obsid<id_of_obs>_results. All these directories can be defined on the command line using flags:

$ numina run --workdir /tmp/test1 --datadir /scrat/obs/run12222 obs.yaml -r requires.yaml

See Command Line Interface for a full description of the command line interface.

Following the example, we create a directory data in our current directory and copy there the raw frames from r0121.fits to r0132.fits and the master bias master_bias-1.fits. Then we run:

$ numina run obsresult.yaml -r requirements.yaml
INFO: Numina simple recipe runner version 0.15
INFO: Loading observation result from 'obsrun.yaml'
INFO: Identifier of the observation result: 1
INFO: instrument name:
...
numina.recipes.emir INFO stacking 4 images using median
numina.recipes.emir INFO bias reduction ended
INFO: result: BiasRecipeResult(qc=Product(type=QualityControlProduct(), dest='qc'), biasframe=product(type=masterbias(), dest='biasframe'))
INFO: storing result

We get information about what's going on through logging messages. In the end, the result and log files are stored in obsid<id_of_obs>_results. The working directory obsid<id_of_obs>_work can be inspected too; intermediate results are saved there.

Alternatively, the following short script runs PyEmir from Python. This is useful for running it under the Python debugger:

from numina.user.cli import main

def run_recipe():
    main(['run', 'obsresult.yaml', '-r', 'requirements.yaml'])

if __name__ == "__main__":
    run_recipe()

1.5 EMIR Reduction Recipes

author: Sergio Pascual <sergiopr@fis.ucm.es>, Nicolás Cardiel <cardiel@fis.ucm.es>
date:
version:

Execution environment of the Recipes

Recipes have different execution environments. Some recipes are designed to process observing modes required while observing at the telescope. These modes are related to visualization, acquisition and focusing. The corresponding recipes are integrated in the GTC environment. We call these recipes the Data Factory Pipeline (DFP).

Another group of recipes is devoted to the scientific observing modes: imaging, spectroscopy and auxiliary calibrations. These recipes constitute the Data Reduction Pipeline (DRP). The software is meant to be standalone: users shall download the software and run it on their own computers, with reduction parameters and calibrations provided by the instrument team. Users of the DRP may use the simple Numina CLI (Command Line Interface) or the higher-level, database-driven Pontifex. Users of the DFP shall interact with the software through the GTC Inspector.

Recipe Parameters

EMIR Recipes based on Numina have a list of required parameters needed to properly configure the Recipe. The Recipe announces the required parameters with the following syntax (the syntax is subject to changes):

class SomeRecipeInput(RecipeInput):
    master_dark = DataProductParameter(MasterDark, 'Master dark image')
    some_numeric_value = Parameter(0.45, 'Some numeric value')

class SomeRecipe(RecipeBase):
    ...

When the reduction is run from the command line using Numina CLI, the program checks that the required values are provided or have default values. When the reduction is automatically executed using Pontifex, the program searches the operation database looking for the most appropriate data products (in this case, a MasterDark frame). When the Recipe is properly configured, it is executed with an observing block data structure as input. When run using Numina CLI, this data structure is created from a text file. When run with Pontifex, the observing block data structure is created from the contents of the database.

Recipe Products

Recipes based on Numina provide a list of products created by the recipe. The Recipe announces the provided products with the following syntax (the syntax is subject to changes):

class SomeRecipeInput(RecipeInput):
    master_dark = DataProductParameter(MasterDark, 'Master dark image')
    some_numeric_value = Parameter(0.45, 'Some numeric value')

class SomeRecipeResult(RecipeResult):
    master_flat = ...

class SomeRecipe(RecipeBase):
    ...

In the following two sections, we list the Reduction Recipes for the DRP and for the DFP. The format is: name of the Python class of the recipe, name of the observing mode, required parameters and data products provided. From the fully qualified name of the recipe we have removed the initial emirdrp.recipes. The names of the parameters are prefixed with Product if the parameter is the result provided by another Recipe. If not, the value is a Parameter, or an OptionalParameter that will be ignored if not present.

DFP Recipes Parameters

class focus.telescoperoughfocusrecipe
  mode: TS rough focus
  requires:

    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Parameter: objects
    Parameter: focus_range
  provides: TelescopeFocus

class focus.telescopefinefocusrecipe
  mode: TS fine focus
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Parameter: objects
  provides: TelescopeFocus

class focus.dtufocusrecipe
  mode: EMIR focus control
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Parameter: objects
    Parameter: msm_pattern
    Parameter: dtu_focus_range
  provides: DTUFocus

class acquisition.maskcheckrecipe
  mode: Target acquisition
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask

    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
  provides: TelescopeOffset

class acquisition.maskimagingrecipe
  mode: Mask image
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
  provides: MSMPositions

class acquisition.maskcheckrecipe
  mode: MSM and LSM check
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
  provides: TelescopeOffset, MSMPositions

DRP Recipes Parameters

class auxiliary.biasrecipe
  mode: Bias image
  requires:
  provides: MasterBias

class auxiliary.darkrecipe
  mode: Dark image
  requires:
    Product: MasterBias
  provides: MasterDark

class auxiliary.intensityflatrecipe
  mode: Intensity flat-field
  requires:

    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
  provides: MasterIntensityFlat

class auxiliary.spectralflatrecipe
  mode: MSM spectral flat-field
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
  provides: MasterSpectralFlat

class auxiliary.slittransmissionrecipe
  mode: Slit transmission calibration
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
  provides: SlitTransmissionCalibration

class auxiliary.wavelengthcalibrationrecipe
  mode: Wavelength calibration
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Product: MasterSpectralFlatField
    Parameter: line_table (with wavelengths of arc lines)
  provides: WavelengthCalibration

class image.stareimagerecipe
  mode: Stare image

  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    OptionalParameter: sources (list of source coordinates)
  provides: Image, SourcesCatalog

class image.nbimagerecipe
  mode: Nodded/Beamswitched images
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Parameter: extinction (Mean atmospheric extinction)
    Parameter: iterations
    Parameter: sky_images (Images used to estimate the background before and after current image)
    Parameter: sky_images_sep_time (Maximum separation time between consecutive sky images in minutes)
    Parameter: check_photometry_levels (Levels to check the flux of the objects)
    Parameter: check_photometry_actions (Actions to take on images)
    OptionalParameter: offsets (list of integer offsets between images)
  provides: Image, SourcesCatalog

class image.ditheredimagerecipe
  mode: Dithered images
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Parameter: extinction (Mean atmospheric extinction)
    Parameter: iterations
    Parameter: sky_images (Images used to estimate the background before and after current image)

    Parameter: sky_images_sep_time (Maximum separation time between consecutive sky images in minutes)
    Parameter: check_photometry_levels (Levels to check the flux of the objects)
    Parameter: check_photometry_actions (Actions to take on images)
  provides: Image, SourcesCatalog

class image.microditheredimagerecipe
  mode: Micro-dithered images
  requires:
    All the parameters of image.ditheredimagerecipe
    Parameter: subpixelization (number of subdivisions in each pixel side)
  provides: Image, SourcesCatalog

class image.mosaicrecipe
  mode: Mosaiced images
  requires:
  provides: Image, SourcesCatalog

class mos.starespectrarecipe
  mode: Stare spectra
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Product: MasterSpectralFlatField
    Product: SlitTransmissionCalibration
    Product: WavelengthCalibration
    Parameter: lines (wavelength to measure)
  provides: Spectra, LinesCatalog

class mos.dnspectrarecipe
  mode: Dithered/Nodded spectra along the slit
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection

    Product: MasterIntensityFlatField
    Product: MasterSpectralFlatField
    Product: SlitTransmissionCalibration
    Product: WavelengthCalibration
    Parameter: lines (wavelength to measure)
    OptionalParameter: offsets (list of integer offsets between images)
  provides: Spectra, LinesCatalog

class mos.offsetspectrarecipe
  mode: Offset spectra beyond the slit
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Product: MasterSpectralFlatField
    Product: SlitTransmissionCalibration
    Product: WavelengthCalibration
    Parameter: lines (wavelength to measure)
    OptionalParameter: offsets (list of integer offsets between images)
  provides: Spectra, LinesCatalog

class mos.rasterspectrarecipe
  mode: Raster spectra
  requires:
    Product: MasterBias
    Product: MasterDark
    Product: MasterBadPixelMask
    Product: NonLinearityCorrection
    Product: MasterIntensityFlatField
    Product: MasterSpectralFlatField
    Product: SlitTransmissionCalibration
    Product: WavelengthCalibration
    Parameter: lines (wavelength to measure)
  provides: DataCube

class engineering.dtu_xy_calibrationrecipe
  mode: DTU X_Y calibration

  requires:
    Parameter: slit_pattern
    Parameter: dtu_range
  provides: DTU_XY_Calibration

class engineering.dtu_z_calibrationrecipe
  mode: DTU Z calibration
  requires:
    Parameter: dtu_range
  provides: DTU_Z_Calibration

class engineering.dtuflexurerecipe
  mode: DTU Flexure compensation
  requires:
  provides: DTUFlexureCalibration

class engineering.csu2detectorrecipe
  mode: CSU2Detector calibration
  requires:
    Parameter: dtu_range
  provides: DTU_XY_Calibration

class engineering.focalplanecalibrationrecipe
  mode: Lateral colour
  requires:
  provides: PointingOriginCalibration

class engineering.spectralcharacterizationrecipe
  mode: Spectral characterization
  requires:
  provides: WavelengthCalibration

class engineering.rotationcenterrecipe
  mode: Centre of rotation
  requires:
  provides: PointingOriginCalibration

class engineering.astrometriccalibrationrecipe
  mode: Astrometric calibration
  requires:

  provides: Image

class engineering.photometriccalibrationrecipe
  mode: Photometric calibration
  requires:
    Parameter: phot
  provides: PhotometricCalibration

class engineering.spectrophotometriccalibrationrecipe
  mode: Spectrophotometric calibration
  requires:
    Parameter: sphot
  provides: SpectroPhotometricCalibration

1.6 EMIR Data Products

Each recipe of the EMIR Pipeline produces a set of predefined results, known as data products. In turn, the recipes may request different data products as computing requirements, effectively chaining the recipes. For example, the requirements of the Intensity Flat-Field recipe include a MasterDark object. This object is produced by the recipe DarkRecipe, which in turn requires a MasterBias object.

FITS Keywords

The description of the keywords follows a convention found in other FITS keyword dictionaries. The keyword name is given, with a reference to the paper where it is defined, followed by the type of HDU where the keyword can appear. The value shows the kind of variable represented by the keyword, the comment is an example of the comment associated with the keyword, and the definition is a detailed explanation of the usage of the keyword.

Primary header

Type  Keyword   Example         Explanation
L     SIMPLE    T               Standard FITS format
I     BITPIX    16              One of -64, -32, 8, 16, 32
I     NAXIS     2               # of axes in frame
I     NAXIS1                    # of pixels per row
I     NAXIS2                    # of rows
S     ORIGIN    GTC             FITS originator
S     OBSERVAT  ORM             Observatory
S     TELESCOP  GTC             The telescope
S     INSTRUME  EMIR            The instrument
S     OBJECT    NGC 4594        Target designation
S     OBSERVER  OBSERVER        Who acquired the data
S     DATE-OBS  T12:00:11.50    Date of the start of the observation
S     DATE      T12:14:12.78    Date the file was written
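As a quick, illustrative sketch (not part of PyEmir itself), some of the primary-header keywords listed above can be inspected with astropy; the file name below is a placeholder.

# Minimal sketch: inspect a few of the primary-header keywords listed above.
# 'r0121.fits' is a placeholder file name.
from astropy.io import fits

with fits.open('r0121.fits') as hdulist:
    hdr = hdulist[0].header            # primary HDU
    for key in ('INSTRUME', 'OBJECT', 'DATE-OBS', 'EXPTIME', 'READMODE'):
        print(key, '=', hdr.get(key, 'not present'))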

Required by the pipeline

Type  Keyword   Example          Explanation
R     AIRMASS                    Mean airmass of the observation
R     MJD-OBS                    Modified JD of the start of the observation
S     IMAGETYP  FLAT             Type of the image
S     OBSTYPE   FLATON           Type of observation
L     READPROC  T                The frame has been preprocessed after readout
S     READMODE  RAMP             The readmode used to acquire the image
I     READSAMP  12               Number of samples taken (equal to NAXIS1 if NAXIS is 3)
R     EXPOSED                    Photometric time?
R     DARKTIME  TBD
R     EXPTIME   TBD
R     ELAPSED                    Time between resets?
I     OBSID     567              Identifier of the observing block
S     OBSMODE   DITHER_IMAGES    Identifier of the observing mode
I     OBSEXPN   30               # of exposures during the block
I     OBSSEQN   12               # of the image in the sequence

Coordinate system

The specifications of world coordinates are treated in a series of four papers. By world coordinates we mean coordinates that serve to locate a measurement in some multidimensional parameter space. They include, for example, a measurable quantity such as frequency or wavelength associated with each point of the spectrum, or a longitude and latitude in a conventional spherical coordinate system.

Representation of world coordinates in FITS, Greisen, E.W. & Calabretta, M.R. 2002, A&A 395, 1061 (hereafter Paper I) describes a very general method for specifying coordinates. A pixel-to-coordinate matrix PCj_i will replace CROTAj, units will be described with a new keyword CUNITj, and secondary sets of coordinate descriptions may be specified. A complete system of unit specification is described and is expected to supplement the IAU standard system of units. Methods for describing the coordinates of matrices in binary tables are also described.

Representation of celestial coordinates in FITS, Calabretta, M.R. & Greisen, E.W. 2002, A&A 395, 1077 (hereafter Paper II) applies the general rules of Paper I to the specific problem of specifying celestial coordinates in a two-dimensional projection of the sky. The coordinate system is specified with the new keyword RADESYS and a large number of projections are defined. Oblique projections are described and illustrated. Several examples of header interpretation and construction are given, including one that specifies coordinates on a planetary body rather than the celestial sphere. The application to binary tables is described.

Representation of spectral coordinates in FITS, Greisen, E.W. et al. (hereafter Paper III) is still open to comments from the FITS community. It applies the general rules and practices developed in the first two papers to spectral coordinates, namely frequency, wavelength, velocity, and the radio and optical conventional velocities. These are defined, and methods of computing one type of coordinate from a spectral axis gridded in another are given. A projection representative of optical spectrometers is also defined. Coordinate reference frames may be specified.

Representation of distortions in FITS world coordinate systems, Calabretta, M.R. et al. (hereafter Paper IV) is in preparation. It will define Distortion Correction Functions (DCFs) which may be used to correct for instrumental defects including celestial coordinate warps (plate defects), variation of actual frequency with celestial coordinate, refraction, and the like.

The set of usable WCS keywords are those supported by the wcslib library.
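Since the usable WCS keywords are those supported by wcslib, any wcslib-based tool can interpret them. The sketch below uses astropy.wcs (a wrapper around wcslib) purely for illustration; the file name and pixel coordinates are placeholders.

# Sketch: interpret the celestial WCS of a reduced EMIR image with astropy.wcs
# (a wrapper around wcslib). The file name is a placeholder.
from astropy.io import fits
from astropy.wcs import WCS

with fits.open('science.fits') as hdulist:
    wcs = WCS(hdulist[0].header)

# Convert the detector centre (pixel coordinates) to sky coordinates
ra, dec = wcs.all_pix2world(1024.0, 1024.0, 1)   # 1 = FITS (1-based) convention
print(ra, dec)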

Checksum convention

The CHECKSUM and DATASUM keywords embedded in the FITS header are used to verify the integrity of the HDU.

Type  Keyword   Example         Explanation
S     CHECKSUM  ADFASASDLIEXV   HDU checksum
S     DATASUM                   Data unit checksum

Raw Image data products

Readout modes

Single

In single mode, the detector is read out after a reset. After the readout, the detector is reset again. This readout mode has limited utility and is meant for engineering. The frame contains a single HDU. The data section of the HDU contains the 2048x2048 dataframe. In the header, the following keywords are set:

READPROC = F / The frame has been preprocessed after readout
READMODE = SINGLE / The readmode used to acquire the image

Correlated double sampling

In correlated double sampling, the detector is reset, read immediately after the reset, and read again after the programmed exposure time. The frame contains a single HDU. The data section of the HDU contains a 2048x2048x2 dataframe. The first layer contains the readout after reset and the second the readout after the exposure time. In the header, the following keywords are set:

READPROC = F / The frame has been preprocessed after readout
READMODE = CDS / The readmode used to acquire the image

Fowler

Fowler mode is an extension of CDS. The detector is reset, then read n times, exposed, and then read again n times. The exposure time in this case is equal to the time between correlated reads. The frame contains a single HDU. The data section of the HDU contains a 2048x2048x2n dataframe. The first n layers contain the readouts after reset and the second n the readouts after the exposure time. In the header, the following keywords are set:

READPROC = F / The frame has been preprocessed after readout
READMODE = FOWLER / The readmode used to acquire the image
READSAMP = 2*n / Number of samples taken
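As an illustration only (this is not the pipeline's actual preprocessing), a correlated double sampling cube with the layout described above can be collapsed to a single 2048x2048 signal frame by subtracting the post-reset readout from the post-exposure readout:

# Sketch: collapse a raw CDS cube (2048x2048x2 as described above) to a single
# frame. The file name is a placeholder and the layer ordering is assumed to
# follow the usual astropy convention (FITS third axis becomes the first
# numpy axis).
from astropy.io import fits

with fits.open('raw_cds.fits') as hdulist:
    cube = hdulist[0].data.astype(float)   # assumed shape: (2, 2048, 2048)

signal = cube[1] - cube[0]                 # exposure readout minus reset readout
fits.writeto('cds_signal.fits', signal, overwrite=True)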

Follow-up-the-ramp

In ramp mode, the exposure time is sampled n times. The frame contains a single HDU. The data section of the HDU contains a 2048x2048xn dataframe. Each layer contains the n-th readout after reset. In the header, the following keywords are set:

READPROC = F / The frame has been preprocessed after readout
READMODE = RAMP / The readmode used to acquire the image
READSAMP = n / Number of samples taken

Image types

Bias
Dark
Flat
Target

Image data products

These data products are saved to disk as FITS files. PyEmir makes use of the FITS headers to record information about the data processing. This information may be recorded using other methods as well, such as the GTC Database or the standalone Pontifex database. The following headers are included in all image data products and record information about the version of Numina and the name and version of the recipe used.

NUMXVER = '0.7.0 ' / Numina package version
NUMRNAM = 'DitheredImageRecipe' / Numina recipe name
NUMRVER = '0.1.0 ' / Numina recipe version
NUMTYP = 'TARGET ' / Data product type

HISTORY keywords may also be used, but the information in these keywords may not be easily indexed.

Master Bias frames

Bias frames are produced by the recipe BiasRecipe. Each bias frame is a multiextension FITS file with the following extensions.

Extension name  Type     Version  Contents
PRIMARY         Primary           The bias level
VARIANCE        Image    1        Variance of the bias level obtained from the input frames
VARIANCE        Image    2        Variance of the bias level measured on the result frame
MAP             Image             Number of pixels used to compute the bias level

Master bias frames are represented by MasterBias.
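A hedged sketch of how such a multiextension product could be examined with astropy, following the extension names and versions in the table above; the file name is a placeholder, and indexing the extensions by (EXTNAME, EXTVER) is an assumption about how they are labelled.

# Sketch: inspect a master bias product using the extension layout above.
# 'master_bias.fits' is a placeholder name; EXTNAME/EXTVER labelling assumed.
from astropy.io import fits

with fits.open('master_bias.fits') as hdulist:
    bias = hdulist['PRIMARY'].data            # bias level
    var_input = hdulist['VARIANCE', 1].data   # variance from the input frames
    var_result = hdulist['VARIANCE', 2].data  # variance measured on the result
    npix = hdulist['MAP'].data                # pixels used per position
    print(bias.mean(), var_input.mean(), var_result.mean(), npix.min())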

Master Dark frames

Master dark frames are produced by the recipe DarkRecipe. Each dark frame is a multiextension FITS file with the following extensions.

Extension name  Type     Version  Contents
PRIMARY         Primary           The dark level
VARIANCE        Image    1        Variance of the dark level obtained from the input frames
VARIANCE        Image    2        Variance of the dark level measured on the result frame
MAP             Image             Number of pixels used to compute the dark level

Master dark frames are represented by MasterDark.

1.7 High Level description of the EMIR Data Reduction Pipeline

author: Nicolás Cardiel
revision: 1
date: Dec 11, 2018

Introduction

This is an overall description of the relevant processes involved in the Data Reduction Pipeline of EMIR (basically from the point of view of the astronomers). In this sense, the information contained here follows the contents of the EMIR Observing Strategies document.

The reason for a data reduction

The data reduction process, aimed at minimizing the impact of data acquisition imperfections on the measurement of data properties with a scientific meaning for the astronomer, is typically performed by means of arithmetical manipulations of data and calibration frames. The imperfections are usually produced by non-idealities of the image sensors: temporal noise, fixed pattern noise, dark current, and spatial sampling, among others. Although appropriate observational strategies can greatly help in reducing the sources of data biases, the unavoidably limited observation time that can be spent on each target determines the maximum signal-to-noise ratio achievable in practice.

Sources of errors

Common sources of noise in image sensors are separated into two categories:

Random noise: This type of noise is temporally random (it is not constant from frame to frame). Examples of this kind of noise are the following:

Shot noise: It is due to the discrete nature of electrons. Two main phenomena contribute to this source of error: the random arrival of photo-electrons at the detector (shot noise of the incoming radiation), and the thermal generation of electrons (shot noise of the dark current). In both cases, the noise statistical distributions are well described by a Poisson distribution.

Readout noise: Also known as floor noise, it gives an indication of the minimum resolvable signal (when dark current is not the limiting factor), and accounts for the amplifier noise, the reset noise, and the analog-to-digital converter noise.

Pattern noise: It is usually similar from frame to frame (i.e. it is stable on longer timescales), and cannot be reduced by frame averaging. Pattern noise is also typically divided into two subtypes:

Fixed Pattern Noise (FPN): This is the component of pattern noise measured in the absence of illumination. It is possible to compensate for FPN by storing the signal generated under zero illumination and subtracting it from subsequent signals when required.

Photo-Response Non-Uniformity (PRNU): It is the component of pattern noise that depends on the illumination (e.g. gain non-uniformity). A first approximation is to assume that its contribution is a (small) fraction, f_PRNU, of the number of photo-electrons, N_e. Under this hypothesis, and considering in addition only the shot noise due to photo-electrons, the resulting variance of the combination of both sources of noise would be expressed as

N_e + (f_PRNU · N_e)^2

Thus, the worst case is obtained when N_e approaches the pixel full-well capacity.

It is important to note that the correction of data biases, like the FPN, also constitutes, by itself, a source of random error, since it is performed with the help of a limited number of calibration images. In the ideal case, the number of such images should be large enough to guarantee that these new error contributors are negligible in comparison with the original sources of random error.

The treatment of errors in the data reduction process

Three methods to quantify random errors

In a classic view, a typical data reduction pipeline can be considered as a collection of filters, each of which transforms input images into new output images after performing some kind of arithmetic manipulation, making use of additional measurements and calibration frames when required. Under this picture, three different approaches can in principle be employed to determine random errors in completely reduced images.

Comparison of independent repeated measurements

This is one of the simplest and most straightforward ways to estimate errors, since, in practice, errors are not computed nor handled during the reduction procedure, but through the comparison of the end products of the data processing. The only requirement is the availability of a not too small number of independent measurements. Although even the flux collected by each independent pixel in a detector can be considered as such (for example when determining the sky flux error in direct imaging), in most cases this method requires the comparison of different frames. For that reason, and given that for many purposes it may constitute an extremely expensive method in terms of observing time, its applicability in a general situation seems rather unlikely.
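In its simplest form this approach amounts to measuring the same quantity independently on each frame and taking the scatter of the results as the error estimate. The toy sketch below, with invented numbers, only illustrates the idea:

# Toy sketch: estimate the random error of a measured quantity from the
# scatter of independent repeated measurements (invented values).
import numpy as np

rng = np.random.default_rng(1)
true_flux = 100.0
fluxes = true_flux + rng.normal(scale=10.0, size=20)   # 20 independent frames

flux = fluxes.mean()
error = fluxes.std(ddof=1) / np.sqrt(len(fluxes))      # error of the mean
print(flux, error)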

First principles and brute force: error bootstrapping

Making use of the knowledge concerning how photo-electrons are generated (expected statistical distribution of photon arrival at each pixel, detector gain and read-out noise), it is possible to generate an error image associated with each raw-data frame. In this sense, typically one can compute such an error image (in number of counts, ADU, analog-to-digital units) as

σ_A(i,j)^2 = (1/g) · A(i,j) + [f_PRNU · A(i,j)]^2 + RN^2(i,j)

where A(i,j) is the signal (after the bias-level subtraction) in the pixel (i,j) of a given two-dimensional image (in ADU), g is the gain of the A/D converter (in e-/ADU), f_PRNU is the photo-response non-uniformity factor discussed above, and RN is the read-out noise (in ADU). Note that the apparent dimensional inconsistency of the previous expression is not real, and arises from the fact that one of the properties of the Poisson distribution is that its variance is numerically equal to the mean expected number of events.

By means of error bootstrapping via Monte Carlo simulations, simulated initial data frames can be generated and completely reduced as if they were real observations. In order to achieve this task, it is possible to use

A_simul(i,j) = A(i,j) + σ_A(i,j) · sqrt(-2 · log(1 - z_1)) · cos(2π · z_2)

where A_simul(i,j) is a new instance of the initial raw-data frame, and z_1 and z_2 are two random numbers in the range [0,1). Note that the second term on the right hand side of the previous expression introduces Gaussian noise in each pixel. The comparison of the measurements performed over the whole set of reduced simulated observations then provides a good estimation of the final errors. However, although this method overcomes the problem of wasting observing time, it can also be terribly expensive, but now in terms of computing time.
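The bootstrapping idea can be sketched with numpy as follows. The gain, read-out noise and f_PRNU values are invented for illustration, and a Gaussian random generator is used in place of the explicit Box-Muller pair of uniform deviates shown above (the two are equivalent):

# Sketch of Monte Carlo error bootstrapping: generate synthetic instances of a
# raw frame from its first-principles error image and "reduce" each one.
# Gain, read-out noise and f_PRNU are illustrative values only.
import numpy as np

def error_image(data, gain=3.0, f_prnu=0.01, ron=4.0):
    """Variance (ADU^2) from shot noise, PRNU and read-out noise."""
    return data / gain + (f_prnu * data) ** 2 + ron ** 2

def simulate(data, sigma2, nsim=100, seed=0):
    """Draw nsim Gaussian realizations of the frame (equivalent to Box-Muller)."""
    rng = np.random.default_rng(seed)
    return data + rng.normal(size=(nsim,) + data.shape) * np.sqrt(sigma2)

raw = np.full((64, 64), 1000.0)            # toy bias-subtracted frame (ADU)
sims = simulate(raw, error_image(raw))
# "Reduce" each realization (here just a mean flux) and compare the spread
fluxes = sims.mean(axis=(1, 2))
print(fluxes.std())                        # bootstrap estimate of the error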

First principles and elegance: parallel reduction of data and error frames

Instead of wasting either observing or computing time, it is also possible to feed the data reduction pipeline with both the original raw-data frame and its associated error frame (computed from first principles), and proceed only once throughout the whole reduction process. In this case every single arithmetic manipulation performed on the data image must be translated, using the law of propagation of errors, into parallel manipulations of the error image. Unfortunately, typical astronomical data reduction packages (e.g. IRAF, MIDAS, etc.) do not consider random error propagation as a default operation and, thus, some kind of additional programming is unavoidable.
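A minimal sketch of this parallel data/variance reduction, assuming numpy arrays for the frames and uncorrelated errors; the propagation rules are the standard ones for subtraction and division:

# Sketch: propagate a variance frame in parallel with the data frame through
# dark subtraction and flat-field division (uncorrelated errors assumed).
import numpy as np

def subtract_dark(data, var, dark, var_dark):
    """I - Dark, with var(I - Dark) = var(I) + var(Dark)."""
    return data - dark, var + var_dark

def divide_flat(data, var, flat, var_flat):
    """I / Flat, with relative variances added in quadrature."""
    out = data / flat
    var_out = out ** 2 * (var / data ** 2 + var_flat / flat ** 2)
    return out, var_out

# Toy usage with invented numbers
data = np.array([[1000.0, 1200.0]]); var = data / 3.0        # raw frame (ADU)
dark = np.full_like(data, 50.0);     var_dark = np.full_like(data, 4.0)
flat = np.array([[0.98, 1.02]]);     var_flat = np.full_like(data, 1e-4)

d1, v1 = subtract_dark(data, var, dark, var_dark)
d2, v2 = divide_flat(d1, v1, flat, var_flat)
print(d2, np.sqrt(v2))    # reduced frame and its 1-sigma error frame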

Error correlation: a real problem

Although each of the three methods described above is suitable for being employed in different circumstances, the third approach is undoubtedly the one that, in practice, can be used in the most general situation. In fact, once the appropriate data reduction tool is available, the parallel reduction of data and error frames is the only way to proceed when observing or computing time demands are prohibitively high. However, due to the unavoidable fact that the information collected by detectors is physically sampled in pixels, this approach collides with a major problem: errors start to be correlated as soon as one introduces image manipulations involving rebinning or non-integer pixel shifts of the data. A naive use of the analysis tools would neglect the effect of covariance terms, leading to dangerously underestimated final random errors. Actually, this is likely the most common situation since, initially, the classic reduction operates as a black box, unless specifically modified otherwise. The figure below shows a very simple example which illustrates this problem. Unfortunately, as soon as one accumulates a few reduction steps involving an increase of correlation between adjacent pixels (e.g. image rectification when correcting for geometric distortions, wavelength calibration onto a linear scale, etc.), the number of covariance terms starts to increase too rapidly to make it feasible to store and propagate all the new coefficients for every single pixel of an image.

In this simple example we illustrate the problem of error correlation when reducing data. Assuming we have a linear detector, composed of a set of consecutive pixels, in an ideal situation we are considering that all the signal

of a given object (100 +/- 10 counts) is received in a single pixel (we are ignoring additional sources of error, like read-out noise). However, a small shift in the focal plane may imply that the observed signal is distributed over two adjacent pixels. After reducing the data while restoring the image, and propagating the observed errors in each pixel, the error in the total flux F is computed from the errors in each pixel following the law of combination of errors. But if we use the incomplete expression, neglecting the covariance terms, we get an unrealistic (and underestimated) error.
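To put illustrative numbers on this (an assumed, fully correlated limiting case, not taken directly from the original figure): suppose the restored flux is split as F = F_1 + F_2 with F_1 = F_2 = 50 counts, and both values derive from the same underlying measurement, so that σ_1 = σ_2 = 5 counts and cov(F_1, F_2) = 25 counts^2. The complete propagation,

σ_F^2 = σ_1^2 + σ_2^2 + 2·cov(F_1, F_2) = 25 + 25 + 50 = 100,

recovers the original σ_F = 10 counts, whereas dropping the covariance term gives σ_F = sqrt(25 + 25) ≈ 7.1 counts, a spurious reduction of the error.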

A modified reduction procedure

Obviously, the problem can be circumvented if one prevents its emergence, i.e. if one does not allow the data reduction process to introduce correlation into neighbouring pixels before the final analysis. In other words, if all the reduction steps that lead to error correlation are performed in a single step during the measurement of the image properties with a scientific meaning for the astronomer, there are no previous covariance terms to be concerned with. Whether this is actually possible or not may depend on the type of reduction steps under consideration. In any case, a change in the philosophy of the classic reduction procedure can greatly help in alleviating the problem. The core of this change consists in considering the reduction steps that originate pixel correlation not necessarily as filters that take input images and generate new versions of them after applying some kind of arithmetic manipulation, but as filters that properly characterize the image properties without modifying those input images. More precisely, the reduction steps can be segregated into two groups:

Simple filters, which do not require data rebinning nor non-integer pixel shifts of the data.

Complex filters, those capable of introducing error correlation between adjacent pixels.

The former may be operated as in a classic reduction, since their application does not introduce covariance terms. However, the complex steps are only allowed to determine the required image properties that one would need to actually perform the correction. For the more common situations, these characterizations may be simple polynomials (in order to model geometric distortions, non-linear wavelength calibration scales, differential refraction dependence with wavelength, etc.). Under this view, the end product of the modified reduction procedure is constituted by a slightly modified version of the raw data frames after quite simple arithmetic manipulations (denoted as raw data and raw errors in the previous figure), and by an associated collection of image characterizations.

Modus Operandi

Clearly, at any moment it is possible to combine the result of the partial reduction after all the linkable simple steps with the information achieved through all the characterizations derived from the complex steps, to obtain the same result as in a classic data reduction (thick line in the previous figure). However, this is not the only option. Instead of trying to obtain completely reduced images ready for starting the analysis work, one can directly feed a clever analysis tool with the end products of the modified reduction procedure, as depicted in this figure:

Obviously, this clever analysis tool has to perform its task taking into account that some reduction steps have not been performed. For instance, if one considers the study of a 2D spectroscopic image, the analysis tool should use the information concerning geometric distortions, wavelength calibration scale, differential refraction, etc., to obtain, for example, an equivalent width through a measurement in the partially reduced (uncorrected for geometric distortions, wavelength calibration, etc.) image.

Image distortions and errors

Interestingly, the most complex reduction steps are generally devoted to compensating for image imperfections that can be associated with geometric distortions. For illustration, and using the typical problems associated with the reduction of long-slit spectroscopy, we can summarize the most common image distortions in the following types:

Optical distortion: Along the slit (spatial) direction, this distortion would be equivalent to a geometric distortion in imaging mode. Furthermore, this distortion also includes any possible spatial distortion of the spectra in the detector (i.e. spectra of point-like objects not following a line parallel to the detector rows) which is not due to the slit in use (orientation or shape defects; see below) or to refraction effects. The way to deduce the distortion map (note that it is a 3D map, with the third dimension accounting for the distortion of the spectra) is by observing point-like objects at different positions of the focal plane. This can be accomplished by observing lamp arc spectra through special masks with evenly distributed holes along a focal plane column.

Slit distortion: This distortion accounts for the potential distortions introduced by the use of an imperfect slit. This includes: a. small variations in the slit width along the slit direction and, b. the difference in slit orientation with respect to the vertical direction in the detector plane.

Wavelength distortion: Commonly referred to as wavelength calibration, this distortion accounts for the fact that the relation between pixels and actual wavelengths along the dispersion direction, after the removal of the two previous distortions, is typically not linear.

Differential refraction distortion: In the absence of the three previous distortions, the dependence of atmospheric dispersion on wavelength causes the spectrum of a point-like source not to follow a straight line parallel to the dispersion direction. This effect depends mainly on the zenith angle of the observation, the wavelength range, and the difference between the slit position angle and the parallactic angle (the distortion being maximum when both angles are the same, and zero if they are orthogonal). For these reasons, it is not possible to derive a general distortion map for a given instrument setup; this kind of distortion must be corrected individually for each observed frame.

To accomplish a proper random error treatment, as previously described, it is necessary to manipulate the data using a new, distorted system of coordinates that accounts for all the image distortions present in the data. These distortions should be easily mapped with the help of calibration images. The new coordinate system provides the correspondence between the expected scientific coordinate system (e.g. wavelength and 1D physical size, in spectroscopic observations) and the observed coordinate system (physical pixels). It is important to highlight that, in this situation, the error estimation should not be a complex task, since the analysis tool is supposed to be handling uncorrelated pixels.

The bottom line that can be extracted from the comparison of the different methods to estimate random errors in data reduction processes is the relevance of delaying the arithmetic manipulations involving the rebinning of the data until their final analysis.

Note: In the case of EMIR, we will use the parallel reduction of data and error frames, trying to combine the arithmetical manipulations implying signal rebinning into as few steps as possible. In this way we hope to minimize the impact of error correlation.
If we have enough time, we can try to create software tools that perform the kind of clever analysis we have previously described.

Basic observing modes and strategies

EMIR offers two main observing modes:

imaging: FOV of 6.67 x 6.67 arcmin, with a plate scale of 0.2 arcsec/pixel. Imaging can be done through NIR broad-band filters Z, J, H, K, K_s, and a set of narrow-band filters (TBC).

multi-object spectroscopy: multi-slit mask with a FOV of 6.67 x 4 arcmin. Long-slit spectroscopy can be performed by placing the slitlets in adjacent positions.

We are assuming that a particular observation is performed by obtaining a set of images, each of which is acquired at different positions referred to as offsets from the base pointing. In this sense, and following the notation used in EMIR Observing Strategies, several situations are considered:

Telescope

Chopping (TBD if this option will be available): achieved by moving the GTC secondary mirror. It provides a 1D move of the order of 1 arcmin. The purpose is to isolate the source flux from the sky background flux by first measuring the total (source + background) flux and then subtracting the signal from the background only.

DTU Offsetting: the Detector Translation Unit allows 3D movements of less than 5 arcsec. The purpose is the same as in the chopping case, when the target is point-like. It might also be used to defocus the target for photometry or other astronomical uses.

Dither: carried out by pointing to a number of pre-determined sky positions, with separations of the order of 25 arcsec, using the GTC primary or secondary mirrors, or the EMIR DTU, or the Telescope. The purpose of this observational strategy is to avoid saturating the detector, to allow the removal of cosmetic defects, and to help in the creation of a sky frame.

Nodding: pointing the Telescope alternately between two or more adjacent positions on a 1D line, employing low frequency shifts and typical distances of the order of slitlet lengths (it plays the same role as chopping in imaging).

Jitter: in this case the source falls randomly around a position in a known distribution, with shifts typically below 10 arcsec, to avoid cosmetic defects.

Imaging Mode

Inputs

- Science frames
- Offsets between them
- Master Dark
- Bad pixel mask (BPM)
- Non-linearity correction polynomials
- Master flat
- Master background
- Exposure Time (must be the same in all the frames)
- Airmass for each frame
- Detector model (gain, RN)
- Average extinction in the filter

In near-infrared imaging it is important to take into account that the variations observed in the sky flux of a given image are due to real spatial variations of the sky brightness along the field of view, the thermal background, and intrinsic flatfield variations. The master flatfield can be computed from the same science frames (for small targets) or from adjacent sky frames. This option, however, is not the best one, since the sky brightness is basically produced by a finite subset of bright emission lines, whose SED is quite different from that of a continuous source. For this reason, most of the time the preferred master flatfield should be computed from twilight flats. On the other hand, systematic effects are probably more likely in this second approach. It will probably be necessary to test both alternatives.

The description that follows corresponds to the method employed when computing the master flatfield from the same set of night images, and is based on the details given in the SCAM reduction document, corresponding to the reduction of images obtained with NIRSPEC at Keck II.

A typical reduction scheme for imaging can be the following:

Data modelling (if appropriate/possible) and variance frame creation from first principles: all the frames

Correction for non-linearity: all the frames

Data: I_linear(x,y) = I_observed(x,y) · Pol_linearity
Variances: σ_linear^2(x,y) = [σ_model(x,y) · Pol_linearity]^2 + [I_observed(x,y) · ErrorPol_linearity]^2

Dark correction: all the frames

Data: I_dark(x,y) = I_linear(x,y) - MasterDark(x,y)
Variances: σ_dark^2(x,y) = [σ_linear(x,y)]^2 + [ErrorMasterDark(x,y)]^2

Master flat and object mask creation: a loop starts

First iteration: computing the object mask, refining the telescope offsets, QC of the frames. No object mask is used (it is going to be computed). All the dark-corrected science frames are used. No variances computation. The BPM is used.

a. Flat computation (1st order): Flat_1st(x,y) = Comb[I_dark(x,y)] / Norm
   Combination using the median (alternatively, using the mean). No offsets taken into account. Normalization to the mean.

b. Flat correction (1st order): I_flat_1st(x,y) = I_dark(x,y) / Flat_1st(x,y)

c. Sky correction (1st order): I_sky_1st(x,y) = I_flat_1st(x,y) - Sky
   Sky is computed and subtracted in each array channel (mode of all the pixels in the channel), in order to avoid time-dependent variations of the channel amplifiers. The BPM is used for the above sky level determination.

d. Science image (1st order): Science_1st(x,y) = Comb[I_sky_1st(x,y)]
   Combination using the median, taking telescope offsets into account. Extinction correction is applied to each frame before combination: k·X, X being the airmass. Rejection of bad pixels during the combination (alternatively, a sigma-clipping algorithm).

e. Object Mask (1st order): SExtractor[Science_1st(x,y)] -> ObjMask_1st(x,y)
   High DETECT_THRESH (for detecting only the brightest objects). The saturation limit must be carefully set (detected objects must not be saturated).

f. Offsets refinement:
   Objects are also found in the sky-corrected frames: SExtractor[I_sky_1st(x,y)]. All the objects detected in the combined science image are also identified in each sky-corrected frame. To do that, the position of each source from the combined image is converted into positions in the reference system of each frame I_sky_1st(x,y). The telescope offsets are used for a first estimation of the source position in the frame. A TBD cross-correlation algorithm finds the correct source position within a window of size S around the estimated position. The new, improved offsets are computed for each source in each frame. The differences between the improved offsets (OFFX, OFFY) and the telescope (nominal) offsets (OFFX_tel, OFFY_tel) are computed for each object in each frame. The differences between both sets of offsets are plotted for all the objects vs. object number, ordered by brightness.

   The mean values of these differences (weighted by object brightness) are computed, approximating them to integer values. These values represent the average displacement of the true offsets of the frame relative to the nominal telescope offsets. If the estimated refined offsets are very different from the nominal values, the Science_1st(x,y) image is computed again, using the refined offset values. A loop runs from step d) to f), until the offset corrections are less than a TBD threshold value for the corresponding frame.

g. Quality Control for the science frames:
   The brightest objects detected in ObjMask_1st(x,y) are selected (N ~ 5 objects). They must appear in more than two frames. The FLUX_AUTO and the FWHM of each selected object are computed in each frame. The FLUX_AUTO·kX and FWHM are plotted vs. frame number. The median values of FLUX_AUTO·kX and FWHM along all the frames are computed for each object, as well as their standard deviations. A sigma-clipping algorithm will select those frames with more than N/2 objects (TBD) lying +/- 1 sigma above/below the median value of FLUX_AUTO·kX. These frames will be flagged as non-adequate for the creation of the final science frame. All those frames with FWHM lying n times sigma above their median value or m times sigma below it are also flagged as non-adequate. Notice that m and n must be different (FWHM values better than the median must be allowed). The non-adequate frames are not used for generating the final science frame. They will be avoided in the rest of the reduction. A QC flag will be assigned to the final science image, depending on the number of frames finally used in the combination. E.g. QC_GOOD if between 90-100% of the original set of frames are adequate, QC_FAIR between 70-90%, QC_BAD below 70% (the precise numbers TBD).

Second iteration

ObjMask_1st(x,y) is used for computing the flatfield and the sky. Only those dark-corrected science frames that correspond to adequate frames are used. No variances computation. The BPM is also used.

a. Flat computation (2nd order): Flat_2nd(x,y) = Comb[I_dark(x,y)] / Norm
   Combination using the median (alternatively, using the mean). The first order object mask is used in the combination. No offsets are taken into account in the combination, although they are used for translating positions in the object mask to positions in each individual frame. Normalization to the mean.

b. Flat correction (2nd order): I_flat_2nd(x,y) = I_dark(x,y) / Flat_2nd(x,y)

c. Sky correction (2nd order): I_sky_2nd(x,y) = I_flat_2nd(x,y) - Sky_new(x,y)
   Sky_new is computed as the average of m (~6, TBD) I_flat_2nd(x,y) frames, near in time to the considered frame, taking into account the first order object mask and the BPM. An array storing the number of values used for computing the sky in each pixel is generated (weights array). If no values are adequate for computing the sky in a certain pixel, a zero is stored at the corresponding position in the weights array. The sky value at these pixels is obtained through interpolation with the neighbouring pixels.

d. Science image (2nd order): Science_2nd(x,y) = Comb[I_sky_2nd(x,y)]

   Combination using the median, taking the refined telescope offsets into account. An extinction correction, depending on kX (X being the airmass), is applied to each frame before combination. Rejection of bad pixels during the combination (alternatively, a sigma-clipping algorithm).

e. Object mask (2nd order): SExtractor[Science_2nd(x, y)] → ObjMask_2nd(x, y)
   Lower DETECT_THRESH. The saturation limit must be carefully set.

Third iteration

ObjMask_2nd(x, y) is used in the combinations. Only those dark-corrected science frames that correspond to adequate frames are used. Variance frames are computed. The BPM is also used.

Additional iterations: stop the loop when a suitable criterion applies (TBD).

Multi-Object Spectroscopy Mode

Inputs:

Science frames
Offsets between them
Master Dark
Bad pixel mask (BPM)
Non-linearity correction polynomials
Master spectroscopic flat
Master spectroscopic background
Master wavelength calibration
Master spectrophotometric calibration
Exposure time (must be the same in all the frames)
Airmass for each frame
Extinction correction as a function of wavelength
Detector model (gain, RN)

In the case of EMIR, the reduction of the Multi-Object Spectroscopy observations will in practice be carried out by extracting the individual aligned slits (not necessarily single slits), and reducing them as if they were traditional long-slit observations in the near infrared. Most of the steps to be applied to these pseudo long-slit subimages are those graphically depicted in this figure.
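The non-linearity correction and its variance propagation written above can be illustrated with a minimal numpy sketch. It assumes that Pol_linearity acts as a per-pixel multiplicative factor obtained by evaluating the non-linearity polynomial at the observed counts, with the coefficients given in the order expected by numpy.polyval (highest degree first); the function and argument names are illustrative and do not correspond to the actual pipeline code::

    import numpy as np

    def correct_nonlinearity(observed, sigma_model, pol_coeff, pol_error_coeff):
        """Apply I_linear = I_observed * Pol_linearity and propagate the variance."""
        factor = np.polyval(pol_coeff, observed)              # Pol_linearity per pixel
        err_factor = np.polyval(pol_error_coeff, observed)    # ErrorPol_linearity per pixel
        linear = observed * factor
        var_linear = (sigma_model * factor) ** 2 + (observed * err_factor) ** 2
        return linear, var_linear

The dark correction then follows trivially by subtracting the master dark from the returned linear frame and adding the squared master-dark error to the returned variance.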


The details are given in Chapter 3 of Cardiel's thesis (1999). The key difference in the infrared observations is the sky subtraction, which will depend on the observational strategy. Basic steps must include:

Data modelling (if appropriate/possible) and variance frame creation from first principles: all the frames

Correction for non-linearity: all the frames

Data: I_linear(x, y) = I_observed(x, y) · Pol_linearity

Variances: σ²_linear(x, y) = [σ_model(x, y) · Pol_linearity]² + [I_observed(x, y) · ErrorPol_linearity]²

Dark correction: all the frames

Data: I_dark(x, y) = I_linear(x, y) − MasterDark(x, y)

Variances: σ²_dark(x, y) = [σ_linear(x, y)]² + [ErrorMasterDark(x, y)]²

Flatfielding: distinguish between high-frequency (pixel-to-pixel) and low-frequency (overall response and slit illumination) corrections. Lamp flats are adequate for the former and twilight flats for the latter. Follow the corresponding section.

Detection and extraction of slits: apply the Border_Detection algorithm, from the frames themselves or from flatfields.

Cleaning:
Single spectroscopic image: sigma-clipping algorithm removing the local background in pre-defined direction(s).
Multiple spectroscopic images: sigma-clipping from the comparison between frames.

Wavelength calibration and C-distortion correction of each slit. Double-check with available sky lines.

Sky subtraction (will the number of sources per slit be allowed to be > 1?):
Subtraction using sky signal at the borders of the same slit.
Subtraction using sky signal from other slit(s), not necessarily adjacent.

Spectrophotometric calibration of each slit, using the extinction correction curve and the master spectrophotometric calibration curve.

Spectra extraction: define optimal, average, peak, FWHM.

1.8 Engineering Recipes

These recipes are devoted to calibrating the EMIR detector, a Rockwell HAWAII-2 unit.

1.8.1 Cosmetics

Mode: Engineering
Recipe class: CosmeticsRecipe
Input class: CosmeticsRecipeInput
Result class: CosmeticsRecipeResult

Detector cosmetics include: dead and hot pixels, inhomogeneities in the pixel-to-pixel response, stripe patterns, and bias in the least significant bit.

Dead and hot pixels

Dead pixels have low response independently of the brightness of the incident light. Hot pixels have high response even in low-brightness conditions. Both types of pixels are related to problems in the detector electronics.

Dead and hot pixels are detected using two flat-illuminated images of different exposure times. The ratio of these images would be a constant, except in dead or hot pixels, which will deviate significantly. Assuming a normal distribution for the vast majority of the pixels in the ratio image, we flag as hot pixels those that lie a given number of standard deviations above the center of the distribution. Dead pixels are those that lie a given number of standard deviations below the center of the distribution.

The center of the distribution is estimated as the median of the full image. The standard deviation of the distribution of pixels is computed by obtaining the percentiles nearest to the pixel values corresponding to nsig in the normal CDF. The standard deviation is then the distance between these pixel values divided by two times nsig. The ratio image is then normalized with this standard deviation. (A minimal sketch of this procedure is given below.)

In the image below we plot the histogram of the ratio image (with the median subtracted). The red curve represents a normal distribution with mean 0 and sigma computed from the 1-sigma percentiles. The normal distribution is plotted up to 4 sigma. Overplotted in green are the regions at the 6-sigma level. These points would be flagged as bad pixels in the final output mask.
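A minimal sketch of the procedure just described (ratio of the two flats, robust sigma from the percentiles of a normal distribution, and flagging with lowercut/uppercut thresholds) could look as follows. It is an illustration only, not the CosmeticsRecipe code; the nsig value and the return convention are assumptions::

    import numpy as np
    from scipy.stats import norm

    def find_dead_and_hot(flat_short, flat_long, nsig=1.0, lowercut=4.0, uppercut=4.0):
        """Return the normalized ratio image and a mask (non-zero = bad pixel)."""
        ratio = flat_long / flat_short
        center = np.median(ratio)
        # percentiles corresponding to +/- nsig in the normal CDF
        plow, phigh = 100.0 * norm.cdf(-nsig), 100.0 * norm.cdf(nsig)
        qlow, qhigh = np.percentile(ratio, [plow, phigh])
        sigma = (qhigh - qlow) / (2.0 * nsig)
        normalized = (ratio - center) / sigma
        mask = np.zeros(ratio.shape, dtype=np.uint8)
        mask[normalized < -lowercut] = 1   # dead pixels
        mask[normalized > uppercut] = 1    # hot pixels
        return normalized, mask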

Note: The procedure is similar to the algorithm of the IRAF task ccdmask.

Requirements

Name         Type       Default   Meaning
'lowercut'   Parameter  4.0       Values below this sigma level are flagged as dead pixels
'uppercut'   Parameter  4.0       Values above this sigma level are flagged as hot pixels

Products

'ratio' contains the normalized ratio of the two flat images. 'mask' contains a frame with zero for valid pixels and non-zero for invalid pixels.

Name      Type
'ratio'   EmirDataFrame
'mask'    EmirDataFrame

1.9 Auxiliary Recipes

1.9.1 Bias Image

Mode: Bias Image
Recipe class: BiasRecipe
Input class: BiasRecipeInput
Result class: BiasRecipeResult

The actions to calibrate the zero (pedestal) level of the detector, plus the associated control electronics, by taking images with null integration time.

Requirements

Name           Type     Default   Meaning
'master_bpm'   Product  NA        Master BPM frame

Procedure

The frames in the observed block are stacked together, using their median as the final result. The variance of the result frame is computed using two different methods. The first method computes the variance across the pixels in the different frames stacked. The second method computes the variance in each channel of the result frame. (A minimal sketch of the first method is given below.)

Products

Name          Type
'biasframe'   MasterBias
'stats'       ChannelLevelStatistics
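The median stacking and the first variance method can be written in a couple of numpy lines. This is a sketch with assumed names, not the BiasRecipe implementation; the per-channel statistics of the second method are not shown::

    import numpy as np

    def combine_bias(frames):
        """frames: sequence of 2-D raw bias exposures."""
        stack = np.asarray(frames, dtype=float)
        master_bias = np.median(stack, axis=0)
        # first method: variance across the pixels of the stacked frames
        variance = np.var(stack, axis=0, ddof=1)
        return master_bias, variance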

1.9.2 Dark Current Image

Mode: Dark Current Image
Recipe class: DarkRecipe
Input class: DarkRecipeInput
Result class: DarkRecipeResult

The actions to measure the variation of the intrinsic signal of the system by taking images under zero illumination conditions and long integration times.

Requirements

Name            Type     Default   Meaning
'master_bpm'    Product  NA        Master BPM frame
'master_bias'   Product  NA        Master Bias frame

Procedure

The master bias is subtracted from the frames in the observed block, and the frames are then stacked together, using their median as the final result. The variance of the result frame is computed using two different methods. The first method computes the variance across the pixels in the different frames stacked. The second method computes the variance in each channel of the result frame.

Products

Name          Type
'darkframe'   MasterDark
'stats'       ChannelLevelStatistics

1.9.3 Intensity Flat-Field

Mode: Intensity Flat-Field
Recipe class: IntensityFlatRecipe
Input class: IntensityFlatRecipeInput
Result class: IntensityFlatRecipeResult

The required actions to set the TS and EMIR at the configuration from which sky and/or artificial illumination flat-field data acquisition can proceed, and to take data.

Requirements

Name             Type     Default      Meaning
'master_bpm'     Product  NA           Master BPM frame
'master_bias'    Product  NA           Master Bias frame
'master_dark'    Product  NA           Master Dark frame
'nonlinearity'   Product  [1.0, 0.0]   Master non-linearity calibration

Procedure

The master bias and the master dark are subtracted from the frames in the observed block. The frames are corrected for non-linearity. The frames with lamps on and with lamps off are stacked using the median, and then the combined lamps-off frame is subtracted from the lamps-on frame. The result is this subtracted frame, scaled to have a mean value of 1.

Products

Name          Type
'flatframe'   MasterIntensityFlat

1.10 Imaging Recipes

1.10.1 Stare Image

Mode: Stare Image
Recipe class: StareImageRecipe
Input class: StareImageRecipeInput
Result class: StareImageRecipeResult

The effect of recording images of the sky in a given pointing position of the TS.

Requirements

Name                    Type       Default      Meaning
'master_bpm'            Product    NA           Master BPM frame
'master_bias'           Product    NA           Master Bias frame
'master_dark'           Product    NA           Master Dark frame
'nonlinearity'          Product    [1.0, 0.0]   Master non-linearity calibration
'master_intensity_ff'   Product    NA           Master Intensity flat-field frame
'extinction'            Parameter  0.0          Mean atmospheric extinction
'sources'               Parameter  None         List of (x, y) coordinates to measure FWHM
'offsets'               Parameter  None         List of pairs of offsets
'iterations'            Parameter  4            Iterations of the recipe

Procedure

The block of raw frames is processed in several stages. First, the frames are corrected for bias, dark, non-linearity and intensity flat field. For these steps we use the frames given in the requirements from 'master_bpm' to 'master_intensity_ff'.

Note: It is not clear that a master_bias requirement is needed, as it is only used in one readout mode.

If the parameter offsets is None, the recipe computes offset information using the WCS stored in each frame FITS header. If it is defined, the information in offsets is used instead. The size of the result frame is computed using the sizes of the input frames and the offsets between them. New intermediate frames are created by resizing the input frames.

A sky flat is computed using the input frames. Each frame is scaled according to its mean value, and the scaled frames are combined using a sigma-clipping algorithm with low and high rejection limits of 3 sigma. Each input frame is then divided by the sky flat. (A minimal sketch of this sky-flat computation is given below.)

Next, the sky level is estimated in each frame by obtaining its median. The sky value is then subtracted from each frame. The frames (sky-subtracted, flat-fielded and resized) are then stacked. The frames are scaled according to their airmass and the value of 'extinction'. The algorithm used to stack the frames is numina.array.combine.quantileclip() with 10% of the points rejected at both ends of the distribution.

Results

The result of the Recipe is an object of type StareImageRecipeResult. It contains two objects: a FrameDataProduct containing the result frame and a SourcesCatalog containing a catalog of sources.

1.10.2 Nodded/Beam-switched images

Mode: Nodded/Beam-switched images
Recipe class: NBImageRecipe
Input class: NBImageRecipeInput
Result class: NBImageRecipeResult

The effect of recording a series of stare images, with the same acquisition parameters, taken by pointing the TS in cycles between two or more sky positions. Displacements are larger than the EMIR FOV, so the images have no common area. Used for sky subtraction.

Requirements

Name                    Type       Default      Meaning
'master_bpm'            Product    NA           Master BPM frame
'master_bias'           Product    NA           Master Bias frame
'master_dark'           Product    NA           Master Dark frame
'nonlinearity'          Product    [1.0, 0.0]   Master non-linearity calibration
'master_intensity_ff'   Product    NA           Master Intensity flat-field frame
'extinction'            Parameter  0.0          Mean atmospheric extinction
'sources'               Parameter  None         List of (x, y) coordinates to measure FWHM
'offsets'               Parameter  None         List of pairs of offsets
'iterations'            Parameter  4            Iterations of the recipe

Procedure

The block of raw frames contains both sky and target images. They are treated differently at some stages. Sky frames have IMGTYP = 'SKY' in their FITS headers. Target frames have IMGTYP = 'TARGET'.

All the frames are corrected for bias, dark, non-linearity and intensity flat field. For these steps we use the frames given in the requirements from 'master_bpm' to 'master_intensity_ff'.

Note: It is not clear that a master_bias requirement is needed, as it is only used in one readout mode.

Then, an iterative process starts. The number of iterations is controlled by the parameter 'iterations'.
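The sky-flat computation described in the Stare Image procedure above (each frame scaled to its mean, then a 3-sigma clipped combination) can be sketched with astropy.stats.sigma_clip. It is only an illustration under those assumptions, not the recipe code; the same idea applies to the sky flats of the recipes that follow::

    import numpy as np
    from astropy.stats import sigma_clip

    def compute_sky_flat(frames):
        """frames: sequence of 2-D flat-fielded sky exposures."""
        scaled = np.array([f / np.mean(f) for f in frames], dtype=float)
        clipped = sigma_clip(scaled, sigma=3.0, axis=0)   # low/high rejection at 3 sigma
        return np.ma.mean(clipped, axis=0).filled(1.0)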

Base step

Offsets between the target frames are obtained. If the parameter offsets is None, the recipe computes offset information using the WCS stored in each frame FITS header. If it is defined, the information in offsets is used instead. The size of the result frame is computed using the sizes of the target frames and the offsets between them. New intermediate frames are created by resizing the target input frames.

A sky flat is computed using the input sky frames. Each sky frame is scaled according to its mean value, and the scaled frames are combined using a sigma-clipping algorithm with low and high rejection limits of 3 sigma. Each input target frame is then divided by the sky flat.

Next, the sky level is estimated in each target frame by obtaining the median of the nearest sky image. The sky value is then subtracted from each frame. The target frames (sky-subtracted, flat-fielded and resized) are then stacked. The frames are scaled according to their airmass and the value of 'extinction'. The algorithm used to stack the frames is numina.array.combine.quantileclip() with 10% of the points rejected at both ends of the distribution.

Check step

In the next step, several checks are performed on the result image.

The centroids of bright objects are compared between the input target frames and the result frame. This test allows checking whether the offsets are correct, and refining them.

The flux of bright objects is compared between the input target frames and the result frame. This test allows finding target frames with abnormal illumination (due to clouds, for example). The parameter 'check_photometry_levels' marks the different categories into which the frames are classified according to the fraction of the median flux level they reach. The parameter 'check_photometry_actions' allows the user to select the action to take in each category. The allowed actions are 'default', 'warn' and 'reject'. (A minimal sketch of this classification is given at the end of this subsection.)

Warning: The offset-recompute routine is not yet implemented.

Full reduction step

Using the latest available result image (in the first iteration, that of the base step), a segmentation mask is computed. This segmentation mask applies to target frames only.

Note: A segmentation mask for each sky frame is being considered.

The sky flat is applied to the target frames. The sky level for target frames is estimated using the median value of the nearest sky frames in the observed series. We use a number of 'sky_images' frames before and after the current one, never separated by more than 'sky_images_sep_time' minutes. The target frames (sky-subtracted, flat-fielded and resized) are then stacked. The frames are scaled according to their airmass and the value of 'extinction'. The algorithm used to stack the frames is numina.array.combine.quantileclip() with 10% of the points rejected at both ends of the distribution.

This last step is repeated 'iterations' times, the segmentation mask being computed from the result of the previous step.

Results

The result of the Recipe is an object of type NBImageRecipeResult. It contains two objects: a FrameDataProduct containing the result frame and a SourcesCatalog containing a catalog of sources.
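The photometry check described in the check step above can be illustrated with a small helper that classifies each frame according to the fraction of the median flux level it reaches and returns the corresponding action. The mapping between categories and actions shown here (lowest fluxes first) is an assumption made for the example only::

    import numpy as np

    def classify_frames(fluxes, levels=(0.5, 0.8), actions=('reject', 'warn', 'default')):
        """fluxes: flux of the bright objects measured in each target frame."""
        fluxes = np.asarray(fluxes, dtype=float)
        fraction = fluxes / np.median(fluxes)
        # bin 0: below levels[0]; bin 1: between the two levels; bin 2: above levels[1]
        category = np.digitize(fraction, levels)
        return [actions[c] for c in category]

For instance, classify_frames([10.0, 4.0, 9.5]) would flag the second frame, whose flux is well below the median of the series.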

1.10.3 Dithered images

Mode: Dithered images
Recipe class: DitheredImageRecipe
Input class: DitheredImageRecipeInput
Result class: DitheredImageRecipeResult

The effect of recording a series of stare images, with the same acquisition parameters, taken by pointing to a number of sky positions with separations of the order of arcseconds, either by nodding the TS, tilting the TS M2 or shifting the EMIR DTU. Displacements are of the order of several pixels (even fractional). The images share the large majority of the sky positions, so they can be coadded. Used to avoid cosmetic effects and/or to improve the SNR. Superflat and/or supersky frames can be built from the image series.

Requirements

Name                         Type       Default                       Meaning
'master_bpm'                 Product    NA                            Master BPM frame
'master_bias'                Product    NA                            Master Bias frame
'master_dark'                Product    NA                            Master Dark frame
'nonlinearity'               Product    [1.0, 0.0]                    Master non-linearity calibration
'master_intensity_ff'        Product    NA                            Master Intensity flat-field frame
'extinction'                 Parameter  0.0                           Mean atmospheric extinction
'sources'                    Parameter  None                          List of (x, y) coordinates to measure FWHM
'offsets'                    Parameter  None                          List of pairs of offsets
'iterations'                 Parameter  4                             Iterations of the recipe
'sky_images'                 Parameter  5                             Images used to estimate the background before and after the current image
'sky_images_sep_time'        Parameter  10                            Maximum separation time between consecutive sky images in minutes
'check_photometry_levels'    Parameter  [0.5, 0.8]                    Levels to check the flux of the objects
'check_photometry_actions'   Parameter  ['warn', 'warn', 'default']   Actions to take on images

Procedure

The block of raw frames is processed in several stages. First, the frames are corrected for bias, dark, non-linearity and intensity flat field. For these steps we use the frames given in the requirements from 'master_bpm' to 'master_intensity_ff'.

Note: It is not clear that a master_bias requirement is needed, as it is only used in one readout mode.

Then, an iterative process starts. The number of iterations is controlled by the parameter 'iterations'.

Base step

Offsets between the frames are obtained. If the parameter offsets is None, the recipe computes offset information using the WCS stored in each frame FITS header. If it is defined, the information in offsets is used instead. The size of the result frame is computed using the sizes of the input frames and the offsets between them. New intermediate frames are created by resizing the input frames.

A sky flat is computed using the input frames. Each frame is scaled according to its mean value, and the scaled frames are combined using a sigma-clipping algorithm with low and high rejection limits of 3 sigma. Each input frame is then divided by the sky flat.

Next, the sky level is estimated in each frame by obtaining its median. The sky value is then subtracted from each frame. The frames (sky-subtracted, flat-fielded and resized) are then stacked. The frames are scaled according to their airmass and the value of 'extinction'. The algorithm used to stack the frames is numina.array.combine.quantileclip() with 10% of the points rejected at both ends of the distribution.

Check step

In the next step, several checks are performed on the result image.

The centroids of bright objects are compared between the input frames and the result frame. This test allows checking whether the offsets are correct, and refining them.

The flux of bright objects is compared between the input frames and the result frame. This test allows finding frames with abnormal illumination (due to clouds, for example). The parameter 'check_photometry_levels' marks the different categories into which the frames are classified according to the fraction of the median flux level they reach. The parameter 'check_photometry_actions' allows the user to select the action to take in each category. The allowed actions are 'default', 'warn' and 'reject'.

Warning: The offset-recompute routine is not yet implemented.

Full reduction step

Using the latest available result image (in the first iteration, that of the base step), a segmentation mask is computed. The segmentation mask is used to avoid objects when computing a new sky flat. With the frames corrected with the new sky flat, the sky level is estimated. For each frame, we use frames before and after it in the series to compute a median sky, which is subtracted from the frame. We use a number of 'sky_images' frames before and after the current one, never separated by more than 'sky_images_sep_time' minutes. The frames (sky-subtracted, flat-fielded and resized) are then stacked. The frames are scaled according to their airmass and the value of 'extinction'. The algorithm used to stack the frames is numina.array.combine.quantileclip() with 10% of the points rejected at both ends of the distribution.

This last step is repeated 'iterations' times, the segmentation mask being computed from the result of the previous step.

Results

The result of the Recipe is an object of type DitheredImageRecipeResult. It contains two objects: a FrameDataProduct containing the result frame and a SourcesCatalog containing a catalog of sources.

1.10.4 Micro-dithered images

Mode: Micro-dithered images
Recipe class: MicroDitheredImageRecipe
Input class: MicroDitheredImageRecipeInput
Result class: MicroDitheredImageRecipeResult

The effect of recording a series of stare images, with the same acquisition parameters, taken by pointing to a number of sky positions with separations of the order of sub-arcseconds, either by nodding the TS, tilting the TS M2 or shifting the EMIR DTU, the latter being the most likely option. Displacements are of the order of a fraction of a pixel. The images share the large majority of the sky positions, so they can be coadded. Used for improving the spatial resolution of the resulting images; not valid for sky or superflat images.

Requirements

Name                         Type       Default                       Meaning
'master_bpm'                 Product    NA                            Master BPM frame
'master_bias'                Product    NA                            Master Bias frame
'master_dark'                Product    NA                            Master Dark frame
'nonlinearity'               Product    [1.0, 0.0]                    Master non-linearity calibration
'master_intensity_ff'        Product    NA                            Master Intensity flat-field frame
'extinction'                 Parameter  0.0                           Mean atmospheric extinction
'sources'                    Parameter  None                          List of (x, y) coordinates to measure FWHM
'offsets'                    Parameter  None                          List of pairs of offsets
'iterations'                 Parameter  4                             Iterations of the recipe
'sky_images'                 Parameter  5                             Images used to estimate the background before and after the current image
'sky_images_sep_time'        Parameter  10                            Maximum separation time between consecutive sky images in minutes
'check_photometry_levels'    Parameter  [0.5, 0.8]                    Levels to check the flux of the objects
'check_photometry_actions'   Parameter  ['warn', 'warn', 'default']   Actions to take on images
'subpixelization'            Parameter  4                             Number of subdivisions of each pixel side
'window'                     Parameter  None                          Region of interest

Procedure

The procedure followed by this recipe is equivalent to that of Dithered images. They differ in the aspects controlled by the parameters 'subpixelization' and 'window'. If 'window' is different from None, the frames are clipped to the size of 'window'. Each pixel of the input frames is subdivided into 'subpixelization' x 'subpixelization' pixels. (A minimal sketch of this subdivision is given below.)

Results

The result of the Recipe is an object of type MicroDitheredImageRecipeResult. It contains two objects: a FrameDataProduct containing the result frame and a SourcesCatalog containing a catalog of sources.
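The subpixelization step can be illustrated with numpy: each pixel is replaced by an n x n block of subpixels. Dividing by n**2 keeps the total counts unchanged, which is just one possible normalization choice for this sketch; it is not the recipe implementation::

    import numpy as np

    def subpixelize(frame, n=4):
        """Subdivide each pixel of a 2-D frame into n x n subpixels."""
        return np.kron(np.asarray(frame, dtype=float), np.ones((n, n))) / n ** 2

With the default 'subpixelization' of 4, a 2048x2048 frame becomes an 8192x8192 frame.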


CHAPTER 2

PyEmir MOS Tutorial

This tutorial provides a short introduction to the rectification and wavelength calibration of EMIR spectroscopic images using PyEmir. In any case, it is expected to be an easy introduction to the use of numina and PyEmir. For detailed documentation concerning the installation of PyEmir, see the PyEmir User Guide.

Warning: This MOS Tutorial is still a work in progress; several aspects of PyEmir are not yet covered in sufficient detail. More detailed instructions will be provided here in the future.

As shown in the previous diagram, PyEmir helps to generate a rectified and wavelength calibrated 2D image. From this point, the astronomer can use her favourite software tools to proceed with the spectra extraction and analysis.

2.1 Preliminaries

Warning: All the commands are assumed to be executed in a terminal running the bash shell. Don't forget to activate the same Python environment employed to install PyEmir. In this document, the prompt (py36) $ will indicate that this is the case.

2.1.1 Running PyEmir recipes from Numina

The numina script is the interface with the GTC pipelines. In order to execute PyEmir recipes you should execute something like:

    (py36) $ numina run <observation_result_file.yaml> -r <requirements_file.yaml>

where <observation_result_file.yaml> is an observation result in YAML format, and <requirements_file.yaml> is a requirements file, also in YAML format.

Note: YAML is a human-readable data serialization language (for details, see the YAML Syntax description).

2.1.2 Use of interactive matplotlib plots

The interactive plots created by some Numina and PyEmir scripts have been tested using the Qt5Agg backend of matplotlib. If you want to use the same backend, check that the following line appears in the file .matplotlib/matplotlibrc (under your home directory):

    backend: Qt5Agg

If that file does not exist, generate it with the above line.

In most interactive matplotlib plots created by Numina and PyEmir you can press ? over the plotting window to retrieve a quick help concerning the use of some keystrokes to perform useful plot actions, like zooming, panning, setting background and foreground levels, etc. Note that some of these actions are already available in the navigation toolbar that appears at the top of the plotting windows.
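If you want to double-check which backend matplotlib will actually use, a quick sanity check can be done from a Python session running in the same environment (this uses only the standard matplotlib API)::

    import matplotlib
    print(matplotlib.get_backend())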

2.1.3 Installing ds9

Probably you already have ds9 installed in your system. If this is not the case, you can use conda to do it:

    (py36) $ conda install ds9

Note that we have activated the py36 environment prior to the installation of the new package. That means that this particular ds9 installation will be exclusively available from within that environment.

2.1.4 Installing dfits and fitsort

Two very useful utilities (used in this tutorial) are dfits and fitsort. They will allow you to quickly examine the header content of a set of FITS files. These two utilities belong to the ESO eclipse library. If you do not have eclipse installed in your system, you can download the following stand-alone files (somewhat outdated, but they work perfectly fine for our purposes and do not require anything else but a C compiler): dfits.c and fitsort.c.

These files can be directly compiled using any C compiler:

    $ cc -o dfits dfits.c
    $ cc -o fitsort fitsort.c

Note that it is highly advisable to place the two binary files in a directory included in the path of your system.

2.2 Understanding the data: image distortions

It is important to highlight that the raw spectroscopic images obtained with EMIR exhibit geometric distortions, especially noticeable near the borders of the spatial direction.

The previous image corresponds to a continuum exposure where the even-numbered slitlets were open, whereas the odd-numbered slitlets were closed. In this way it is easy to recognize the detector region spanned by each (distorted) individual 2D spectroscopic image corresponding to each particular slitlet.

In addition to that, it is also important to keep in mind that the frontiers between different slitlets change as a function of the location of the slitlet (along the wavelength direction) in the focal plane of the telescope (i.e., the configuration of the slitlets in the Cold Slit Unit).

The described geometric distortion, as well as the wavelength calibration for each slitlet, have been modelled for any arbitrary configuration of the Cold Slit Unit (i.e., for any reasonable arrangement of the slitlets), using for that purpose a large set of tungsten and arc calibration exposures. The calibration model can be easily employed to obtain a preliminary rectified and wavelength calibrated EMIR spectroscopic image without additional calibration images. This facilitates both the on-line quick reduction of the data at the telescope and the off-line detailed reduction of the data by the astronomer.

You can see an example of the variation of the slitlet frontiers when using different slitlet configurations in the following videos:

Video 1: Variation of the frontiers between slitlets 3 and 4 as a function of the location of the slitlets (along the wavelength direction) in the Cold Slit Unit, using grism J and filter J. The varying numbers at the top of the image indicate the location of both slitlets (in mm).

Video 2: Same as the previous video, zooming in on the frontier between slitlets 3 and 4.

2.3 Simple example: arc exposure

Warning: All the commands are assumed to be executed in a terminal running the bash shell. Don't forget to activate the same Python environment employed to install PyEmir. In this document, the prompt (py36) $ will indicate that this is the case.

The rectification and wavelength calibration of any EMIR spectroscopic image can be obtained with two levels of quality:

Preliminary calibration, without auxiliary calibration images, computed from the empirical calibration derived by the instrument team. This is the on-line reduction performed at the GTC while gathering the images. Note that the empirical calibrations were computed using a large set of initial calibration images, and it is not expected that the absolute wavelength calibration will be correct to within a few pixels, nor that the relative wavelength calibration between slitlets will agree to within one pixel. For that reason this rectified and wavelength calibrated image is defined as a preliminary version. Anyhow, it is a good starting point in order to have a look at the data.

Refined calibration, which requires either auxiliary arc exposures or a more detailed reduction of scientific images with good signal in the airglow emission (OH emission lines). In this case, the preliminary calibration is refined in order to guarantee that both the absolute wavelength calibration and the relative wavelength calibration between slitlets agree to within a fraction of a pixel.

2.3.1 Preliminary rectification and wavelength calibration

Assume you want to perform the rectification and wavelength calibration of the following raw spectroscopic images (corresponding in this case to spectral arc lamps):

    EMIR-TEST0.fits
    EMIR-TEST0.fits
    EMIR-TEST0.fits

These images should be similar since they were taken consecutively with the same instrument configuration. In this case, the median of the three raw images will be computed and a preliminary rectified and wavelength calibrated image will be generated from that median image.

Those three files (together with some additional files that you will need to follow this simple example) are available as a compressed tgz file: EMIR_simple_example.tgz. Download and decompress the previous file:

    (download EMIR_simple_example.tgz)
    (py36) $ tar zxvf EMIR_simple_example.tgz
    ...
    (you can remove the tgz file)
    (py36) $ rm EMIR_simple_example.tgz

A new subdirectory named EMIR_simple_example should have appeared, with the following content:

    (py36) $ tree EMIR_simple_example
    EMIR_simple_example
        00_starespectrawave.yaml
        01_starespectrawave.yaml
        control.yaml
        data
            EMIR-TEST0.fits
            EMIR-TEST0.fits
            EMIR-TEST0.fits
            master_bpm.fits

            master_dark.fits
            master_flat.fits
            rect_wpoly_moslibrary_grism_h_filter_h.json
            rect_wpoly_moslibrary_grism_j_filter_j.json
            rect_wpoly_moslibrary_grism_k_filter_ksp.json
            rect_wpoly_moslibrary_grism_lr_filter_hk.json
            rect_wpoly_moslibrary_grism_lr_filter_yj.json

Move into the EMIR_simple_example directory:

    (py36) $ cd EMIR_simple_example

The data/ subdirectory contains the following files:

The first three FITS files correspond to the arc exposures.

master_bpm.fits is a preliminary bad-pixel-mask image (pixels in this image with values different from zero are interpolated).

master_dark.fits is a dummy 2048x2048 image of zeros (this image is typically not necessary, since in the IR the reduction of science observations usually requires the subtraction of consecutive images).

master_flat.fits is a dummy 2048x2048 image of ones (in a more realistic reduction this image should have been obtained previously).

The rect_wpoly_moslibrary_grism*.json files contain the empirical calibration for rectification and wavelength calibration for the different grism+filter configurations.

Remain in the EMIR_simple_example directory. From here you are going to execute the pipeline.

You can easily examine the headers of the three arc files using the utilities dfits and fitsort (previously mentioned):

    (py36) $ dfits data/*.fits | fitsort object grism filter exptime date-obs
    FILE OBJECT GRISM FILTER EXPTIME DATE-OBS
    data/ emir-test0.fits CSU_RETI ALL SPEC J J T18:32:29.61
    data/ emir-test0.fits CSU_RETI ALL SPEC J J T18:32:32.68
    data/ emir-test0.fits CSU_RETI ALL SPEC J J T18:32:35.74

Have a look at any of the three raw arc images (the three images are similar). For that purpose you can use ds9 or the visualization tool provided with numina:

    (py36) $ numina-ximshow data/ emir-test0.fits

The wavelength direction corresponds to the horizontal axis, whereas the spatial direction is the vertical axis. This image was obtained with all the slitlets configured in longslit format. The arc lines exhibit an important geometric distortion when moving along the spatial direction, even in this longslit configuration.

The slitlet configuration can also be easily displayed using an auxiliary PyEmir script:

    (py36) $ pyemir-display_slitlet_arrangement data/ emir-test0.fits

The above image clearly shows that all CSU bars were configured to create aligned slitlets forming a longslit.

Note: Remember that the numina script is the interface with the GTC pipelines. In order to execute PyEmir recipes you should type something like:

    (py36) $ numina run <observation_result_file.yaml> -r <requirements_file.yaml>

where <observation_result_file.yaml> is an observation result file in YAML format, and <requirements_file.yaml> is a requirements file, also in YAML format. YAML is a human-readable data serialization language (for details see YAML Syntax).

The directory EMIR_simple_example contains the following two files required to execute the reduction recipe needed in this case:

00_starespectrawave.yaml: this is what we call an observation result file, which basically contains the reduction recipe to be applied and the images involved.

    id: 1345
    instrument: EMIR
    mode: STARE_SPECTRA_WAVE
    frames:
      - EMIR-TEST0.fits
      - EMIR-TEST0.fits
      - EMIR-TEST0.fits
    enabled: True

The id value is a label that is employed to generate the names of two auxiliary subdirectories (in this example the two subdirectories will be named obsid1345_work and obsid1345_results; see below), where the intermediate results and the final results are going to be stored. Not surprisingly, the key instrument is set to EMIR. The key mode indicates the identification of the reduction recipe (STARE_SPECTRA_WAVE in this example). frames lists the images to be combined (median). The key enabled: True indicates that this block is going to be reduced (it is possible to concatenate several blocks in the same observation result file, as is going to be shown later).

control.yaml: this is the requirements file, containing the expected names of generic calibration files.

    version: 1
    products:
      EMIR:
      - {id: 2, type: 'MasterBadPixelMask', tags: {}, content: 'master_bpm.fits'}
      - {id: 3, type: 'MasterDark', tags: {}, content: 'master_dark.fits'}
      - {id: 4, type: 'MasterSpectralFlat', tags: {}, content: 'master_flat.fits'}
      - {id: 11, type: 'MasterRectWave', tags: {grism: J, filter: J}, content: 'rect_wpoly_moslibrary_grism_j_filter_j.json'}
      - {id: 12, type: 'MasterRectWave', tags: {grism: H, filter: H}, content: 'rect_wpoly_moslibrary_grism_h_filter_h.json'}
      - {id: 13, type: 'MasterRectWave', tags: {grism: K, filter: Ksp}, content: 'rect_wpoly_moslibrary_grism_k_filter_ksp.json'}
      - {id: 14, type: 'MasterRectWave', tags: {grism: LR, filter: YJ}, content: 'rect_wpoly_moslibrary_grism_lr_filter_yj.json'}
      - {id: 15, type: 'MasterRectWave', tags: {grism: LR, filter: HK}, content: 'rect_wpoly_moslibrary_grism_lr_filter_hk.json'}
      - {id: 21, type: 'RefinedBoundaryModelParam', tags: {grism: J, filter: J}, content: 'final_multislit_bound_param_grism_j_filter_j.json'}
      - {id: 22, type: 'RefinedBoundaryModelParam', tags: {grism: H, filter: H}, content: 'final_multislit_bound_param_grism_h_filter_h.json'}
      - {id: 23, type: 'RefinedBoundaryModelParam', tags: {grism: K, filter: Ksp}, content: 'final_multislit_bound_param_grism_k_filter_Ksp.json'}
      - {id: 24, type: 'RefinedBoundaryModelParam', tags: {grism: LR, filter: YJ}, content: 'final_multislit_bound_param_grism_lr_filter_YJ.json'}
      - {id: 25, type: 'RefinedBoundaryModelParam', tags: {grism: LR, filter: HK}, content: 'final_multislit_bound_param_grism_lr_filter_HK.json'}
    requirements:
      EMIR:
        default: {}

You are ready to execute the reduction recipe indicated in the file 00_starespectrawave.yaml (in this case the reduction recipe named STARE_SPECTRA_WAVE):

    (py36) $ numina run 00_starespectrawave.yaml -r control.yaml

After the execution of the previous command line, two subdirectories should have been created:

a work subdirectory: obsid1345_work/
a results subdirectory: obsid1345_results/

The work subdirectory

    (py36) $ ls obsid1345_work/
    EMIR-TEST0.fits
    EMIR-TEST0.fits
    EMIR-TEST0.fits
    ds9_arc_rawimage.reg
    ds9_arc_rectified.reg
    ds9_boundaries_rawimage.reg
    ds9_boundaries_rectified.reg
    ds9_frontiers_rawimage.reg
    ds9_frontiers_rectified.reg
    ds9_oh_rawimage.reg
    ds9_oh_rectified.reg
    index.pkl
    master_dark.fits
    master_flat.fits
    median_spectra_full.fits
    median_spectra_slitlets.fits
    median_spectrum_slitlets.fits
    rectwv_coeff.json
    reduced_image.fits

All the relevant raw images *-EMIR-TEST0.fits have been copied into this working directory in order to preserve the original files. In addition, some intermediate images are also stored here during the execution of the reduction recipe. In particular:

reduced_image.fits: median combination of the three *.fits files.

rectwv_coeff.json: rectification and wavelength calibration polynomial coefficients derived from the empirical model, and computed for the specific CSU configuration of the considered raw images.

ds9-region files for raw images (before rectification and wavelength calibration):

ds9_frontiers_rawimage.reg: ds9 region file with the frontiers between slitlets, valid for raw-type images (images with the original distortions).
ds9_boundaries_rawimage.reg: ds9 region file with the boundaries of each slitlet, valid for raw-type images (images with the original distortions).
ds9_arc_rawimage.reg: ds9 region file with the expected location of arc lines from the EMIR calibration lamps.
ds9_oh_rawimage.reg: ds9 region file with the expected location of airglow (OH) sky lines.

ds9-region files for rectified and wavelength calibrated images:

ds9_frontiers_rectified.reg: ds9 region file with the frontiers between slitlets, valid for rectified and wavelength calibrated images.
ds9_boundaries_rectified.reg: ds9 region file with the boundaries of each slitlet, valid for rectified and wavelength calibrated images.
ds9_arc_rectified.reg: ds9 region file with the expected location of arc lines from the EMIR calibration lamps.
ds9_oh_rectified.reg: ds9 region file with the expected location of airglow (OH) sky lines.

images with averaged spectra:

median_spectra_full.fits: image with the same size as the rectified and wavelength calibrated image, where the 38 individual spectra of each slitlet have been replaced by their median spectrum.
median_spectra_slitlets.fits: image with just 55 spectra, corresponding to the median spectrum of each of the 55 slitlets.
median_spectrum_slitlets.fits: single median spectrum, with signal in all pixels with wavelength coverage in any of the 55 slitlets.
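As a quick look at these products, the median spectrum of a particular slitlet can be plotted with a few lines of Python. The sketch below assumes that in median_spectra_slitlets.fits the spectrum of slitlet number s is stored in image row s (1-based), which may need to be checked; the plotting style is of course arbitrary::

    import matplotlib.pyplot as plt
    from astropy.io import fits

    slitlet = 30
    data = fits.getdata('obsid1345_work/median_spectra_slitlets.fits')
    plt.plot(data[slitlet - 1], '-')
    plt.xlabel('X axis (pixels)')
    plt.ylabel('Number of counts')
    plt.title('median spectrum of slitlet #{}'.format(slitlet))
    plt.show()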

The results subdirectory

    (py36) $ ls obsid1345_results/
    processing.log
    result.yaml
    task.yaml
    reduced_image.fits
    stare.fits

The main results are stored separately in this last subdirectory. The important files here are:

reduced_image.fits: contains the (median) combination of the 3 original raw images.

stare.fits: the preliminary version of the rectified and wavelength calibrated image (please, keep reading).

You can easily display the last image using ds9 or the visualization tool provided with numina:

    (py36) $ numina-ximshow obsid1345_results/stare.fits --z1z2 0,1000

The wavelength calibration coefficients are stored in the usual FITS keywords CRPIX1, CRVAL1 and CDELT1:

    (py36) $ dfits obsid1345_results/stare.fits | fitsort crpix1 crval1 cdelt1
    FILE CRPIX1 CRVAL1 CDELT1
    obsid1345_results/stare.fits

Prefixed CRVAL1 and CDELT1 values have been established for the different grism+filter combinations (CRPIX1=1 is employed in all cases). The goal is that all the rectified and wavelength calibrated images, corresponding to raw images obtained with the same grism+filter, have the same linear coverage and sampling in

wavelength, which should facilitate the scientific analysis of images obtained with distinct CSU configurations.

Note that the image dimensions are now NAXIS1=3400 and NAXIS2=2090:

    (py36) $ dfits obsid1345_results/stare.fits | fitsort naxis1 naxis2
    FILE NAXIS1 NAXIS2
    obsid1345_results/stare.fits 3400 2090

NAXIS1 has been enlarged in order to accommodate wavelength calibrated spectra for slitlets in different locations along the spectral direction (i.e., with different wavelength coverage). For that reason there are empty leading and trailing areas (with signal set to zero) in the wavelength direction.

NAXIS2 has also been slightly enlarged (from 2048 to 2090) in order to guarantee that all the rectified slitlets have exactly the same extent in the spatial direction (38 pixels). In the configuration of this particular example (grism J + filter J) slitlet #1 and slitlet #55 fall partially or totally outside of the spatial coverage of the EMIR detector. For that reason the first 38 pixels (slitlet #1) and the last 38 pixels (slitlet #55) in the vertical (spatial) direction are also set to zero.

The coordinates of the useful rectangular region of each slitlet in the rectified and wavelength calibrated image are stored in the FITS header under the keywords:

IMNSLT?? (minimum Y pixel)
IMXSLT?? (maximum Y pixel)
JMNSLT?? (minimum X pixel)
JMXSLT?? (maximum X pixel)

where ?? runs from 01 to 55 (slitlet number). In principle IMNSLT?? and IMXSLT?? are always the same for all the grism + filter combinations, and are independent of the slitlet location along the wavelength direction (X axis). This guarantees that reduced images will have each slitlet always spanning the same location in the spatial direction (Y axis). However, JMNSLT?? and JMXSLT?? will change with the location of the slitlets in the spectral direction (X axis), since the actual location of each slitlet determines its resulting wavelength coverage. (A short Python sketch using these keywords is given below.)

In the simple example just described, we have straightforwardly executed the reduction recipe STARE_SPECTRA_WAVE using the empirical model for rectification and wavelength calibration. This is good enough for a preliminary inspection of the data (for example when observing at the telescope), but it is possible to do a better job with some extra effort. For example, having a look at the preliminary rectified and wavelength calibrated image (zooming into a relatively narrow range in the X direction) it is clear that the relative wavelength calibration between slitlets does not agree to within roughly 1 pixel:

    (py36) $ numina-ximshow obsid1345_results/stare.fits --bbox 1920,2050,1,... --z1z2 0,...
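As an aside, the IMNSLT/IMXSLT/JMNSLT/JMXSLT keywords described above make it straightforward to cut out the useful region of a given slitlet and to build its wavelength scale from CRPIX1, CRVAL1 and CDELT1. The following astropy/numpy sketch assumes that the data live in the primary HDU and that the keywords are 1-based and inclusive, as is usual in FITS; it is an illustration, not a PyEmir utility::

    import numpy as np
    from astropy.io import fits

    def extract_slitlet(filename, number):
        """Return the rectangular region of one slitlet and its wavelengths."""
        with fits.open(filename) as hdul:
            hdr = hdul[0].header
            data = hdul[0].data
        i1, i2 = hdr['IMNSLT{:02d}'.format(number)], hdr['IMXSLT{:02d}'.format(number)]
        j1, j2 = hdr['JMNSLT{:02d}'.format(number)], hdr['JMXSLT{:02d}'.format(number)]
        subimage = data[i1 - 1:i2, j1 - 1:j2]   # numpy slices are 0-based
        xpixels = np.arange(j1, j2 + 1)         # 1-based X pixels
        wave = hdr['CRVAL1'] + (xpixels - hdr['CRPIX1']) * hdr['CDELT1']
        return subimage, wave

For instance, extract_slitlet('obsid1345_results/stare.fits', 3) would return the 38-row subimage of slitlet number 3 together with its wavelength array.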

In addition, the absolute wavelength calibration is also wrong by a few pixels, as is described below.

2.3.2 Refined rectification and wavelength calibration

The user can obtain a more refined rectified and wavelength calibrated image using precise wavelength calibration data. For this purpose one can use arc exposures (obtained just before or after the scientific images), or even the scientific images themselves, when the airglow emission (OH emission lines) is bright enough to be employed as a wavelength reference. In this simple example, since the image we are trying to reduce is precisely an arc exposure, we are using the arc lines to refine the calibration.

Important: The following process only works for arc images obtained with the 3 types of arc lamps simultaneously ON during the exposure time. An easy way to check that this is the case is to examine the corresponding status keywords:

    (py36) $ dfits obsid1345_results/stare.fits | fitsort lampxe1 lampne1 lamphg1 lampxe2 lampne2 lamphg2
    FILE LAMPXE1 LAMPNE1 LAMPHG1 LAMPXE2 LAMPNE2 LAMPHG2
    obsid1345_results/stare.fits

Note that the EMIR calibration unit has 3 types of arc lamps: Xe, Ne, and Hg (actually two lamps of each type). In principle the six lamps should be ON (keyword = 1).
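If you prefer to check the lamp status programmatically, a few lines of Python (using astropy, which is installed together with PyEmir) do the same job as the dfits command above::

    from astropy.io import fits

    hdr = fits.getheader('obsid1345_results/stare.fits')
    lamps = ['LAMPXE1', 'LAMPNE1', 'LAMPHG1', 'LAMPXE2', 'LAMPNE2', 'LAMPHG2']
    for key in lamps:
        print(key, hdr.get(key))
    print('all lamps ON:', all(hdr.get(key) == 1 for key in lamps))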

Warning: Before attempting to obtain a reasonable rectified and wavelength calibrated image, it is important to understand that the empirical calibration does not guarantee a perfect job when determining the slitlet location along the spatial direction (Y axis) nor in the wavelength direction (X axis). These two effects can be estimated either by making use of the script pyemir-overplot_boundary_model, or by overplotting ds9-region files on the images. Both methods are described in the next subsections.

Checking the spatial direction (Y axis)

Note: If you prefer to use ds9 instead of the default PyEmir graphical output for the following examples, please keep reading anyway and wait for additional explanations below.

For example, we can execute the auxiliary script pyemir-overplot_boundary_model with the first of the three raw arc images previously used (since the three images were obtained consecutively with exactly the same configuration, we can choose any of them):

    (py36) $ pyemir-overplot_boundary_model \
          data/ emir-test0.fits \
          --rect_wpoly_moslibrary data/rect_wpoly_moslibrary_grism_j_filter_j.json

Zooming in the lower region:

Zooming in the middle region:

Zooming in the upper region:

The above plots show the selected image with the frontiers and boundaries of each slitlet overplotted. Here a clarification is needed:

frontiers: separation between slitlets. In the above plots frontiers are displayed with blue lines running from left to right. These lines are curved due to the geometric distortions.

boundaries: more conservative slitlet limits, avoiding a few pixels too close to the frontiers. Boundaries have been determined by examining continuum lamp exposures and selecting regions where the slitlet illumination is relatively flat. Note that, by construction, the CSU bars create a small (but detectable) decrease in the slitlet width at the frontiers between bars. The boundary limits are displayed alternately with cyan and magenta lines (with the same color as the one employed in the label indicating the slitlet number; in this example all the labels appear centered in the image).

One can easily check that with grism J + filter J the slitlets number 1 and 55 fall partially outside the detector.

Although the longslit configuration in this example makes it difficult to distinguish the frontiers between slitlets in the data, a reasonable zoom (showing consecutive slitlets with slightly different slit widths) helps to check that the predicted frontiers (blue lines) properly separate the slitlet data:

If you prefer to use ds9 for this task, remember that some useful auxiliary ds9-region files have been created under the obsid1345_work subdirectory. In particular:

ds9_frontiers_rawimage.reg: the ds9-region file with the frontiers for the raw image
ds9_boundaries_rawimage.reg: the ds9-region file with the boundaries for the raw image

Open ds9 with the same image:

    (py36) $ ds9 data/ emir-test0.fits

and load the two region files:

    select region --> load -> obsid1345_work/ds9_frontiers_rawimage.reg
    select region --> load -> obsid1345_work/ds9_boundaries_rawimage.reg

Zooming to check the slitlet frontiers:

Checking the wavelength direction (X axis)

Note: If you prefer to use ds9 instead of the default PyEmir graphical output for the following examples, please keep reading anyway and wait for additional explanations below.

Since we know that the raw data correspond to arc images, we can overplot the expected locations of some of the brightest arc lines by using the additional parameter --arc_lines:

    (py36) $ pyemir-overplot_boundary_model \
          data/ emir-test0.fits \
          --rect_wpoly_moslibrary data/rect_wpoly_moslibrary_grism_j_filter_j.json \
          --arc_lines

Zooming:

The data arc lines appear around 3 pixels towards the left of the predicted locations (marked by the cyan circles).

If you prefer to use ds9 for this task, it is also possible to use the auxiliary ds9-region file with the expected location of the arc lines, created under the obsid1345_work subdirectory. In this case, open ds9 with the same image:

    (py36) $ ds9 data/ emir-test0.fits

and load the region file:

    select region --> load -> obsid1345_work/ds9_arc_rawimage.reg

Zooming:

Here it is also clear that the arc lines appear around 3 pixels towards the left of the expected locations (indicated by the ds9 regions).

Improving the rectification and wavelength calibration

Once you have estimated the potential integer offsets (in X and Y) of your image relative to the expected slitlet frontiers (Y axis) and arc line locations (X axis), it is possible to rerun the reduction recipe STARE_SPECTRA_WAVE making use of that information. In our case, we have estimated that there is no offset in the spatial direction (Y axis), and an offset of around 3 pixels in the wavelength direction (X axis). We can introduce that information in the observation result file. In this case, we create a copy of the initial 00_starespectrawave.yaml file as 01_starespectrawave.yaml, with the following content:

    id: 1345refined
    instrument: EMIR
    mode: STARE_SPECTRA_WAVE
    frames:
      - EMIR-TEST0.fits
      - EMIR-TEST0.fits
      - EMIR-TEST0.fits
    enabled: True
    requirements:
      refine_wavecalib_mode: 2
      global_integer_offset_x_pix: 3
      global_integer_offset_y_pix: 0

This file is the same as 00_starespectrawave.yaml but with a different id (to generate different work and results subdirectories that do not overwrite the initial reduction), and four extra lines at the end. In particular, we are specifying a few parameters that are going to modify the behavior of the reduction recipe:

refine_wavecalib_mode: 2: this indicates that the image corresponds to an arc exposure and that we are asking for a refinement of the wavelength calibration using that information. Note that, by default, this parameter is set to zero and no refinement is carried out. The value 2 indicates that the refinement is performed with the help of arc lines; a value of 12 indicates that the refinement process will use airglow (OH) lines.

global_integer_offset_x_pix: 3: integer offset (pixels) that must be applied to the image data for the arc lines to fall at the expected location.

global_integer_offset_y_pix: 0: integer offset (pixels) that must be applied to the image data for the frontiers to fall at the expected location.

Execute the reduction recipe using the new observation result file:

    (py36) $ numina run 01_starespectrawave.yaml -r control.yaml

Now the execution of the code takes longer (the median spectrum of each slitlet is cross-correlated with an expected arc spectrum in order to guarantee that the wavelength calibrations of the different slitlets match). The new stare.fits image now exhibits a much better wavelength calibration:

    (py36) $ numina-ximshow obsid1345refined_results/stare.fits --bbox 1920,2050,1,... --z1z2 0,...

Remember that in the work directory you can find ds9-region files with the frontiers (ds9_frontiers_rectified.reg), boundaries (ds9_boundaries_rectified.reg) and expected arc line locations (ds9_arc_rectified.reg) for the rectified and wavelength calibrated image. Note that in this case the expected frontier and boundary lines are perfectly horizontal, whereas the expected arc lines are vertical (the image has been rectified!). These region files are useful to locate individual slitlets by number.

    (py36) $ ds9 obsid1345refined_results/stare.fits

and load the region files:

    select region --> load -> obsid1345refined_work/ds9_boundaries_rectified.reg
    select region --> load -> obsid1345refined_work/ds9_frontiers_rectified.reg
    select region --> load -> obsid1345refined_work/ds9_arc_rectified.reg

Zooming:

In the obsid1345refined_work subdirectory you can find a file named crosscorrelation.pdf which contains a graphical summary of the cross-correlation process. In particular, you have an individual plot for each slitlet showing the cross-correlation function:
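The offset adopted for each slitlet corresponds, essentially, to the location of the peak of this cross-correlation function. As an illustration only (not the PyEmir implementation), a similar function between two equally sampled 1-D spectra can be computed with numpy as follows; the sign convention of the returned lag should be checked against a known case::

    import numpy as np

    def crosscorrelate(spectrum, reference):
        """Return the lags and the cross-correlation of two 1-D spectra."""
        a = spectrum - np.mean(spectrum)
        b = reference - np.mean(reference)
        cc = np.correlate(a, b, mode='full')
        lags = np.arange(-(len(b) - 1), len(a))
        return lags, cc

    # the lag of the maximum gives the shift between both spectra:
    # lags, cc = crosscorrelate(spec, ref); best_lag = lags[np.argmax(cc)]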




2.4 MOS example

Warning: All the commands are assumed to be executed in a terminal running the bash shell. Don't forget to activate the same Python environment employed to install PyEmir. In this document, the prompt (py36) $ will indicate that this is the case.

Note: It is assumed that the reader has already followed the previous tutorial Simple example: arc exposure. Some of the concepts already introduced there are not going to be repeated here with the same level of detail.

Let's consider the rectification and wavelength calibration of a MOS image with slitlets configured in a non-longslit pattern. Download the following file, EMIR_mos_example.tgz, and decompress it:

    (download EMIR_mos_example.tgz)
    (py36) $ tar zxvf EMIR_mos_example.tgz
    ...
    (you can remove the tgz file)
    (py36) $ rm EMIR_mos_example.tgz

A new subdirectory named EMIR_mos_example should have appeared, with the following content:

    (py36) $ tree EMIR_mos_example
    EMIR_mos_example/
        00_mos_example.yaml
        01_mos_example.yaml
        02_mos_example.yaml
        control.yaml
        data
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            EMIR-STARE_SPECTRA.fits
            master_bpm.fits
            master_dark.fits
            master_flat.fits
            rect_wpoly_moslibrary_grism_h_filter_h.json
            rect_wpoly_moslibrary_grism_j_filter_j.json
            rect_wpoly_moslibrary_grism_k_filter_ksp.json
            rect_wpoly_moslibrary_grism_lr_filter_hk.json
            rect_wpoly_moslibrary_grism_lr_filter_yj.json

Move into the EMIR_mos_example directory:

    (py36) $ cd EMIR_mos_example

The data/ subdirectory contains the following files:

The first 12 FITS files correspond to science exposures. In this case, the targets were observed following the typical ABBA scheme (in particular, the 12 images correspond to 3 consecutive ABBA blocks).

master_bpm.fits is a preliminary bad-pixel-mask image (pixels in this image with values different from zero are interpolated).

master_dark.fits is a dummy 2048x2048 image of zeros (this image is typically not necessary, since in the IR the reduction of science observations usually requires the subtraction of consecutive images).

master_flat.fits is a dummy 2048x2048 image of ones (in a more realistic reduction this image should have been obtained previously).

The rect_wpoly_moslibrary_grism*.json files contain the empirical calibration for rectification and wavelength calibration for the different grism+filter configurations.

Remain in the EMIR_mos_example directory. From here you are going to execute the pipeline. You can easily examine the headers of the 12 science files using the utilities dfits and fitsort (previously mentioned):

    (py36) $ dfits data/*.fits | fitsort object grism filter exptime date-obs
    FILE OBJECT GRISM FILTER EXPTIME DATE-OBS
    data/ emir-stare_spectra.fits MOS example J J T22:50:16.03
    data/ emir-stare_spectra.fits MOS example J J T22:56:25.08
    data/ emir-stare_spectra.fits MOS example J J T23:02:28.88
    data/ emir-stare_spectra.fits MOS example J J T23:08:37.93
    data/ emir-stare_spectra.fits MOS example J J T23:15:03.87
    data/ emir-stare_spectra.fits MOS example J J T23:21:11.87
    data/ emir-stare_spectra.fits MOS example J J T23:27:15.66
    data/ emir-stare_spectra.fits MOS example J J T23:33:24.72
    data/ emir-stare_spectra.fits MOS example J J T00:00:25.01
    data/ emir-stare_spectra.fits MOS example J J T00:06:34.07
    data/ emir-stare_spectra.fits MOS example J J T00:12:37.86
    data/ emir-stare_spectra.fits MOS example J J T00:18:46.92

Have a look at any of the raw science images (they are all similar). For that purpose you can use ds9 or the visualization tool provided with numina:

    (py36) $ numina-ximshow data/ emir-stare_spectra.fits

2.4.1 The CSU configuration

It is clear from the previous figure that the EMIR slitlets were not configured in longslit mode, but in MOS mode. In addition, it is important to highlight that not all the slitlets were opened (i.e., the slits not assigned to a particular scientific target were closed in order to avoid spurious spectra in the image; note that even for the closed slitlets the corresponding CSU bars are not completely closed, to avoid collisions).

The slitlet configuration can be easily displayed with the help of the auxiliary PyEmir script pyemir-display_slitlet_arrangement (please note the use in this case of the additional parameter --n_clusters 2):

(py36) $ pyemir-display_slitlet_arrangement \
         data/EMIR-STARE_SPECTRA.fits \
         --n_clusters 2
...
> separator:

The first figure displays the slitlet arrangement.
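The width-based classification carried out with --n_clusters 2 (explained in detail just below) can be illustrated with a short Python sketch. The widths used here are made-up placeholder values; in the real case there is one width per slitlet, derived from the CSU bar positions. Splitting the sorted widths at their largest gap gives, for two well-separated groups, the same classification as the clustering performed by the script:

    import numpy as np

    # Placeholder slitlet widths in mm (illustrative values only).
    widths = np.array([0.35, 0.36, 1.60, 1.62, 0.34, 1.58, 1.61, 0.37])

    # Separate the two groups at the largest gap of the sorted widths.
    sorted_widths = np.sort(widths)
    gaps = np.diff(sorted_widths)
    i = np.argmax(gaps)
    separator = 0.5 * (sorted_widths[i] + sorted_widths[i + 1])

    print("separator: {:.3f} mm".format(separator))
    print("closed slitlets:", int(np.sum(widths < separator)))
    print("opened slitlets:", int(np.sum(widths > separator)))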

Note that in this case we have employed the parameter --n_clusters 2, which forces the script to display a histogram of slitlet widths, and to compute two clusters and a separating value (expressed in mm) which classifies the slitlets in two groups: closed slitlets (widths below the separating value) and opened slitlets (widths above it). This number will be used later.

2.4.2 Preliminary rectification and wavelength calibration

As explained in Simple example: arc exposure, the rectification and wavelength calibration of any EMIR spectroscopic image can be obtained with two levels of quality:

- preliminary calibration: without auxiliary calibration images, computed from the empirical calibration derived by the instrument team.
- refined calibration: refines the empirical calibration by making use of either additional calibration images (i.e., arcs) or the airglow (OH) emission lines.

The refinement process requires an initial estimation of the offsets in the spatial (Y axis) and spectral (X axis) directions between the empirical calibration and the actual data. These two offsets can be easily estimated after computing the preliminary calibration.

In this example, for the preliminary rectification and wavelength calibration one can simply reduce one of the twelve scientific images (the first one, for example). Have a look at the observation result file 00_mos_example.yaml:

id: 2158preliminary
instrument: EMIR
mode: STARE_SPECTRA_WAVE
frames:
 - EMIR-STARE_SPECTRA.fits
enabled: True

Execute the reduction recipe:

(py36) $ numina run 00_mos_example.yaml -r control.yaml

As expected, two new subdirectories have been created: obsid2158preliminary_work and obsid2158preliminary_results.

(py36) $ numina-ximshow obsid2158preliminary_results/stare.fits

2.4.3 Refined rectification and wavelength calibration

Although the rectified and wavelength-calibrated image that we have just obtained appears to be fine, a detailed inspection reveals that the absolute and relative wavelength calibration between slitlets is still not perfect, and that there is a small offset between the expected and the observed slitlet frontiers. Fortunately, both problems can be easily solved.

Note: As described in Simple example: arc exposure, the task of finding the offsets can be performed either with the auxiliary PyEmir script pyemir-overplot_boundary_model, or by using ds9 with the auxiliary ds9-region files created during the preliminary rectification and wavelength calibration reduction. In the following two subsections we are using the latter option.

Checking the spatial direction (Y axis)

The offset in the spatial direction (Y axis) can be estimated by plotting the expected slitlet frontiers (file obsid2158preliminary_work/ds9_frontiers_rawimage.reg), derived in the preliminary rectification and wavelength calibration, over the raw image:

(py36) $ ds9 data/EMIR-STARE_SPECTRA.fits &
- select scale --> zscale
- select region --> load --> obsid2158preliminary_work/ds9_frontiers_rawimage.reg
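If you prefer not to use ds9, a rough equivalent of this overlay can be produced with matplotlib. The sketch below assumes that the region file contains simple line(x1,y1,x2,y2) entries in image coordinates, which may not match exactly what PyEmir writes, so adapt the parsing if needed; the raw file name is also a placeholder:

    import re

    import matplotlib.pyplot as plt
    from astropy.io import fits
    from astropy.visualization import ZScaleInterval

    # Replace with the actual name of one of the raw science frames.
    rawfile = "data/EMIR-STARE_SPECTRA.fits"
    regfile = "obsid2158preliminary_work/ds9_frontiers_rawimage.reg"

    image = fits.getdata(rawfile)
    vmin, vmax = ZScaleInterval().get_limits(image)
    plt.imshow(image, origin="lower", cmap="gray", vmin=vmin, vmax=vmax)

    # Assumed syntax: "line(x1,y1,x2,y2) ..." in FITS image coordinates
    # (the 1-pixel offset between FITS and array indices is ignored here).
    with open(regfile) as fobj:
        for entry in fobj:
            match = re.search(r"line\(([^)]+)\)", entry)
            if match:
                x1, y1, x2, y2 = [float(value) for value in match.group(1).split(",")]
                plt.plot([x1, x2], [y1, y2], linestyle=":", color="blue", linewidth=0.5)

    plt.show()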


From this visual examination one concludes that global_integer_offset_y_pix: -4. Note that the sign of the offset is chosen to place the actual data within the predicted frontiers (displayed with dotted blue lines).

Checking the wavelength direction (X axis)

Warning: The refinement process described here is based on the use of airglow (OH) emission lines in the science frames. This assumes that the target spectra are not dominant over the airglow emission. If this is not the case (for example when observing bright sources with short exposure times), the user should employ calibration arc images obtained before and/or after the science images. The refinement process should then be carried out as described in Simple example: arc exposure.

Continuing with the same ds9 interface, overplot the expected location of the airglow (OH) emission lines:

- select region --> load --> obsid2158preliminary_work/ds9_oh_rawimage.reg

Note that only the locations of the brightest OH lines are displayed. The visual examination reveals that in this case global_integer_offset_x_pix: 8. Note that the sign of the offset is chosen to place the observed OH lines on the predicted locations (displayed in cyan and magenta for the odd- and even-numbered slitlets, respectively).

Improving the rectification and wavelength calibration

For the refined rectification and wavelength calibration we are going to use the observation result file 01_mos_example.yaml:

id: 2158refined
instrument: EMIR
mode: STARE_SPECTRA_WAVE
frames:
 - EMIR-STARE_SPECTRA.fits
 ... (12 frames in total; individual file names not reproduced here)
enabled: True
requirements:
  refine_wavecalib_mode: 12
  minimum_slitlet_width_mm:
  maximum_slitlet_width_mm: 2.000
  global_integer_offset_x_pix: 8
  global_integer_offset_y_pix: -4

If you compare this file with the previous one (00_mos_example.yaml) you can see the following changes:

- We are using the 12 raw images (not only the first one). The reduction recipe is going to compute the median of the 12 images, improving the signal-to-noise ratio, prior to the rectification and wavelength calibration process. Thus, the end result of this reduction recipe will be a single calibrated image.
- We have introduced a requirements block, defining the following parameters:
- refine_wavecalib_mode: 12: this indicates that the image corresponds to a science exposure, deep enough to detect OH sky lines, and that we are asking for a refinement of the wavelength calibration using that information. Note that if we were using an arc image, this parameter should be set to 2 instead of 12 (as described in Simple example: arc exposure).
- minimum_slitlet_width_mm and maximum_slitlet_width_mm: 2.000: minimum and maximum slitlet widths (mm) for a slitlet to be considered as a scientific slitlet. Note that these numbers are compatible with the histogram of slitlet widths that we obtained previously using pyemir-display_slitlet_arrangement with the parameter --n_clusters 2. Only the slitlets whose width lies within the specified range will be employed to derive a median sky spectrum (needed for the cross-correlation algorithm that takes place during the refinement process).
- global_integer_offset_x_pix: 8 and global_integer_offset_y_pix: -4: these are the offsets between the raw images and the expected empirical calibration, estimated as described above.

Execute the reduction recipe:

(py36) $ numina run 01_mos_example.yaml -r control.yaml

As expected, two new subdirectories have been created: obsid2158refined_work and obsid2158refined_results.

(py36) $ numina-ximshow obsid2158refined_results/stare.fits
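A quick way to inspect the basic layout of the refined product, without opening a viewer, is to print a few header keywords with astropy. The keywords listed below are assumed to describe a linear wavelength scale along the X axis of the rectified image; if a keyword is absent, None is simply printed:

    from astropy.io import fits

    # Print the image dimensions and the (assumed) linear wavelength solution.
    header = fits.getheader("obsid2158refined_results/stare.fits")
    for keyword in ["NAXIS1", "NAXIS2", "CRVAL1", "CDELT1", "CRPIX1", "CUNIT1"]:
        print(keyword, "=", header.get(keyword))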

In the obsid2158refined_work subdirectory you can find a file named crosscorrelation.pdf, which contains a graphical summary of the cross-correlation process. In particular, it includes an individual plot for each slitlet showing the corresponding cross-correlation function.
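The idea behind this offset determination can be illustrated with a toy example: cross-correlating a spectrum against a shifted copy of itself and reading the offset from the position of the correlation peak. The synthetic data below are only for illustration and do not reproduce the pipeline's actual algorithm (which, as mentioned above, cross-correlates using a median sky spectrum, slitlet by slitlet):

    import numpy as np

    # Toy example: a 'sky' spectrum and a copy of it shifted by 8 pixels.
    rng = np.random.default_rng(seed=0)
    reference = rng.random(2048)
    observed = np.roll(reference, 8)

    # Cross-correlate and locate the peak; lags run from -(N-1) to +(N-1).
    xcorr = np.correlate(observed, reference, mode="full")
    lags = np.arange(-reference.size + 1, reference.size)
    print("measured offset:", lags[np.argmax(xcorr)])  # prints 8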


Warning: The refined rectification and wavelength calibration has been saved in the file obsid2158refined_work/rectwv_coeff.json. This file can be applied, as described in the next section, to any raw image (with the same CSU configuration). This file, stored in JSON format, contains all the relevant information necessary to carry out the rectification and wavelength calibration of any image.

Note: JSON is an open-standard file format that uses human-readable text to transmit data objects (see details in the JSON description).

2.4.4 Applying the refined rectification and wavelength calibration

Placing the refined calibration in the data/ subdirectory

The first step is to copy the file containing the refined rectification and wavelength calibration (in this case obsid2158refined_work/rectwv_coeff.json) into the data/ subdirectory. Since this JSON file has a generic name, it is advisable to rename it in order to avoid overwriting it by accident:

(py36) $ cp obsid2158refined_work/rectwv_coeff.json data/rectwv_coeff_2158refined.json

Preparing the observation result file

Next, one needs to generate a new observation result file in which each individual raw frame is going to be rectified and wavelength-calibrated using the refined calibration previously computed. For our case this file is 02_mos_example.yaml. The first 42 lines of this file are:

 1 id: 2158
 2 instrument: EMIR
 3 mode: STARE_SPECTRA_APPLY_WAVE
 4 frames:
 5  - EMIR-STARE_SPECTRA.fits
 6 requirements:
 7   rectwv_coeff_json: rectwv_coeff_2158refined.json
 8 enabled: True
 9 ---
10 id: 2187
11 instrument: EMIR
12 mode: STARE_SPECTRA_APPLY_WAVE
13 frames:
14  - EMIR-STARE_SPECTRA.fits
15 requirements:
16   rectwv_coeff_json: rectwv_coeff_2158refined.json
17 enabled: True
18 ---
19 id: 2216
20 instrument: EMIR
21 mode: STARE_SPECTRA_APPLY_WAVE
22 frames:
23  - EMIR-STARE_SPECTRA.fits
24 requirements:
25   rectwv_coeff_json: rectwv_coeff_2158refined.json
26 enabled: True
27 ---
28 id: 2245
29 instrument: EMIR
30 mode: STARE_SPECTRA_APPLY_WAVE
31 frames:
32  - EMIR-STARE_SPECTRA.fits
33 requirements:
34   rectwv_coeff_json: rectwv_coeff_2158refined.json
35 enabled: True
36 ---
37 id: ABBA1
38 instrument: EMIR
39 mode: LS_ABBA
40 children: [2158, 2187, 2216, 2245]
41 enabled: True

Note that this file is more complex than the previous observation result files employed in this tutorial. In this sense, the main differences are:

- The file contains several blocks, one for each reduction recipe to be executed. Each block is separated from the next one by a separating line containing just three dashes (---). In the previous display, the first and the fifth blocks span lines 1-8 and 37-41, respectively. Do not forget the separation line between blocks (otherwise the pipeline will not recognize where one block ends and the next one begins).

- This separation line must not appear after the last block.
- Comment lines in this file start with a hash (#) symbol.
- Each block contains a different id, which means that the execution of each block is going to generate two different subdirectories (work and results), in order to avoid overwriting the intermediate work and final results derived from the execution of each reduction recipe.

The previous display shows the first 5 blocks:

- The first 4 blocks (lines 1-8, 10-17, 19-26, and 28-35) correspond to the rectification and wavelength calibration of the four individual images of the first ABBA observation pattern employed at the telescope. Note that the reduction recipe is STARE_SPECTRA_APPLY_WAVE, which indicates that the pipeline is not going to use the empirical calibration but the refined calibration previously obtained. The specific name of the file containing this refined calibration must be given in the requirements section of each block under the label rectwv_coeff_json (remember that the refined calibration file has been copied to the data/ subdirectory and renamed as rectwv_coeff_2158refined.json). We are using the same refined calibration for the 12 science images (the three ABBA observation blocks). The id label for each block has been arbitrarily set to the last 4 digits of the running number that uniquely identifies each raw image obtained with the GTC.
- The fifth block (lines 37-41) is responsible for computing the arithmetic combination of the A-B-B+A sequence, using for that purpose the results (rectified and wavelength-calibrated images) of the previous four blocks. In this case there is no frames: field in the block, but a children: field. This field contains a list with the ids of the four relevant blocks ([2158, 2187, 2216, 2245]), which correspond, respectively, to the A, B, B, and A images within the ABBA sequence. The reduction recipe is LS_ABBA. The id of this fifth block has been chosen to be ABBA1.

If you examine the rest of the file 02_mos_example.yaml you will find that the same pattern of 5 blocks is repeated three times, once for each of the three ABBA sequences. Finally, there is an additional block at the end of the observation result file (lines 130-137) that reads:

130 id: ABBA_combined
131 instrument: EMIR
132 mode: BASIC_COMBINE
133 children: [ABBA1, ABBA2, ABBA3]
134 requirements:
135   method: median
136   field: spec_abba
137 enabled: True

This block is responsible for computing the median of the results of the previous blocks identified as ABBA1, ABBA2 and ABBA3:

- The reduction recipe is BASIC_COMBINE.
- Within the requirements field of this block we have specified the combination method (median, although sum and mean are also valid options).
- There is no frames: field in the block, but a children: field. This field contains a list with the ids of the three relevant blocks ([ABBA1, ABBA2, ABBA3]).
- The additional requirement field: spec_abba indicates that the results from the reduction of the ABBA1, ABBA2 and ABBA3 blocks that are going to be used as input are the images named spec_abba.fits (which can be found within the corresponding obsidabba1_results, obsidabba2_results and obsidabba3_results subdirectories).
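Since the observation result file is a set of plain YAML documents separated by --- lines, its block structure can be sanity-checked with PyYAML (installed earlier together with the conda environment). This is just a convenience sketch, not a pipeline tool:

    import yaml

    with open("02_mos_example.yaml") as fobj:
        # Each "---"-separated block is loaded as an independent YAML document.
        blocks = [block for block in yaml.safe_load_all(fobj) if block is not None]

    for block in blocks:
        inputs = block.get("frames", block.get("children"))
        print(block["id"], block["mode"], inputs)

For the file described above this should list 16 blocks: twelve STARE_SPECTRA_APPLY_WAVE blocks, three LS_ABBA blocks and the final BASIC_COMBINE block.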

Here is a summary of the different blocks that constitute the observation result file 02_mos_example.yaml:

id              input                          recipe
2158            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2187            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2216            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2245            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
ABBA1           2158, 2187, 2216, 2245         LS_ABBA
2274            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2303            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2332            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2361            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
ABBA2           2274, 2303, 2332, 2361         LS_ABBA
2414            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2443            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2472            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
2501            EMIR-STARE_SPECTRA.fits        STARE_SPECTRA_APPLY_WAVE
ABBA3           2414, 2443, 2472, 2501         LS_ABBA
ABBA_combined   ABBA1, ABBA2, ABBA3            BASIC_COMBINE (median)

Executing the observation result file

The last step consists in running numina with the observation result file described in the previous subsection:

(py36) $ numina run 02_mos_example.yaml -r control.yaml

The execution of all the blocks may require a few minutes (depending on your computer). The final image will be stored in the obsidabba_combined_results subdirectory:

(py36) $ ds9 obsidabba_combined_results/result.fits
- select region --> load --> obsid2158refined_work/ds9_boundaries_rectified.reg
- select region --> load --> obsid2158refined_work/ds9_frontiers_rectified.reg
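If ds9 is not at hand, a quick look at the combined image can also be obtained with matplotlib and astropy, using a zscale-like stretch. This is a minimal sketch; the axis labels simply follow the spectral/spatial convention used throughout this tutorial:

    import matplotlib.pyplot as plt
    from astropy.io import fits
    from astropy.visualization import ZScaleInterval

    image = fits.getdata("obsidabba_combined_results/result.fits")
    vmin, vmax = ZScaleInterval().get_limits(image)
    plt.imshow(image, origin="lower", cmap="gray", vmin=vmin, vmax=vmax)
    plt.xlabel("X axis (spectral direction)")
    plt.ylabel("Y axis (spatial direction)")
    plt.colorbar()
    plt.show()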

Note: Remember that the ds9-region files with the refined boundaries and frontiers were stored in the obsid2158refined_work/ subdirectory!
