Image Collection From Orbit


Having addressed the basics of orbital motion in the previous section, we now examine the geometric and kinematic aspects of the main orbital imaging techniques. The four main methods for collecting imagery from orbit are: (1) along-track pushbroom scanning; (2) cross-track scanning; (3) conical scanning; and (4) step-stare framing, as depicted in Figure 1. There are subsets within these categories. Pushbroom scanning is the least complex and most widely used, largely because the orbital motion of the satellite can provide all or part of the scan. Pushbroom and cross-track scan are the preferred choices for area collection. The choice between them largely depends on the required resolution, with a bias toward pushbroom for higher resolution and cross-track for lower resolution. Conical scan is the most efficient area collection technique, but it is limited to lower resolution applications. Because its viewing geometry varies widely, conical scan is largely limited to ship search and meteorology applications. Step-stare framing is well suited for imaging smaller local areas or randomly distributed point targets requiring high resolution. The choice of technique also depends strongly on the available detector technology. A key consideration in all of these techniques is how the forward motion of the satellite and Earth rotation are either dealt with or ignored.

Figure 1. Image Collection Techniques

Pushbroom Scan

Pushbroom scan is the principal choice for satellite imaging, primarily because of its relative simplicity. It is used by all the SPOT satellites, and by the IKONOS II and QuickBird 2 satellites. The LOS projections of one or more lines of detectors are aligned normal to the direction of satellite motion and scanned on the ground in the along-track direction (Figure 2). The center of the line array can be biased to point fore or aft of the satellite sub-nadir ground track point, or to point to either side of the satellite ground track. For example, the SPOT 1, 2, 3 and 5 satellites all have two high resolution sensors that can generate long swaths offset by an angle θ_CT of up to 27° to either side of the satellite ground track. The SPOT 5 satellite also has a pair of lower resolution cameras, one pointing forward and the other pointing aft, to provide stereo imagery for mapping.

Figure 2. Pushbroom Scans (fore/aft, nadir, and side-offset pushbroom geometries relative to the satellite ground track)

The cross-track offset angle θ_CT, while adjustable, is constant during a scan. However, the along-track offset θ_AT may or may not be constant during a scan. The SPOT high resolution sensors only viewed to the sides, while the two stereo cameras have fixed fore and aft offsets ±θ_AT.

Single Line of Detectors. In its simplest implementation, a single linear array of detectors is aligned normal to the direction of satellite motion with the center of the array slewed so that it continuously points along nadir (Figure 3). The detectors collect photons for a fixed time interval t_int equal to the along-track pixel projection Δ on the ground divided by the satellite ground track velocity v_g. The readout time is almost instantaneous, so the effective line rate LR is the reciprocal of the integration time t_int. The high-resolution sensors on the SPOT 1, 2 and 3 satellites used this implementation to provide Δ = 10 m ground resolution. For the SPOT 828-km altitude circular orbit, the ground speed v_g of the satellite sub-nadir point is about 6.6 km/s, for which the allowable integration time for a single line of detectors t_int is approximately 1.5 ms (equal to Δ/v_g). The SPOT optical system was designed so that adequate signal to noise could be achieved with this integration time.

Figure 3. Simplest Form of Pushbroom Imaging

The line rate LR required to scan at the ground speed of the satellite sub-nadir point for a satellite in a circular orbit is given by

LR = (r_E / GSD) · sqrt( μ / (r_E + h)^3 )

For low Earth orbits, the LR is relatively insensitive to orbit altitude (Figure 4).
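As a quick numerical check of this relation, the short sketch below evaluates the required line rate for a few altitudes, assuming a circular orbit and standard Earth constants; the function name and the 10-m GSD example are illustrative, not from the source.

```python
import math

MU = 398600.4418      # Earth gravitational parameter, km^3/s^2
R_E = 6378.137        # Earth equatorial radius, km

def pushbroom_line_rate(gsd_m, h_km):
    """Line rate (lines/s) needed to scan at the sub-nadir ground speed of a
    circular orbit: LR = (r_E / GSD) * sqrt(mu / (r_E + h)^3)."""
    n = math.sqrt(MU / (R_E + h_km) ** 3)   # mean orbit rate, rad/s
    v_g = R_E * n * 1000.0                  # sub-nadir ground speed, m/s
    return v_g / gsd_m

# SPOT-like case: 10-m GSD from 828 km gives roughly 660 lines/s (t_int ~ 1.5 ms)
for h in (450, 682, 828):
    print(h, round(pushbroom_line_rate(10.0, h), 1))
```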

Figure 4. LR Required to Scan at Satellite Orbit Rate

While the above restriction on LR (or the reciprocal integration time) proved practical for the SPOT 1 satellite, it is often impractical for higher resolution systems. IKONOS II provides a 0.82-m GSD when nadir pointing in its 682-km altitude circular orbit, for which the ground track speed is 6.79 km/s. If IKONOS were limited to a single line of detectors, the integration time would be limited to a scant 0.12 ms. One solution is to back-scan, so as to slow the line scan rate across the ground and increase the integration time. However, as shown in Figure 5, this approach results in changes in the along-track LOS offset angle and changes in range, which limit the along-track distance that can be imaged on a single pass.

Figure 5. Back-scan to Increase Integration Time

Time Delayed Integration. Fortunately, there is a solution that allows an increase in integration time while maintaining a fixed along-track offset angle, making it suitable for imaging large areas. This solution is time-delayed integration (TDI), and it is commonly employed on higher resolution imaging satellites, including IKONOS II and QuickBird 2. The method uses multiple (typically 4, 8, 16, 32 or 64) lines of detectors in the scan direction. IKONOS II has a CCD array with 32 lines of detectors stacked one behind the other in the along-track direction. Using a delay-and-sum process, the allowable integration time is increased by a factor of 32 to an acceptable value of 3.84 ms for the IKONOS orbit. Thus this approach has the effect of multiplying the integration time by the number of detector lines in TDI.
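A minimal sketch of that scaling, using only the IKONOS II numbers quoted above (the function name is illustrative):

```python
def tdi_integration_time(gsd_m, v_g_km_s, tdi_lines):
    """Allowable integration time with TDI: the single-line dwell time
    (GSD / ground speed) multiplied by the number of TDI lines."""
    single_line_s = gsd_m / (v_g_km_s * 1000.0)
    return single_line_s * tdi_lines

# IKONOS II figures from the text: 0.82 m GSD, 6.79 km/s ground speed.
print(tdi_integration_time(0.82, 6.79, 1))    # ~0.12 ms for a single line
print(tdi_integration_time(0.82, 6.79, 32))   # ~3.9 ms with 32 lines (text quotes 3.84 ms)
```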

The process is described in Figure 6 for the simplified case of a sensor translating at a velocity v over a flat Earth with four lines of detectors in TDI. The size of the ground patch Δ is equal to the projection of a single detector on the ground. The time required for the vehicle to translate a distance Δ is equal to Δ/v. The vehicle is shown in four positions as it translates above the fixed ground patch. The positions are separated in time by Δ/v. During the time interval t_0 to t_0 + Δ/v, the first detector on the left scans across the ground patch. At t_0 + Δ/v, the accumulated signal from detector 1 is shifted to detector 2. In the time interval t_0 + Δ/v to t_0 + 2Δ/v, the second detector scans across the ground patch and adds its accumulated signal to that shifted from the first detector. This process is repeated until t_0 + 4Δ/v, when the fourth detector has the total accumulated signal from all four detectors having viewed the same ground spot. At that time the signal from detector four is shifted to a buffer and read out. Because the magnitude of the signal is equivalent to that which would have been collected by a single detector translating at one-fourth the speed, TDI allows an increase in line rate without reducing the total integration time, which in this example is 4Δ/v. Moreover, the readout noise is not increased, because only the summed value from the last line of detectors is read out.

Figure 6. Time Delayed Integration (TDI)

An inherent penalty for line scan systems (with or without TDI) is one pixel of smear in the along-track direction due to the translation of the detector footprint during the integration time. While TDI is the method of choice for most high resolution pushbroom imaging systems, certain limitations arise when viewing fore or aft, or to either side of the satellite local vertical. These limitations are described below.

Orbital Smear Due to Range Change When Viewing Fore or Aft. When viewing fore or aft, the small change in range between the first and last lines in the TDI chain introduces a cross-track smear, which increases linearly with the number of cross-track pixels. This situation is depicted in Figure 7 for the case of viewing forward of the sub-nadir ground track. Because the range at the end of the integration interval is shorter than the range at the beginning of the interval, the projection of the array in the cross-track direction is foreshortened. The magnitude of the foreshortening, expressed in terms of the number of pixels, is given by

Smear (pixels) = (N/2) · (v·t_int / r) · sin θ

where v·t_int·sin θ is the change in range for a circular orbit. Note that when the smear is expressed in pixels, it is independent of resolution. This is a form of what is called orbital smear; that is, smear due to changes in the LOS caused by orbital motion, as opposed to smear due to pointing errors of the sensor.

Figure 7. Range Smear With TDI When Viewing Forward. The slant range for the last TDI line (pink) is shorter than for the first TDI line (blue). As a result the width of the footprint narrows. The opposite is true when viewing aft.

Two examples are presented in Table 1. IKONOS II has the capability to point 45° fore and aft. When doing so, the smear at the ends of the array amounts to 0.14 pixels. Range smear is also a consideration for the lower flying QuickBird 2, which has the capability of pointing 30° fore and aft. Here the smear is 0.27 pixels, mainly because of the large number of elements along the array. In both cases the integration time is based on 32 elements in TDI. Even with 64 lines of TDI, the range smear would have been tolerable.
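A minimal sketch of this smear relation follows. The array width, TDI integration time and pointing offset are the IKONOS II figures quoted above; the orbital speed (~7.5 km/s) and slant range (~1020 km) are estimates for a 682-km orbit viewed 45° off nadir, not values taken from Table 1.

```python
import math

def fore_aft_range_smear_pixels(n_pixels, v_km_s, t_int_s, r_km, theta_deg):
    """Cross-track smear in pixels at the array ends when viewing fore or aft:
    (N/2) * (v * t_int / r) * sin(theta)."""
    return 0.5 * n_pixels * (v_km_s * t_int_s / r_km) * math.sin(math.radians(theta_deg))

# IKONOS II-like case: N = 13,500 pixels, 32-line TDI (t_int ~ 3.84 ms), 45 deg offset.
# Result is ~0.13-0.14 pixels, consistent with the 0.14 quoted in the text.
print(round(fore_aft_range_smear_pixels(13_500, 7.5, 3.84e-3, 1020.0, 45.0), 3))
```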

TABLE 1. Cross-Track Smear Due to Range Change for IKONOS II and QuickBird 2

                            IKONOS II    QuickBird 2
  Altitude, h (km)          682          —
  Offset, θ (deg)           45           30
  Range, r (km)             —            —
  Velocity, v (km/s)        —            —
  Pixels, N                 13,500       28,000
  Integration time (ms)     —            —
  Smear (pixels)            0.14         0.27

Clocking Error When Side Viewing. In a pushbroom TDI system the detector line rate is set so as to synchronize each detector in the center column of the N detectors across the swath (Figure 8). Unfortunately, the ground projections of the detectors to the left of the center column are progressively reduced in the direction of scan, and those to the right of the center column are progressively increased, all due to differences in slant range. As a result there is a progressively increasing smear error, which depends on the number of lines M in TDI and the number of cross-track detectors N.

Figure 8. Limitation on TDI When Side Viewing. With a single line rate, only the center column of detectors is precisely synchronized so that each detector views the same portion of the ground. The rest of the columns are slightly mismatched.

The maximum error occurs for the last detector at each end of the array, and takes a fairly simple form when expressed in terms of pixels, as presented below:

Clocking smear (pixels) = (N/2) · M · IFOV · tan ψ

For low altitudes, the zenith angle ψ is approximately equal to the offset pointing angle θ. The clocking smears for IKONOS II and QuickBird 2 are listed in Table 2 for side offset angles equal to the forward offsets used in Table 1. The effects are somewhat more significant. Since

M · IFOV = v · t_int / r

it follows that the clocking error is equivalent to the so-called azimuth orbital smear error associated with a frame camera (discussed later in this section).

TABLE 2. Cross-Track Smear Due to Clocking Error for IKONOS II and QuickBird 2

                            IKONOS II    QuickBird 2
  Altitude, h (km)          682          —
  Offset, θ (deg)           45           30
  Zenith angle, ψ (deg)     —            —
  Pixels in TDI, M          32           32
  Pixels, N                 13,500       28,000
  IFOV (µr)                 —            —
  Smear (pixels)            —            —

Yaw Bias for Earth Rate. With detectors in TDI it is also important to bias the sensor yaw angle as a function of the satellite latitude to account for Earth rotation. For a nadir-pointing satellite in a circular orbit, the required yaw angle is

γ = tan⁻¹[ (Ω_E / n) · sqrt( cos²λ_sat − cos²i ) ]

where i is the orbit inclination, λ_sat is the satellite latitude, n is the mean orbit rate and Ω_E is the rotation rate of the Earth (73 µr/s). This effect can be as large as four degrees for a high-inclination LEO satellite when crossing the Equator. The slowly changing yaw angle can be accommodated by yawing the spacecraft.
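A minimal numerical sketch of this yaw-bias relation, assuming the reconstructed formula above and representative (not source-given) orbit parameters of i = 98° and n ≈ 1.06e-3 rad/s:

```python
import math

OMEGA_E = 7.2921159e-5   # Earth rotation rate, rad/s (~73 microrad/s)

def yaw_bias_deg(lat_deg, incl_deg, n_rad_s):
    """Yaw bias that aligns the TDI columns with the relative ground-track
    motion: gamma = atan[(Omega_E / n) * sqrt(cos^2(lat) - cos^2(i))]."""
    lat, inc = math.radians(lat_deg), math.radians(incl_deg)
    arg = max(math.cos(lat) ** 2 - math.cos(inc) ** 2, 0.0)
    return math.degrees(math.atan((OMEGA_E / n_rad_s) * math.sqrt(arg)))

# Near-polar LEO example: the bias peaks near 4 deg at the Equator and shrinks
# toward the extreme latitudes of the ground track.
for lat in (0, 30, 60):
    print(lat, round(yaw_bias_deg(lat, 98.0, 1.06e-3), 2))
```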

Line Wander. One of the drawbacks of pushbroom imaging is its susceptibility to line wander due to low-frequency vehicle oscillations. The problem occurs because, unlike with a frame camera, the entire scene is not imaged at the same time. For example, IKONOS II takes 1.6 seconds to image its standard 11-km by 11-km scene. This may seem short, but it is not so short that oscillations at frequencies of only a few hertz cannot play havoc. Attitude oscillations about the vehicle roll axis (typically aligned to the orbit velocity vector) introduce cross-track wander, which can make straight lines appear wavy. Attitude oscillations about the vehicle pitch axis (typically aligned normal to the orbit plane) are more difficult to see in the imagery, because the lines of detectors are oriented in the cross-track direction. However, for stereo imaging used to estimate ground elevation for topographic purposes, pitch oscillations can be just as serious as roll oscillations. Finally, vehicle oscillations about the yaw axis (typically aligned along the local vertical) produce a see-saw effect. Typical line wander signatures are depicted in Figure 9.

Figure 9. Line Wander Signatures (cross-track, along-track, and see-saw line wander)

Rather than place the burden on the attitude control system to limit the line wander to the order of a pixel, it is possible to sense the line wander and remove it by post-processing. The technique described below involves the comparison of time-separated images of the same scene. In most cases, the desired width of the pushbroom scan dictates the use of multiple staggered sub-arrays, as depicted in Figure 10a. Staggering is required because packaging limits just how close adjacent sub-arrays can be placed. By providing some overlap of adjacent sub-arrays in the cross-track direction, staggering can be used to provide time-separated images of the same scene, and this information can be used to estimate line wander. While the cross-track overlap is at one's disposal, the along-track separation is generally constrained by manufacturing considerations. This is significant because the along-track separation distance sets an upper cutoff on the temporal frequencies of the line wander that can be sensed.

Let Δ_AT be the along-track pixel projection on the ground and v_g the satellite ground speed. Then the pixel rate on the ground is v_g/Δ_AT. For a nadir-viewing IKONOS II, v_g/Δ_AT is about 8,300 pix/s. Let s be the along-track separation distance between arrays expressed in pixels. Then the time τ between along-track samples of the same scene is (s·Δ_AT)/v_g, and the sample frequency is v_g/(s·Δ_AT). As an example, postulate a minimum s of 400 pixels. Then for the IKONOS conditions, the maximum sampling frequency would be slightly over 20 Hz, with about 50 ms between images of the same scene. By the Nyquist criterion, this would be adequate to sample disturbance frequencies up to 10 Hz.

Generally, the LOS control system is designed to limit the allowable smear rate to a fraction of a pixel per integration time interval t_int. Suppose that limit were half a pixel, and assume a sinusoidal disturbance of the form a·sin(ωt) with peak rate a·ω. Then with the peak rate a·ω set equal to 0.5 pixel/t_int, we can determine the disturbance amplitude as a function of the

disturbance frequency f_d = ω/2π. Thus for a 3-ms integration time and a 20-Hz disturbance frequency, the peak amplitude could be as large as 2.65 pixels. If the goal were to limit the line wander to less than a pixel, it would be necessary to sample at a higher frequency than is achievable with the leading and trailing arrays separated by as much as 400 pixels. In this event one might consider the approach depicted in Figure 10b. It uses small outboard arrays specifically designed to have smaller along-track separation distances than are achievable with the larger sub-arrays.

Figure 10. Providing time-separated measurements of the same scene: a) exploiting overlap of the main sub-arrays; b) employing small outboard arrays

At each time interval τ a few lines of imagery are produced by both the leading and trailing overlap arrays. In the absence of line wander, an image produced by the trailing array should overlay exactly with the image produced by the leading array a time τ earlier. Deviations are expressed in terms of their along-track and cross-track image centroid differences. These deviations add cumulatively as a function of time. The concept is depicted in Figure 11. The deviation measurements themselves are assumed to have random errors σ. As such, the accuracy of the cumulative sum decreases as the square root of the number n of centroid measurements, as depicted in Figure 12. This process is performed at at least two points across the array, so that along-track line wander errors can be distinguished from line wander rotation errors about the LOS. Clearly, the accuracy of the rotation error about the LOS improves with the distance between the two measurement points on the same line. Because the predicted cross-track error must be the same across any single image line, the cumulative cross-track error is reduced. If there are two measurements along any line, the reduction in the cross-track error is the square root of two. This only applies to cross-track errors.
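Returning to the sampling example above, the sketch below evaluates the achievable sample frequency and the sinusoidal wander amplitude consistent with a given smear-rate budget; it uses the IKONOS-like numbers quoted in the text, and the reconciliation of the 2.65-pixel figure as a peak-to-peak excursion is my interpretation, not stated in the source.

```python
import math

def max_sample_freq_hz(v_g_m_s, gsd_m, sep_pixels):
    """Sampling frequency from two sub-arrays separated along track by
    sep_pixels: f_s = v_g / (s * GSD)."""
    return v_g_m_s / (sep_pixels * gsd_m)

def wander_amplitude_pixels(rate_limit_pix_per_tint, t_int_s, f_dist_hz):
    """Sinusoidal wander amplitude consistent with a smear-rate limit of
    rate_limit pixels per integration time: a = limit / (t_int * 2*pi*f)."""
    return rate_limit_pix_per_tint / (t_int_s * 2.0 * math.pi * f_dist_hz)

f_s = max_sample_freq_hz(6790.0, 0.82, 400)        # ~20.7 Hz -> Nyquist ~10 Hz
a = wander_amplitude_pixels(0.5, 3e-3, 20.0)       # ~1.3 pixels amplitude
print(round(f_s, 1), round(a, 2), round(2 * a, 2))  # ~2.65 pixels peak to peak
```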

Figure 11. Comparing Image Centroids Produced by Time-Separated Images. Step differences between image blocks from the forward and aft arrays, compared at intervals of τ, accumulate into a cumulative along-track and cross-track error as a function of time.

Figure 12. Random Walk Estimation Using Time-Separated Measurements. The incremental changes are Δy_n = y_n − y_(n−1); with a measurement error σ on each centroid difference, the variance of the cumulative estimate grows as σ_n² = n·σ².

The error estimation process can propagate from a given line in both the forward and aft directions to produce a block of imagery which, after correction, meets some given level of relative accuracy.

Cross-Track Scan

When the desired swath width perpendicular to the satellite flight path is large compared to the width of a linear array, and/or the allowable FOV of the telescope, an attractive alternative to pushbroom is to scan the detector array in the cross-track direction, i.e., perpendicular to the satellite ground track. A cross-track scan is more complicated than pushbroom because the camera LOS must be scanned perpendicular to the satellite ground track. The Landsat satellites used an oscillating mirror in front of the telescope. The Corona satellites used a pair of continuously rotating cameras. And the soon-to-be-launched WorldView satellites will slew the entire satellite to accomplish the scans. Also, in the case of high-resolution imaging, the forward motion of the satellite must be compensated, which is an added complication.

Cross-track scan can be characterized in two ways: whether the scan takes place in one or two directions, and whether or not the scan pattern can be repeated indefinitely. These two groupings are represented in Figure 13. The choice of scan direction affects the manner in which the forward motion of the satellite is compensated.

Figure 13. Cross-track scan categories

Consider scan duration. In general there is a limiting resolution beyond which a continuously repeating scan cannot be implemented. That limit depends on the number of detectors across the array, the achievable line rate, and the reset time between scans. Continuous cross-track scan is generally associated with large area coverage. The Landsat and Corona satellites were in this category. The second category is generally employed when resolutions higher than possible with a continuous scan are required. The scan lengths are usually shorter and are often offset to one side of the satellite ground track. It is anticipated that the WorldView satellites will operate in this fashion. Such scans are well suited to providing limited area coverage.

Bi-directional Scans. In a bi-directional scan the forward motion of the satellite is compensated by steering the LOS in a figure-eight pattern. On Landsat this is accomplished using a pair of oscillating mirrors within the optical train. For WorldView this will be accomplished by the satellite attitude control system. The bi-directional cross-track scan pattern is depicted in Figure 14. The top portion depicts the LOS motion relative to a frame that rotates at the orbital rate. The horizontal figure-eight path compensates for the forward motion of the satellite, thereby minimizing overlap of adjacent scans and avoiding image smear due to the forward motion of the satellite during the integration time. The lower portion shows the scan pattern of the LOS on the surface of the Earth. The cycle time is the sum of the scan time (the time to complete one cross-track scan) and the slew time (the time required between the end of one scan and the start of the next scan in the opposite direction), so Figure 14 shows two cycles. The effect of Earth rotation, being two orders of magnitude smaller than the scan rate, can be

ignored in the design of the sensor. However, it has to be taken into account when registering the imagery, which is either compressed or elongated depending on the scan direction.

Figure 14. Bi-Directional Cross-Track Scan

Uni-directional Scans. As the name implies, in a uni-directional scan all the imaging scan legs are in the same direction. Here we examine a uni-directional scan approach using a toggle mirror in front of the telescope. The desired LOS scan rate is LR·IFOV. Let θ denote the desired LOS scan angle. To execute a scan, the mirror rotates (say clockwise) at a constant rate LR·IFOV/2 through the angle θ/2. At the completion of the scan, the mirror returns to its starting position (Figure 15). No imaging occurs during this return interval. The cycle time (from the start of one scan to the start of the next) is

t_cycle = h·N·IFOV / v_g

and the scan time is t_scan = t_cycle − t_return.

Figure 15. Toggle Mirror Motion

In the uni-directional cross-track scan, forward motion compensation can be implemented in either of two ways: by pitching the vehicle, as required in the bi-directional scan, or by flying the satellite with a fixed yaw offset. In the true cross-track scan (Figure 16) the detector array and the toggle mirror rotation axis are aligned parallel to the satellite roll axis, and the vehicle is pitched at a constant rate while the mirror rotates so as to produce a ground scan perpendicular to the satellite ground track. At the completion of the scan, the toggle mirror returns to its starting position and the vehicle is pitched back before the start of the next scan. In the pseudo cross-track scan (Figure 17) the sensor (exclusive of the toggle mirror) is offset from the yaw axis by the fixed angle γ given by

tan γ = v_g / (h·LR·IFOV)

In most cases this angle is less than 10°. The toggle mirror axis stays parallel to the vehicle roll axis. The net effect is that the scan projection on the ground is parallel to the vector sum of the LOS scan velocity and the satellite ground velocity. This is termed a pseudo cross-track scan because the ground scan direction is not perpendicular to the satellite ground track. If the daylight passes occur on the southbound leg (as shown), this implementation favors scans that run from right to left, because they more closely align with the east-west direction, which is favored by most users. The pseudo cross-track scan is preferred over the true cross-track scan because a small static offset of the telescope is simpler than an active, time-varying pitch maneuver of the satellite.

Figure 16. True Uni-directional Cross-Track Scan

Figure 17. Pseudo Uni-directional Cross-Track Scan

Continuous Repeating Cross-Track Scan. Here we consider a special, but important, case of cross-track scan. The satellite is in a circular orbit. The scan is perpendicular to and centered along the satellite ground track. And the cycle time is equal to the scan width w at the point of closest approach divided by the satellite ground speed v_g. The cycle time is the sum of the scan time (the time to complete one scan) and the slew time (the time to reposition for the next scan):

t_cycle = t_scan + t_slew

This case is of particular interest because the cross-scan sequence can be continued indefinitely, since each scan pair (left to right and right to left) has the same geometry as the preceding scan pair. As such, this is a viable approach for providing global and/or wide area coverage.

The geometry normal to the scan is depicted in Figure 18. The constant scan rate ω_scan is determined by the detector line rate (LR) and the instantaneous field of view (IFOV):

ω_scan = LR·IFOV

As a consequence of the varying slant range, the GSD increases with scan offset from the centerline; however, in most cases this effect is secondary.

Figure 18. Cross-Scan Geometry

The minimum scan width w is equal to the product of the number of detectors N across the array (in the cross-scan direction), the IFOV and the orbit altitude h. Hence the cycle time is

t_cycle = w / v_g = h·N·IFOV / v_g

where v_g is the satellite ground track speed. The scan angle 2θ is given by

2θ = LR·IFOV·t_scan

The scan length along the surface of the Earth is given by L = 2·r_E·ϕ, where ϕ is determined by

17 [ ] & 1 ) ϕ = cos 1 ' ρ sin2 θ + sin 4 θ ( 1+ ρ 2 )sin 2 θ + ρ 2 * ( + ρ = r E r E + h, θ = LR IFOV t scan 2 For θ << 1 the scan length along the ground can be approximated by L = h LR IFOV t scan Then the ratio of the scan length to the scan width is L/w = LR t scan N = h LR IFOV v g LR N t slew For the limiting case where t slew is zero, this becomes To be efficient this ratio should be at least two. h LR IFOV ( L/w) limit = v g Figure 19 is a bird s eye view of this special case showing the relative positions of the satellite ground track and scan positions for a pair of bi-directional cross track scans. The depiction is simplified in that it does not show the increase in scan footprint and resulting scan overlaps. Figure 19. Continuous Cross Track Scanning Here we consider cross track design using the SPOT 5 satellite as a point of departure. Recall that SPOT 5 has a pair of HRG pushbroom (along track) sensors. Each sensor provides approximately 3-m resolution imagery (using the Supermode technique) over a nominal 60-km wide 17 R. R. Auelmann

Here we consider a cross-track design using the SPOT 5 satellite as a point of departure. Recall that SPOT 5 has a pair of HRG pushbroom (along-track) sensors. Each sensor provides approximately 3-m resolution imagery (using the Supermode technique) over a nominal 60-km wide swath from its 828-km altitude Sun-synchronous circular orbit. With the pair of sensors, complete global coverage can be provided every 26 days.

A single cross-track scan sensor, operating from the same orbit, can provide 3-m resolution imagery over a much wider swath. The scan is centered along the satellite ground track, and it is repeating in the sense that each bi-directional scan pair has the same relative geometry as the preceding scan pair. The design in Table 3 produces a 579-km swath, making it possible to provide complete global coverage from the SPOT orbit in only five days, a more than five-fold improvement over SPOT 5. The design is for a panchromatic band with a 0.45-µm lower cutoff. The scan angle 2θ required to produce the desired swath is 38.2°. The corresponding Earth central angle 2ϕ is 5.2°, and the slant range at the ends of the scan is 883 km. The postulated detector array has 21,600 x 64 elements, where the 64 elements are in TDI. The pixel pitch is 8 µm. In order to provide 3-m GSD at the extremes of the scan, the required focal length is 2.45 m. The corresponding IFOV is 3.27 µr. The cycle time is 8.87 sec. For an aggressive line rate LR of 40,000 lines per second the scan time is 5.11 sec, allowing 3.76 sec for the slew between scans. The effective duty cycle (scan time to cycle time) is a modest 61%. With 64 elements in TDI the integration time is an ample 1.6 ms for an optical Q of one, achieved through the use of a 0.21-m aperture diameter. The main differences between this sensor and the SPOT 5 HRG sensor are a significantly longer focal length, a smaller aperture diameter, and the use of TDI with the significantly higher LR required for cross-track scan.

TABLE 3. Hypothetical Cross-Scan Sensor in a SPOT Orbit

Orbit parameters:
  Complete orbits per day, I: 14
  Coverage repeat period, N: 26 solar days
  Residual, K: 5
  Orbits per day, Q = I + (K/N): 14.19
  Orbits per repeat period, R = NQ: 369
  Orbit period (min): —
  Mean orbit rate, n (rad/s): —
  Semi-major axis, a (km): 7206
  Orbit altitude (km): 828
  Sun-synchronous inclination (deg): —
  North-to-south Equatorial crossing: 10:30 AM
  Orbit speed (km/s): —
  Ground speed (km/s): —
  Earth rotation per orbit period (deg): —
  Distance between passes (km): 2824
  Daily step interval (km): 109

Cross-track viewing geometry:
  Earth-centered viewing offset, ϕ (deg): 2.6
  Range at end of scan, r (km): 883
  Off-nadir view angle, θ (deg): 19.1
  Zenith angle, ψ (deg): 21.7
  Cross-track swath width, end to end (km): 579

Cross-track sensor:
  Spectral band (µm): 0.45 to —
  Mean wavelength (µm): —
  Pixel pitch, p (µm): 8.00
  Aperture diameter, D (m): 0.21
  Focal length, f (m): 2.45
  IFOV (µrad): 3.27
  f/D: 11.7
  Optical Q: 1
  Detector array (along track), N (pixels): 21,600
  Detectors in cross-track TDI, M (pixels): 64
  Along-track FOV (mr): 70.5
  Allowable cycle time (s): 8.87
  Scan time (s): 5.11
  Slew time (s): 3.76
  Line rate (Klps): 40
  Max. integration time (ms): 1.60

Ground projection:
  Nadir GSD (m): 2.7
  r·IFOV (m): 2.9
  r·IFOV / cos ψ (m): 3.1
  GSD at each end of scan (m): 3.0
  Minimum along-track scan width (km): —

Full Rotation Scans. Starting in 1967 the CORONA KH-4B film return satellites provided imagery in strips over 200 km long, with best resolution (as measured by today's definition) comparable to that provided today by the IKONOS II commercial imaging satellite. It did this using a panoramic camera that rotated a full 360° to provide uni-directional cross-track scans. (Actually, the KH-4B had two counter-rotating cameras, one pointing forward and the other pointing aft, so as to provide stereo imagery.) Full rotation is attractive because it avoids the scan reversals required in the bi-directional scan approach. Also, all the scans are parallel, so forward motion compensation can be provided with a fixed yaw tilt of the payload, analogous to the pseudo cross-track scan implementation of Figure 17. (This advantage could not be exploited on the KH-4B because its two cameras rotated in opposite directions.)

Developing a digital version of the KH-4B that flies at a higher altitude (to provide several years of coverage) shouldn't be all that difficult. Wrong. For this full rotation approach (in its simplest implementation) the forward motion of the satellite ground track in the time τ to complete a full rotation must equal the cross-track width at the ground track crossing:

v_g·τ = h·N·IFOV

where h is the orbit altitude, v_g is the speed of the satellite ground track, IFOV is the instantaneous field of view, and N is the number of detectors in the cross-scan direction. Thus the rotation rate is Ω = 2π/τ and the corresponding line rate is LR = Ω/IFOV. From this it follows that the required LR can be expressed as

LR = 2π·v_g / (h·N·IFOV²)

Conversely, we can compute the required N for a given LR:

N = 2π·v_g / (h·LR·IFOV²)

In Table 4 we compare the KH-4B design parameters and performance at its 155-km perigee to hypothetical digital imaging satellite designs that fly at 450 km. For the moment concentrate on Design A (ignoring Designs A1 and A2). These provide a 0.79-m nadir GSD. The KH-4B scan angle is 70°, for which the scan length is 218 km. For the same end-of-scan GSD, the scan length for the digital design is 605 km because of the higher orbit. The corresponding scan angle is 67°. Design A has almost six times as many pixels in the cross-scan (along-track) direction as the KH-4B. This accounts for the fact that the required line rate for the digital system is about half that of the KH-4B, even though its IFOV is smaller (due to the higher orbit). On the KH-4B the film was stationary as the telescope scanned, so its extreme line rate (675 Klps) was only an indication of how fast the exposure slit at the back of the telescope translated across the film, and had no meaning as far as electronic readout. Not so with the digital system, for which 320 Klps is a true requirement. Currently the highest achievable CCD line rates are 40 Klps, but 80 Klps is within the realm of possibility. In this sense, the required digital line rate is at least a factor of four too high.
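A minimal sketch of the line-rate relation above. The altitude, GSD and quoted line rate are from the text; the ground speed (~7.1 km/s) and detector count (~100,000, consistent with the large-N value used later in the orbital smear example) are assumptions used only to illustrate the scaling.

```python
import math

def full_rotation_line_rate(v_g_m_s, h_m, n_pixels, ifov_rad):
    """Required line rate for a continuously rotating panoramic scanner:
    LR = 2*pi*v_g / (h * N * IFOV^2)."""
    return 2.0 * math.pi * v_g_m_s / (h_m * n_pixels * ifov_rad ** 2)

# Design A-like case: 0.79-m nadir GSD from 450 km -> IFOV ~ 1.76 urad.
ifov = 0.79 / 450e3
lr = full_rotation_line_rate(7.1e3, 450e3, 100_000, ifov)
print(round(lr / 1000))   # ~320 Klps, in line with the requirement quoted above
```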

The telescope for the digital design is much larger than the KH-4B telescope, mainly because of the higher orbit. The other reason, affecting the focal length, is the larger pixel size (8 µm versus ~3 µm for the KH-4B). Because of the larger size, it makes more sense to rotate a mirror in front of the telescope than to rotate the telescope itself. The rotating mirror would be elliptical in planform, with the minor axis equal to the aperture diameter D and the major axis a dependent on the LOS offset angle θ, approximately as

a ≈ √2·D / [ cos(θ/2) − sin(θ/2) ]

The use of a rotating mirror affords another advantage. By reflecting off both sides of the mirror, the rotation rate can be reduced by a factor of two, which reduces the required line rate from 320 Klps to 160 Klps, still a factor of two too large. This corresponds to Design A1.

TABLE 4. Continuous Rotation Cross-Track Scan Designs

                              Corona    Digital A   Digital A1   Digital A2
  Orbit altitude, h (km)      155       450         450          450
  Ratio, ρ                    —         —           —            —
  Ground speed, v_g (km/s)    —         —           —            —
  Telescope FOV (deg)         —         —           —            —
  N                           —         —           —            —
  IFOV (µr)                   —         —           —            —
  LR (Klps)                   675       320         160          80
  Nadir GSD (m)               —         0.79        0.79         0.79
  GSD at end of scan (m)      —         —           —            —
  Scan width, w (km)          —         —           —            —
  Cycle time, t_cycle (s)     —         —           —            —
  Rotation rate (rad/s)       —         —           —            —
  Scan time, t_scan (s)       —         —           —            —
  Scan efficiency             —         —           —            —
  Half scan angle, θ (deg)    35        33.5        —            —
  Earth central angle, ϕ (deg) —        —           —            —
  Zenith angle, ψ (deg)       —         —           —            —
  Scan length, L (km)         218       605         —            —
  L/w                         —         —           —            —
  Optical Q                   —         —           —            —
  Mean wavelength, λ (µm)     —         —           —            —
  Diameter, D (m)             —         —           —            —
  Pixel pitch, p (µm)         ~3        8           8            8
  Focal length, f (m)         —         —           —            —
  Detector array length (m)   —         —           —            —
  F#                          —         —           —            —
  Number of telescopes        —         1           1            2
  Mirror sides                NA        —           2            2
  Mirror major axis (m)       NA        —           —            —

We can carry this approach one step farther by adding a second telescope facing the first telescope and using the same rotating mirror, as depicted in Figure 20. This allows the rotation rate to be reduced by another factor of two, thereby reducing the required line rate to 80 Klps, which may be achievable. This corresponds to Design A2.

While the solution depicted in Figure 20 may be viable for a GSD of 0.79 m, provided a line rate of 80 Klps is achievable, it falls considerably short for a half-meter GSD. This would require increasing the line rate by the square of the nadir GSD ratio, to 200 Klps. However, there is another option: the use of an n-sided rotating drum mirror.

Figure 20. Two Opposing Sensors with a Double-Sided Rotating Mirror

A four-sided drum mirror (Figure 21) would reduce the required line rate by a factor of two compared to the rotating double-sided mirror, and a six-sided drum mirror would reduce the required line rate by a factor of three compared to the double-sided mirror. The problem, of course, is that the size of the drum mirror grows as more sides are added.

Figure 21. Use of a Rotating Drum Mirror

Figure 22 shows the relationships among the facet angle, facet length and overall drum size (represented by the circumscribed diameter D_drum). The drum diameter is linearly proportional to the ray bundle diameter D, defined by the sensor aperture diameter.

Figure 22. Drum Mirror Geometric Relationships

The drum size also depends on the scan angle, defined as ±θ_LOS with respect to the satellite ground track, though the relationship is complicated. Figure 23 depicts two positions of a single facet of a hexagonal drum mirror (though the following analysis applies to any number of facets). In the blue facet position the incoming ray bundle is vertical (i.e., it lies in the satellite orbit plane) and is centered on the facet. In the red position the facet has rotated through the angle θ_R such that the ray bundle is displaced to one edge of the facet. This defines the scan limit to one side (the left). A larger facet rotation would cause all or part of the ray bundle to fall off the mirror facet. This limit can be determined from the relationship

Δ = R_i/√2 − D/2,    R_i·sin(45° + θ_R) − Δ = (L/2)·sin(45° − θ_R)

from which

R_i / D = 1 / { √2·[ 1 + (cos θ_R − sin θ_R)·tan(θ_f/2) − (cos θ_R + sin θ_R) ] }

Finally, with D_drum = 2R_i/cos(θ_f/2) and θ_R = θ_LOS/2, we obtain the desired result:

D_drum / D = 2 / { √2·[ 1 + (cos(θ_LOS/2) − sin(θ_LOS/2))·tan(θ_f/2) − (cos(θ_LOS/2) + sin(θ_LOS/2)) ]·cos(θ_f/2) }

Solutions of this equation are presented in Table 5 for four drum mirror configurations.
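The sketch below evaluates the reconstructed drum-size relation for a few facet counts; it assumes θ_f = 360°/n and treats the flat double-sided mirror (n = 2) as a degenerate case outside the formula. The ±30° scan limit in the example is illustrative, not taken from Table 5.

```python
import math

def drum_diameter_ratio(n_facets, theta_los_deg):
    """Circumscribed drum diameter over beam diameter, D_drum / D, for an
    n-facet drum mirror scanning +/- theta_LOS (reconstructed relation)."""
    tf = 2.0 * math.pi / n_facets            # facet angle
    tr = math.radians(theta_los_deg) / 2.0   # facet rotation at the scan limit
    c, s = math.cos(tr), math.sin(tr)
    denom = (1.0 + (c - s) * math.tan(tf / 2.0) - (c + s)) * math.cos(tf / 2.0)
    return math.sqrt(2.0) / denom

# Drum size grows rapidly with facet count for a +/-30 deg LOS scan (n >= 3).
for n in (4, 6, 8):
    print(n, round(drum_diameter_ratio(n, 30.0), 1))
```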

Figure 23. Geometric Construction to Determine the Scan Angle Limit

TABLE 5. Drum Mirror Size as a Function of Scan Angle Offset From Centerline. (Columns: number of facets n; facet angle θ_f (deg); D_drum/R_i; L/R_i; and D_drum/D versus LOS offset θ_LOS (deg).)

Limited Duration Scans. For higher resolution systems it becomes less and less practical to employ continuous cross-track scan. Indeed, continuous repeating cross-track scan is not viable for GSD ≤ 0.5 m without a very large number of detectors operating at very high line rates. However, cross-track scan can be a viable approach if the continuous geometry constraint is dropped, as discussed next. In this case the scan may be offset to the side of the satellite ground track and need not be limited to side looking as in the special case. Moreover, the viewing geometry, as depicted in Figure 24, can change dramatically over the course of a series of cross scans. The advantages are two-fold: areas well to the side of the satellite ground track can be imaged, and the scan lengths can be extended. The disadvantage is that only a finite number of scans can be completed before either an image quality or a LOS angle limit is hit; hence the approach is of limited time duration. Also, large variations in the GSD can result from the changes in viewing geometry.

Figure 24. Simplified Representation of the Limited Duration Cross-Track Scan

The rather complex viewing geometry is shown in Figure 25. The line of sight (LOS) slant range from the satellite position S to the ground point B is r. The angles ξ and η denote the great circle along-track and cross-track offsets of the scan center aim point B relative to the satellite sub-nadir point C. The orbit altitude is h and the radius of the Earth is r_E. The LOS orientation is defined by the zenith angle ψ and the offset angle β. The geometric relations that define the LOS are

cos ϕ = cos ξ · cos η
sin β = sin ξ / sin ϕ
ξ = ξ_o − (v_g / r_E)·t
r_S = r_E + h,   ρ = r_E / r_S,   r = r_S·sqrt( 1 + ρ² − 2ρ·cos ϕ )
ψ = cos⁻¹[ (r_S·cos ϕ − r_E) / r ]

Figure 25. Cross-Track Scan Viewing Geometry

The array projection normal to the LOS at B' is

bb' = N·r·IFOV

For cross-track scan the array projection aa' upon the surface of the Earth must be orthogonal to the great circle arc B'B normal to the satellite ground track. The instantaneous scan width is then

w = aa' = bb' / cos χ = N·r·IFOV / cos χ

where

χ = sin⁻¹( sin ψ · sin β )

The latter is derived from the spherical triangle p z' n with sides 90°−ψ, 90°−β and 90°−χ. To meet this condition the array must be rotated about the LOS such that the corner angle γ = ∠p z' n between the 90°−χ and 90°−ψ arcs is given by

γ = tan⁻¹[ 1 / (tan β · cos ψ) ]

The distance L depends on the length p_l of the pixel projection in the scan direction. To determine p_l we start with the fact that the ground sample distance is defined as

GSD = r·IFOV / sqrt(cos ψ) = sqrt( p_w·p_l )

where

p_w = r·IFOV / cos χ

is the derived pixel projection in the cross-scan direction. Given the definition of GSD it follows that the pixel projection in the scan direction is

p_l = r·IFOV·cos χ / cos ψ

Thus the scan length is given by the integral

L = LR·IFOV·∫₀ᵗ ( r·cos χ / cos ψ ) dt
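A minimal sketch of the LOS model above, evaluated for a single aim point; the offsets, altitude, array size and IFOV in the example call are illustrative rather than values from the text.

```python
import math

R_E = 6378.137  # km

def los_geometry(xi_rad, eta_rad, h_km, n_pixels, ifov_rad):
    """LOS geometry for a scan aim point offset xi along track and eta cross
    track (great-circle angles) from the satellite sub-nadir point."""
    r_s = R_E + h_km
    rho = R_E / r_s
    phi = math.acos(math.cos(xi_rad) * math.cos(eta_rad))           # total offset arc
    beta = math.asin(math.sin(xi_rad) / math.sin(phi))               # LOS azimuth offset
    r = r_s * math.sqrt(1.0 + rho ** 2 - 2.0 * rho * math.cos(phi))  # slant range
    psi = math.acos((r_s * math.cos(phi) - R_E) / r)                 # zenith angle
    chi = math.asin(math.sin(psi) * math.sin(beta))                  # array tilt
    gamma = math.atan2(1.0, math.tan(beta) * math.cos(psi))          # rotation about the LOS
    w = n_pixels * r * ifov_rad / math.cos(chi)                      # instantaneous scan width
    gsd = r * ifov_rad / math.sqrt(math.cos(psi))                    # geometric-mean GSD
    return dict(phi=phi, beta=beta, r=r, psi=psi, chi=chi, gamma=gamma, w=w, gsd=gsd)

# Aim point 5 deg ahead and 3 deg to the side, seen from 700 km with a 13,500-pixel array.
print(los_geometry(math.radians(5), math.radians(3), 700.0, 13_500, 3.3e-6))
```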

East-West Scans. Consider now a variation on cross-track scan. Most visible imaging satellites fly in low altitude, circular, Sun-synchronous orbits. Because such orbits are nearly polar, cross-track scans are approximately in the east-west or west-east direction. Tasking is simplified if the scans are precisely in the east-west or west-east directions, which amounts to scanning along a line of constant latitude. This is a special case of rhumb line scanning (i.e., scanning at a constant bearing angle), which, unlike cross-track scanning, is not along great circle arcs. The geometry for east-west scanning is slightly different from cross-track scanning. The main differences lie in the computation of ϕ and β. Also, because the scans are at constant latitude, it is simple to accommodate the effects of Earth rotation.

Figure 26 depicts a portion of a spherical Earth where O is the center of the Earth and N is the North Pole. The satellite ground track is along the great circle A A C (red line). The great circle arc NC and the satellite ground track are stationary; that is, they do not rotate with the Earth. Point A denotes the sub-nadir point of the satellite at time t. It is located by the right spherical angles λ_A and Λ_A. As in cross-track scan, the arc length ξ, measured from A to C, is a linear function of time:

ξ = ξ_o − (v_g / r_E)·t

Then for a specified orbit inclination i, we have

λ_A = sin⁻¹( sin i · sin ξ )
Λ_A = tan⁻¹( cos i · tan ξ )

Figure 26. Satellite Ground Track and Image Scan Pass Geometry

The scan is along the fixed-latitude λ_T arc B B_E T C, where B represents the start of the scan in non-rotating coordinates. Note that the constant latitude arc BC is not a great circle. At time t after the start of the scan, B has moved to B_E due to the rotation of the Earth, while the LOS aim point has moved to T. Accordingly, B_E T is the scan distance. The inertial position of T is defined by the right spherical angles λ_T and Λ_T. The latter is a time-varying function

Λ_T = Λ_To − Ω_E·t − L / (r_E·cos λ_T)

where Ω_E is the rotation rate of the Earth (about 0.004 deg/s), and L is the time-varying scan length. The great circle arc ϕ between A and T is defined by

ϕ = cos⁻¹[ sin λ_A·sin λ_T + cos λ_A·cos λ_T·cos(Λ_A + Λ_T) ]

From the spherical triangle T N A, we determine the angle β between the great circle arc ϕ and the constant latitude arc BC:

β = sin⁻¹[ (sin λ_A − sin λ_T·cos ϕ) / (cos λ_T·sin ϕ) ]

The rest of the model is identical to cross-track scan. This includes the equations for the slant range r, the zenith angle ψ, the angles χ and γ, the scan width w, and the scan length L.

Satellite Angular Rates. The body angular rates are of particular interest when the scan is produced by rotating the entire satellite, most likely using control moment gyros. Here we present expressions for the angular velocity components ω_x, ω_y and ω_z of the satellite while scanning, where xyz is a body-fixed Cartesian frame of reference. (Other computations are required for the slew segments between scans.) The detector array is aligned parallel to the x (roll) axis, and the optical axis is parallel to the z (yaw) axis. The y (pitch) axis completes the triad. When imaging, the z-axis is along the LOS pointing toward point T on the ground, and the x-axis is oriented so that its projection on the surface of the Earth is orthogonal to the east-west direction. The p z' n spherical triangle is the same as that depicted in Figure 27 for the derivation of γ.

Figure 27 depicts the e n v Cartesian frame of reference with origin at the aim point T. The e-axis points due east, the n-axis points due north and the v-axis is along the local vertical. The p q v frame is obtained by rotating about v through the angle β. The p' q z' frame is obtained by rotating about the q-axis through the zenith angle ψ. The x y' z' frame is obtained by rotating about the z' axis through the angle γ. Observe that y' = −y and z' = −z; the y' and z' axes were introduced only as a convenience for representing the coordinate transformations. The corresponding transformation equations, expressed in matrix form, are as follows:

[ x ]    [  cos γ   sin γ   0 ] [ p' ]
[ y' ] = [ −sin γ   cos γ   0 ] [ q  ]
[ z' ]   [   0        0     1 ] [ z' ]

[ p' ]   [  cos ψ   0   −sin ψ ] [ p ]
[ q  ] = [   0      1     0    ] [ q ]
[ z' ]   [  sin ψ   0    cos ψ ] [ v ]

[ p ]    [  cos β   sin β   0 ] [ e ]
[ q ]  = [ −sin β   cos β   0 ] [ n ]
[ v ]    [   0        0     1 ] [ v ]

From this sequence of rotations we resolve the angular rates γ̇, ψ̇, (β̇ + Λ̇_T·sin λ_T) and Λ̇_T·cos λ_T into rates about the body x y' z' axes. The results are:

ω_x  = ψ̇·sin γ − (β̇ + Λ̇_T·sin λ_T)·cos γ·sin ψ + Λ̇_T·cos λ_T·(cos γ·cos ψ·sin β + sin γ·cos β)
ω_y' = ψ̇·cos γ + (β̇ + Λ̇_T·sin λ_T)·sin γ·sin ψ + Λ̇_T·cos λ_T·(cos γ·cos β − sin γ·cos ψ·sin β)
ω_z' = γ̇ + (β̇ + Λ̇_T·sin λ_T)·cos ψ + Λ̇_T·cos λ_T·sin ψ·sin β

Note that the target longitude rate is given by

Λ̇_T = −Ω_E − LR·IFOV·r·cos χ / (r_E·cos λ_T·cos ψ)

Finally, because the y-axis is opposite to y' and the z-axis is opposite to z', we have ω_y = −ω_y' and ω_z = −ω_z'.

Figure 27. Angular Rate Components Relative to the xyz Frame

Cross-Track Orbital Smear. Here we examine the orbital smear for a line-scan imaging system in which the scan direction is normal to the satellite ground track; that is, a cross-track or whiskbroom scan. In this case, orbital smear occurs when the scan path is biased either forward or aft of the satellite, and it manifests itself as either a contraction (when viewing forward) or an expansion (when viewing aft) of the cross-track ground projection. The smear is greatest at the point where the scan crosses the satellite ground track. It is convenient to separate the smear into two parts: (1) that due to the component of satellite velocity along the LOS, and (2) that due to the component of velocity normal to the LOS. For a circular orbit, these two velocity components are v·sin θ and v·cos θ, respectively. The smear due to the component of velocity v·sin θ along the LOS is given by

δs (pixels) = −(N/2)·(v·t_int / r)·sin θ

In other words, it is the smear due to the change in slant range r. The minus sign indicates that this smear component is a contraction, because the velocity component is toward T. Because the smear due to the component of velocity v·cos θ normal to the LOS is a bit more complicated to explain, we introduce Figure 28, from which we deduce the following:

Figure 28. Smear Due to the Component of Velocity Normal to the LOS

δs = [ a·sin ψ / cos²ψ ]·δψ

δs / s = δs / (a / cos ψ) = tan ψ·δψ

Since N/2 pixels span the distance a/cos ψ,

δs (pixels) = (N/2)·tan ψ·δψ,   δψ = (v·cos θ)·t_int / r

δs (pixels) = (N/2)·(v·t_int·cos θ / r)·tan ψ

This smear component is due to the change in LOS angle. The changes in range and LOS angle are correlated, so the combined effect is given by

Smear (pixels) = −(N/2)·(v·t_int / r)·(sin θ + cos θ·tan ψ)

The first term is due to the range change and the second term is due to the angle change. Because these are deterministic errors, they are added rather than root-sum-squared. The minus sign indicates a contraction of the ground footprint. Had the velocity been reversed, the smear would be an expansion.
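A minimal sketch of the combined smear relation; the orbital speed, slant range and zenith angle in the example are estimates for a 450-km orbit viewed 30° off nadir (in the spirit of the parametric study in Figure 29), not values quoted in the text.

```python
import math

def cross_track_orbital_smear_pixels(n_pixels, v_km_s, t_int_s, r_km,
                                     theta_deg, psi_deg):
    """Combined cross-track orbital smear at the array ends for a fore/aft-biased
    whiskbroom scan: -(N/2)(v t_int / r)(sin(theta) + cos(theta) tan(psi))."""
    th, ps = math.radians(theta_deg), math.radians(psi_deg)
    return -0.5 * n_pixels * (v_km_s * t_int_s / r_km) * (math.sin(th) + math.cos(th) * math.tan(ps))

# 64-line TDI at 40 Klps (t_int = 1.6 ms), N = 100,000, 30 deg off nadir from 450 km:
# about -1.2 pixels, i.e. a contraction slightly larger than one pixel.
print(round(cross_track_orbital_smear_pixels(100_000, 7.6, 1.6e-3, 525.0, 30.0, 32.4), 2))
```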

Consider now the orbital smear for five orbit altitudes (450 km, 545 km, 611 km, 657 km and 773 km), with LOS zenith angles ranging between 0° and 60°. For 64 elements in TDI and a line rate of 40 Klps, the integration time is 1.6 ms. Assume also a large N of 100,000. Results are shown in Figure 29. To limit orbital smear to less than a pixel, one can either accept degraded image quality toward the ends of the scan projection or reduce N.

Figure 29. Maximum Orbital Smear for Cross-Track Scan

Conical Scan

In conical scan the telescope LOS is maintained at a fixed offset angle θ with respect to the local vertical, and is rotated at a constant rate Ω about the local vertical so as to sweep out a cone. The scan width is chosen so that there is minimum ground overlap between successive scans as the satellite advances in its orbit. The main attraction of conical scan is its relatively high image duty cycle (the fraction of the total time spent imaging). Another advantage is that it affords two looks at the same ground point, separated by only minutes. However, conical scan is generally limited to lower resolution imaging (than, say, cross-track scan) because of the difficulty of compensating for the forward motion of the satellite ground track. The difficulty stems from the fact that the scan direction is continuously changing with respect to the satellite ground track.

Implementation. There are essentially two ways to implement a conical scan. The first (Figure 30) is to hard-mount the sensor to the satellite with the optical axis offset from the yaw axis. The satellite continuously rotates at the desired conical scan rate about its yaw axis, while the yaw axis is maintained in alignment with the local vertical. This approach places the scan burden on the satellite attitude control system. And if a de-spun antenna is used to transmit the imagery to the ground, there is the added complication of transferring the sensor data to the de-spun portion of the satellite.

Figure 30. Conical Scan with Spinning Satellite

An alternate implementation is to rotate the telescope relative to the satellite about an axis parallel to the nadir-pointing satellite yaw axis, while the line-scan detector array is hard-mounted to the satellite. It requires a K-mirror, aligned to the telescope rotation axis, to counter-rotate at half the telescope rate, so as to keep the projection of the detector array in object space in the plane defined by the LOS and the rotation axis. This is the approach proposed by Hughes Aircraft in an unsuccessful bid for the Defense Meteorological Satellite Program (circa 1990). The concept is depicted in Figure 31.

Figure 31. Conical Scan Sensor with Rotating Telescope

Scan Geometry. The detector array consists of M by N square detectors, where M is the number of detectors in the scan direction for time delayed integration (TDI) and N is the number of detectors in the cross-scan direction. The array field of view (FOV) in the cross-scan direction is equal to N times the detector instantaneous field of view (IFOV). The detector array sweeps out a swath of width

w_scan = r·FOV / cos ψ

projected on the surface of the Earth. Here r is the slant range from the satellite S to the target point T and ψ is the zenith angle at the target (see Figure 32).

Figure 32. Conical Scan Geometry

The product of the scan period P_scan and the velocity v_g of the satellite ground track is chosen so that w_scan = v_g·P_scan. In this way successive scans are laid down without gaps. To avoid excessive overlap to either side of the ground track, imaging only takes place within ±α of the satellite ground track, with the angle α defined by

α = sin⁻¹[ 2·r_E·ϕ / (2·r_E·ϕ + w_scan) ]

so that coverage is provided a distance r_E·ϕ to either side of the satellite ground track. The linear scan velocity (at the center of the scan FOV) is

v_scan = Ω·r_E·sin ϕ,   Ω = 2π / P_scan

The corresponding line rate is

LR = v_scan / (r·IFOV)

where IFOV is the detector instantaneous field of view.

Aside from the fact that conical scan is ill-suited to high resolution imaging (mainly because of uncompensated forward motion), the main shortfalls are: (1) all the imagery is collected at relatively low LOS elevation angles (large zenith angles); and (2) the aspect angle between the sensor, target and Sun varies widely. One result is that stereo imagery created by overlaying forward and aft images of the same scene may look strange, especially when viewing objects well to the side of the ground track.
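A minimal sketch of these conical-scan relations, evaluated at the SPOT-orbit design point developed in Tables 6 and 7; the ~6.6 km/s ground speed is an estimate for an 828-km orbit rather than a value quoted in the tables.

```python
import math

R_E = 6378.137  # km

def conical_scan(h_km, phi_deg, n_pixels, ifov_rad, v_g_km_s):
    """Basic conical-scan relations: slant range, zenith angle, swath width,
    rotation period, required line rate and side cutoff angle alpha."""
    phi = math.radians(phi_deg)
    r_s = R_E + h_km
    r = math.sqrt(R_E ** 2 + r_s ** 2 - 2.0 * R_E * r_s * math.cos(phi))   # slant range
    psi = math.acos((r_s * math.cos(phi) - R_E) / r)                        # zenith angle
    w_scan = n_pixels * ifov_rad * r / math.cos(psi)                        # swath width, km
    p_scan = w_scan / v_g_km_s                                              # rotation period, s
    omega = 2.0 * math.pi / p_scan                                          # rotation rate, rad/s
    v_scan = omega * R_E * math.sin(phi)                                    # ground scan speed, km/s
    lr = v_scan / (r * ifov_rad)                                            # line rate, lines/s
    alpha = math.asin(2 * R_E * phi / (2 * R_E * phi + w_scan))             # side cutoff angle
    return dict(r=round(r), psi=round(math.degrees(psi), 1), w_scan=round(w_scan),
                p_scan=round(p_scan, 1), line_rate=round(lr), alpha=round(math.degrees(alpha), 1))

# 828 km, phi = 12.5 deg, N = 28,000 detectors, IFOV = 3.7 urad, v_g ~ 6.6 km/s:
# reproduces roughly the 460-km swath, ~70-s rotation, ~20 Klps and ~59 deg cutoff.
print(conical_scan(828.0, 12.5, 28_000, 3.7e-6, 6.6))
```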

Low Resolution, Daily Revisit Design. Consider the hypothetical problem of providing once-daily global access at 10-m GSD (the same resolution provided by SPOT 1), and assume the SPOT 828-km altitude Sun-synchronous orbit (Table 6), for which the distance between successive passes is 2824 km along the Equator. Because of the orbit inclination, the maximum perpendicular distance between passes is 2792 km. To provide once-daily coverage the scan must be offset half this distance (1396 km), requiring an off-nadir scan angle θ of 54.7°. The corresponding zenith angle is 67.2°.

A conical scan sensor design that meets the GSD requirement without coverage gaps is presented in Table 7. With an 8-µm pixel pitch, a 2.18-m focal length is required to achieve the 10-m GSD. To avoid the need for satellite forward motion compensation, the integration time is limited to about 1 ms, so that the smear is limited to an acceptable 0.4 pixels. On the assumption that a 2-ms integration time is required to provide adequate SNR at an optical Q of 1, the aperture diameter is oversized to achieve an optical Q of 0.71 (since the required integration time is proportional to Q squared). This dictates an aperture diameter of 0.25 m. The number of detectors across the array (28,000) is chosen so as to provide the necessary scan width when operating at the maximum assumed line rate of 20,000 lines per second. This requires a telescope FOV of about 6° in the cross-scan direction. The rotation period is just over 70 sec.

TABLE 6. Orbit and Scan Geometry (SPOT orbit)

Orbit parameters:
  Complete orbits per day, I: 14
  Coverage repeat period, N: 26 solar days
  Residual, K: 5
  Orbits per day, Q = I + (K/N): 14.19
  Orbits per repeat period, R = NQ: 369
  Orbit period (min): —
  Mean orbit rate, n (rad/s): —
  Semi-major axis, a (km): 7206
  Orbit altitude (km): 828
  Inclination, i (deg): —
  Orbit speed (km/s): —
  Ground speed (km/s): —
  Earth rotation per orbit period (deg): —
  Distance between passes (km): 2824

Viewing geometry:
  Earth-centered viewing offset, ϕ (deg): 12.5
  Range, r (km): 1696
  Off-nadir view angle, θ (deg): 54.7
  Zenith angle, ψ (deg): 67.2
  r_E·sin ϕ (km): 1384
  Centerline offset (km): 1396

TABLE 7. Conical Scan Sensor Design (VNIR)

  Spectral band (µm): 0.4 to 0.9
  Mean wavelength, λ (µm): 0.65
  Aperture diameter, D (m): 0.25
  FWHM, λ/D (µr): 2.6
  Optical Q: 0.71
  Pixel pitch, p (µm): 8
  Focal length, f (m): 2.18
  F#: 8.72
  IFOV (µr): 3.7
  Detectors across, N: 28,000
  Array length (m): —
  Telescope FOV (deg): 6.06
  Rotation period (s): ~70
  Rotation rate, Ω (rad/s): —
  γ (µrad): 4.50
  Line rate, LR (Klps): 20
  Detectors in TDI: 20
  Integration time (ms): 1.01

Footprint:
  Resolution normal to LOS (m): 6.2
  GSD (m): 10
  Scan width, w_scan (km): 463
  Scan cutoff angle, α (deg): 59.1
  Forward motion smear (pixels): 0.41

Figure 33 shows the translating scan pattern over two rotations with the side cutoff angle α of 59°. The satellite is moving left to right. The 463-km scan width ensures there is no coverage gap between passes along the ground track. Coverage overlap increases with distance from the satellite ground track. The cutoffs are imposed to avoid excessive overlap at large side offsets. With the cutoffs, the scans are divided into forward and aft segments. Thus a ground point is viewed twice, once with the

forward scan and, minutes later, with the aft scan, but from different aspect angles. The relatively minor effect of Earth rotation is not shown.

Figure 33. Translating Conical Scan Path with Side Offset Cutoffs

Frame Imaging

Most digital cameras employ staring two-dimensional array formats, which expose the entire image plane simultaneously. In other words, they are frame cameras. Digital cameras for home use typically have CCD or CMOS detectors with between 4 and 10 million pixels, while larger arrays with over 16 million pixels, usually in square formats, have been commercially available to industry users since the mid 90s. Fairchild Imaging has a CCD array of almost 85 million pixels. The main attraction of frame imaging is the huge number of pixels that are imaged simultaneously, making frame cameras immune to the low frequency line wander that plagues line scan sensors and thereby maintaining near perfect pixel-to-pixel registration within a frame. Moreover, by partially overlapping adjacent frames it is possible to achieve a high degree of frame-to-frame registration. One complication is the need for a shutter when the detector array is a CCD.

Applications. Frame cameras are used extensively, though not exclusively, for aerial imaging. Aerial imaging can live with smaller swath widths because aircraft, unlike satellites, are relatively unconstrained in their flight patterns: they can fly back and forth across an area to map it. Also, because of their much lower speeds, aircraft frame cameras can often image without the need to back-scan while imaging.

Frame cameras are well suited for point target imaging, especially from very high altitude satellites, where large areas of the ground can be accessed with only small changes in the LOS. In some instances, the changes in LOS within the telescope FOV can be implemented using an internal steering mirror between the telescope and the focal plane. Imaging from a geosynchronous orbit would be a good application, except that the resolution would be very low unless the telescope were very large.

Frame cameras are less well suited than line scanners for large area collection from low altitude orbits. There are two main problems. The first is that it is difficult to implement the wide image swaths generally called for in wide area collection. With a line scan sensor, overlapping line arrays have small depth in the along-scan direction, which simplifies the design of the telescope: a large FOV in one direction is a lot easier to achieve than a large FOV in two directions. The relative difficulty associated with using staggered area arrays is depicted in Figure 34. The difficulty can be reduced when two-side butt-able arrays come on line. The second problem is the requirement for step-stare operation to compensate for the forward motion of the satellite. Without such compensation a low altitude frame camera would sustain roughly 7 m of smear for even a 1-ms integration time. To avoid this, the LOS must be held on the ground point for the integration time, and then stepped forward for the next image collect. This complication is avoided by a line scanner using TDI, with a penalty of only a single pixel of smear.

Figure 34. Area Arrays versus Staggered Line Scan Arrays

Orbital Smear. Unlike scanners, frame imagers usually require that the LOS be held on a fixed point on the ground during the finite integration time. Then, unlike for scanners, there is no scan smear at the center of the image, but there is orbital smear due to the changing aspect angle. The one exception is imaging from a geosynchronous orbit. The orbital smear geometry is depicted in Figure 35. The smear components are categorized according to the velocity component causing the smear. Range smear is due to the component of velocity v_η along the LOS. Zenith smear is due to the component of velocity v_ζ normal to the LOS and lying in the plane of the aim point local vertical and the LOS vector. Azimuth smear is due to the component of velocity v_x normal to both the aim point local vertical and the LOS vector. The nominal image footprint is shown in blue, and the red lines depict the smear distortion produced by each velocity component. Range smear causes an expansion (or contraction) in both directions. Zenith smear occurs when viewing fore or aft of the flight path, and the expansion (or contraction) is only in the direction of motion. Azimuth smear occurs when viewing to either side of the flight path and causes a skewing of the footprint. In all cases the orbital smear is zero at the aim point (the center of the footprint) and increases linearly toward the edge of the footprint. The smear is most conveniently expressed in pixels. The equations in Figure 35 assume a square array with N pixels on a side.
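The decomposition just described can be written as a short routine: project the satellite velocity onto the LOS direction, onto the direction normal to the LOS in the plane of the LOS and the aim point local vertical, and onto the direction normal to both. The frame choice and function name below are illustrative, not from the source.

```python
import numpy as np

def smear_velocity_components(v_sat, los_unit, up_aim):
    """Split the satellite velocity into range, zenith and azimuth components.

    v_sat    -- satellite velocity vector
    los_unit -- unit vector from the satellite to the aim point
    up_aim   -- local vertical (up) at the aim point
    All vectors are expressed in any common Cartesian frame.
    """
    los = np.asarray(los_unit, float) / np.linalg.norm(los_unit)
    up  = np.asarray(up_aim, float) / np.linalg.norm(up_aim)
    x_hat = np.cross(up, los)             # azimuth direction (normal to up and LOS)
    x_hat /= np.linalg.norm(x_hat)
    zeta_hat = np.cross(los, x_hat)       # normal to LOS, in the (up, LOS) plane
    v = np.asarray(v_sat, float)
    return np.dot(v, los), np.dot(v, zeta_hat), np.dot(v, x_hat)   # v_eta, v_zeta, v_x
```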

Range smear at the edge of the FOV (pixels) = (N/2)(v_η t_int / r)

Zenith smear at the edge of the FOV (pixels) = (N/2)(v_ζ t_int / r) tan ψ

Azimuth smear at the edge of the FOV (pixels) = (N/2)(v_x t_int / r) tan ψ

Figure 35. Orbital Smear Components for a Frame Camera (the figure shows the satellite S, the aim point T, the slant range r, the zenith angle ψ, the aim point local vertical, the array footprint, and the velocity components v_η, v_ζ and v_x)

Consider the two limiting cases of viewing in the orbit plane and viewing directly to the side of the orbit plane, as depicted in Figure 36. The angle between the satellite velocity vector and the satellite local horizontal, measured in the orbit plane, is φ; as before, ψ is the zenith angle at the aim point and ϕ is the Earth-centered offset to the aim point. When the LOS points in the orbit plane, the smear components are given by

Range smear (pixels) = (N/2)(t_int v / r) sin(φ - ψ + ϕ)

Zenith smear (pixels) = (N/2)(t_int v / r) tan ψ cos(φ - ψ + ϕ)

and the azimuth smear is zero. When viewing directly to the side of the orbit plane, the smear components are

Range smear (pixels) = (N/2)(t_int v / r) sin φ cos(ψ - ϕ)

Zenith smear (pixels) = (N/2)(t_int v / r) tan ψ sin φ sin(ψ - ϕ)

Azimuth smear (pixels) = (N/2)(t_int v / r) tan ψ cos φ
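A direct transcription of these limiting-case expressions, for checking particular geometries; angles are in radians, and the function names and argument conventions are mine:

```python
from math import sin, cos, tan

def smear_in_plane(N, t_int, v, r, psi, phi_e, fpa):
    """Edge-of-FOV smear (pixels) with the LOS in the orbit plane.
    psi: zenith angle, phi_e: Earth-centered offset, fpa: flight path angle."""
    k = 0.5 * N * t_int * v / r
    rng    = k * sin(fpa - psi + phi_e)
    zenith = k * tan(psi) * cos(fpa - psi + phi_e)
    return rng, zenith, 0.0                      # azimuth smear is zero

def smear_side(N, t_int, v, r, psi, phi_e, fpa):
    """Edge-of-FOV smear (pixels) with the LOS directly to the side."""
    k = 0.5 * N * t_int * v / r
    rng     = k * sin(fpa) * cos(psi - phi_e)
    zenith  = k * tan(psi) * sin(fpa) * sin(psi - phi_e)
    azimuth = k * tan(psi) * cos(fpa)
    return rng, zenith, azimuth
```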

Figure 36. Two Limiting Cases

For a circular orbit φ is zero, and the smear components reduce to

Range smear (pixels) = (N/2)(t_int v / r) sin(ϕ - ψ)

Zenith smear (pixels) = (N/2)(t_int v / r) tan ψ cos(ϕ - ψ)

when viewing in the orbit plane, and to

Azimuth smear (pixels) = (N/2)(t_int v / r) tan ψ

when viewing directly to the side.
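As a numerical illustration of the circular-orbit forms, take the Table 6 viewing geometry (ψ = 67.2°, ϕ = 12.5°, r = 1696 km); the orbital speed, array size and integration time below are assumed values, not from the source.

```python
from math import sin, cos, tan, radians

# Circular orbit, Table 6 viewing geometry; the remaining inputs are assumptions.
N, t_int, v, r = 9216, 10e-3, 7.44e3, 1.696e6   # ~85-Mpixel square array, 10-ms exposure
psi, phi_e = radians(67.2), radians(12.5)

k = 0.5 * N * t_int * v / r
print(f"in-plane : range {k * sin(phi_e - psi):+.2f} px, zenith {k * tan(psi) * cos(phi_e - psi):.2f} px")
print(f"side view: azimuth {k * tan(psi):.2f} px")
```

Even for this deliberately long 10-ms exposure, the edge-of-FOV orbital smear stays below a pixel for these assumed inputs, in contrast with the several meters of image motion that step-stare compensation removes.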
