Fast time domain beamforming for synthetic aperture sonar


Fast time domain beamforming for synthetic aperture sonar

Nina Ødegaard
University of Oslo
1st December 2004

Thesis presented for the degree of Master of Science


Contents

Abstract
Acknowledgments
List of acronyms
List of symbols
1 Introduction
   1.1 Background
   1.2 Problem to be addressed
   1.3 Contributions made by this thesis
   1.4 Key results
   1.5 Thesis outline
2 Synthetic aperture sonar (SAS) fundamentals
   2.1 SAS imaging principle
   2.2 SAS image reconstruction
   2.3 Sampling constraint
   2.4 Multielement receiver systems
   2.5 Phase centre approximation (PCA)
   2.6 Motion
   2.7 Image quality
   2.8 SAS signal processing overview
3 Beamforming
   3.1 Time domain beamforming (TDB)
   3.2 Backprojection (BP)
   3.3 Dynamic focusing (DF)
   3.4 Fast time domain methods
   3.5 Applications across disciplines
       Radar
       Seismology
       Computed tomography (CT)
       Medical ultrasound
4 Fast Factorized Back-Projection (FFBP)
   Principle
   Details
   Error analysis
5 FFBP performance
   Simulation setup
   Results
   Processing load details
   Quality
   Remarks
6 Experimental results from the HUGIN AUV
   The HUGIN family of AUVs
   The SAS program for HUGIN
   Experimental setup
   Imaging results
7 Conclusion
   Future work

Abstract

To this day the most widely used imaging methods in synthetic aperture sonar (SAS) are frequency domain inversion methods. This is due to the fact that these methods are faster than the corresponding time domain methods, as the fast Fourier transform can be utilized. However, methods using the fast Fourier transform make a number of approximations, for example that the sensor array elements can be modeled to lie on a straight line. In many applications, for instance in SAS imaging, this is seldom the case. It is inherent in the time domain inversion methods that they can handle an arbitrary array geometry, but these methods are also considerably slower than their frequency domain counterparts. This thesis discusses a relatively new and fast time domain inversion method based on factorizing the image scene and the sensor array. Ulander, Hellsten and Stenström were the first to publish their development of this method in [Ulander et al., 2003]. The algorithm introduces an error that can be tuned to suit the requirements of the current application: speed can be traded for image quality. FFBP was implemented and tested on both real and simulated data. The algorithm was shown to be fast and to give images of good quality for widebeam SAS systems. For narrowbeam SAS systems, the quality of the images was also good, but the speed was lower than with standard backprojection. The thesis is part of a research program at FFI (Norwegian defence research establishment) and Kongsberg Simrad AS that aims to make an interferometric synthetic aperture sonar for the HUGIN AUV.

Acknowledgments

I would like to thank Roy Edgar Hansen and Hayden John Callow at FFI (Norwegian defence research establishment) and Andreas Austeng at the Department of Informatics at the University of Oslo for all their help with the implementation of the algorithm and the writing of the thesis, as well as with providing data and literature.

List of acronyms

Acronym   Description
AUV       Autonomous underwater vehicle
BP        Backprojection
CAT       Computerized axial tomography
CBP       Convolution backprojection
CT        Computed tomography
CTD       Conductivity, temperature and depth
CVL       Correlation velocity log
DF        Dynamic focusing
DPCA      Displaced phase center antenna
FBP       Filtered backprojection
FFBP      Fast factorized backprojection
FFI       Norwegian defence research establishment
FFT       Fast Fourier transform
FHBP      Fast hierarchical backprojection
FRA       Fourier reconstruction algorithm
HUGIN     High-precision untethered geosurvey and inspection system
IFFT      Inverse fast Fourier transform
INS       Inertial navigation system

Table 1: List of acronyms, part 1

Acronym   Description
ISLR      Integrated side lobe ratio
KM        Kirchhoff migration
LBL       Long baseline
MI        Multilevel inversion
MR        Magnetic resonance
PCA       Phase center approximation
PF        Polar format
PM        Phase modulation
PSLR      Peak to side lobe ratio
RAS       Real Aperture Sonar
RD        Range-Doppler
RNN       Royal Norwegian Navy
Rx        Receiving element
SAR       Synthetic Aperture Radar
SAS       Synthetic Aperture Sonar
SDFFT     Sparse data fast Fourier transform
TDB       Time domain beamforming
Tx        Transmitting element
UUV       Untethered underwater vehicle

Table 2: List of acronyms, part 2

List of symbols

Type       Symbol         Description
Axes       u              Direction of travel for the platform
           t              Time
           x              Along-track, azimuth, cross-range or slow time domain
           y              Range or fast time domain
           z              Vertical axis
           k_u            Wavenumber in u-direction
           k_x            Wavenumber in x-direction
           k_y            Wavenumber in y-direction
           k_x(ω, k_u)    Wavenumber in x-direction as a function of ω and k_u
           k_y(ω, k_u)    Wavenumber in y-direction as a function of ω and k_u
Operators  F^{-1}         Inverse Fourier transform
           ∗              Convolution
           (·)^*          Complex conjugate
           O(·)           Order of magnitude

Table 3: List of axes and operators

Type         Symbol        Description
Data spaces  p(t)          Pulse as a function of time
             P(ω)          Pulse in the frequency domain
             s(t)          Received one-dimensional signal
             ss(t, u)      Received two-dimensional signal
             Ss(ω, u)      Received signal in the fast-time frequency domain
             SS(ω, k_u)    Two-dimensional Fourier transform of the received signal
             ss_M(t, u)    Matched filtered received signal
             gg(u, y)      Ideal target reflectivity function as a function of the sensor element's position in azimuth and range
             GG(k_x, k_y)  Two-dimensional Fourier transform of the target reflectivity function
             rect(t/τ)     Rectangular pulse of duration τ
             δ(x)          Delta impulse function
             a_s(ω, θ)     Sonar radiation pattern as a function of angular frequency and angle
             σ             Reflection coefficient for an omnidirectional target

Table 4: List of data spaces

Symbol  Units  Description
Δt      s      Delay for the received signal
Δt_s    s      Steering delay
Δt_f    s      Focusing delay
Δz      m      Distance between vertically displaced sensor elements
R       m      Range
c       m/s    Propagation speed
δx      m      Azimuth resolution
δy      m      Range resolution
λ       m      Wavelength
L       m      Length of aperture
D       m      Length between pings
M       -      Number of pings
N       -      Number of receivers
d       m      Azimuth sample spacing
h       m      Height
B       Hz     Pulse bandwidth
τ       s      Pulse length
T_s     s      Time of arrival for the first sample in the pulse
T_f     s      Time of arrival for the last sample in the pulse
ω       rad/s  Angular frequency
α       -      Chirp rate
f       Hz     Frequency
f_D     Hz     Doppler bandwidth
k       rad/m  Wavenumber
v       m/s    Platform velocity

Table 5: List of parameters, part 1

Symbol   Units  Description
Ω_θ      rad    Beam width
θ        rad    Angle from sensor element to target
Ω_x      rad/m  Target support in k_x
Ω_y      rad/m  Target support in k_y
u_Tx     m      Position of transmitter in u-direction
u_Rx     m      Position of receiver in u-direction
u_Tx-Rx  m      Position of the collocated transmitter-receiver pair in u-direction
u_p      m      Position of the platform in u-direction
Δ_tr     m      Distance between transmitter and receiver
Δ_cr     m      Distance between phase center and receiver
PRF      Hz     Pulse repetition frequency
δt       s      Sampling criterion for the signal
Q_R(j)   -      Factorization factor for the receivers in stage j
Q_I(j)   -      Factorization factor for the image in stage j
N_s      -      Number of stages
N_r      -      Number of range samples
Δu       m      Distance between centers of subapertures
X        -      Number of pixels in x-direction
Y        -      Number of pixels in y-direction
Δr       m      Range error
L_A      m      Length of subaperture
L_I      m      Length of subimage
γ_i      -      Unknown factor for stage i

Table 6: List of parameters, part 2

1 Introduction

1.1 Background

Sonar means SOund Navigation And Ranging. A sonar transmits acoustic waves. The waves travel until they hit an object, and are then reflected back to the sonar. By measuring the time it takes for a signal to return to the sonar, one can determine how far from the sonar the object is located. Sonar systems have been used for more than a century to detect underwater objects. Synthetic aperture sonar (SAS) is a technique that makes the resolution of the system independent of range. The sonar transmits and receives signals while moving along a line. The signals received from the various sonar positions are combined to form an image. The technique was not developed until the 1950s (by Wiley [Wiley, 1985]). It was quickly adopted in the radar world, but the application to sonar was slower. The first results of SAS experiments in a test tank were published in [Sato et al., 1973]. The traditional real aperture sonar (RAS) is still the most widely used. The purpose of using SAS over RAS is that one gets a much better resolution in the final image, because a long array is synthesized by moving the physical array along a line. Images of the seafloor are useful for many applications, e.g. to find objects such as mines and wrecks, and for making maps of the seafloor. There are a variety of methods for forming images out of SAS data. Most SAS systems these days use frequency domain methods; these methods are fast, but constrain how much the sonar platform motion can diverge from a straight line. At the other extreme are the time domain methods, which are slow, but can handle any platform motion. No books about SAS have been published to this day, but as the principles are the same as for synthetic aperture radar (SAR), good books to read are [Franceschetti and Lanari, 1999] and [Soumekh, 1999]. However, some PhD theses that cover the field of SAS well have been written, e.g. [Banks, 2002], [Callow, 2003] and [Hawkins, 1996].

1.2 Problem to be addressed

This master's thesis was written for the degree of Master of Physics at the University of Oslo. It addresses the field of synthetic aperture sonar image formation, and one imaging method in particular: fast factorized backprojection (FFBP). The goal of the thesis was to implement FFBP and test it on both real and simulated data. The real data were collected by an Edgetech SAS mounted on a HUGIN AUV (HUGIN autonomous underwater vehicle). FFBP is a time domain method. It is faster than standard backprojection (BP), and it doesn't face the limitations of the frequency domain methods when it comes to platform motion.

The tests were supposed to show how the algorithm performs with regard to speed and image quality for different datasets and different parameters of the algorithm. The algorithm should also be compared to standard backprojection to show that it is fast and that the quality of the images can be made adequate for most applications.

1.3 Contributions made by this thesis

Both FFBP and standard BP have been implemented in this process, as well as a simulator of SAS data. Tests of algorithm speed and image quality were carried out. Real data were provided by FFI (Norwegian defence research establishment).

1.4 Key results

The results are promising; they show that FFBP is in fact much faster than BP for certain datasets, while being able to handle an arbitrary array geometry, something fast methods working in the frequency domain cannot. The quality of the images can also be tuned to suit any requirement. However, higher image quality comes at the expense of increased computation time, so a trade-off must be made. There are some constraints to the algorithm. An approximation error controls the quality of the images, and this error depends on parameters of the sonar system as well as on the scene to be imaged. For narrowbeam SAS systems, such as the Edgetech SAS, the algorithm is not suitable. For widebeam SAS systems, however, the results show that FFBP can produce a good image in substantially shorter time than BP can, as well as supporting an arbitrary array geometry.

1.5 Thesis outline

Section 2 covers the basics of SAS and is intended to give the reader an overview of the field. Section 3 gives an overview of different beamformers, and concentrates especially on time domain beamformers. It also compares the advantages and disadvantages of time domain and frequency domain methods. A description of two time domain methods is given: BP and dynamic focusing (DF). The section also covers fast time domain methods, as well as listing applications of beamforming in other disciplines. Section 4 describes the main method of this thesis: fast factorized backprojection (FFBP).

Section 5 presents the results of testing FFBP on simulated data. Performance with respect to both speed and image quality is described. Section 6 presents the HUGIN AUV SAS system. It also shows the results of applying this implementation of FFBP to data from the Edgetech SAS. Section 7 gives the conclusion of the thesis, as well as some suggestions for future work related to FFBP.

Figure 1: Illustration of the geometry of a synthetic aperture sonar system. The platform carrying the sonar moves along the u-axis. It transmits pulses at the locations marked as pulse locations. The area marked as the real aperture footprint is the area the physical sonar can see at any given time. The figure is taken from [Hansen, 2001].

2 Synthetic aperture sonar (SAS) fundamentals

This section covers the basic principles of SAS imaging and is intended to give readers who are not familiar with this field an understanding of the terminology used and the assumptions made. This section will also show why a better resolution can be achieved with SAS than with RAS.

2.1 SAS imaging principle

To get a visual picture of how the SAS system works, it is instructive to study the imaging system shown in figure 1. The platform where the sensor elements are mounted follows a path in the x-direction (also called the along-track, azimuth, cross-range or slow time direction). We still refer to the direction the sonar is moving in as u, to separate the sonar location from the location of the scene. At the positions marked as pulse locations, the transmitting sensor element sends out a pulse p(t) of length τ. t is called the fast time domain due to the fact that the acoustic waves travel at a much higher speed than the platform does. See e.g. [Johnson and Dudgeon, 1993, chapter 2] for a more thorough study of the physics of propagating waves. The pulse is reflected by objects on the sea floor, and the receiving sensor element records it as s(t - Δt). Δt is related to the range from the transmitter to the object and back to the receiver.

Figure 2: A sonar system consisting of two vertically displaced receiver arrays. This configuration is useful for finding the height profile of the image scene. The figure is taken from [Hansen and Sæbø, 2003].

By assuming that the receiving and the transmitting element are at the same position, we have that

\Delta t = \frac{2R}{c}  (1)

where R is the range to the object in meters and c is the speed of the propagating wave (for acoustic waves in water, c ≈ 1500 m/s). Both the transmitter and the receiver have a radiation pattern that dictates how much of the imaging scene the sensor elements can see at any given time. This area is marked as the real aperture footprint in figure 1, and is what the sonar sees at each ping (the transmission and reception of a pulse is called a ping). By moving the sonar in the u-direction, data from several footprints can be combined to give a better resolution image than could be made using only one ping. This is the essence of SAS imaging. Because we are trying to image a 3-dimensional space using only 2 dimensions (x and y), there will be ambiguities as to the height of the imaged scene and as to which side of the sonar the signal is coming from. The first problem can be solved using a system of two (or more) vertically separated receivers (shown in figure 2 for two receivers). This gives us a chance to measure the phase difference between the same pulse received at the two receivers (see figure 3), and hence solve for the unknown height. The other ambiguity is solved by placing the sonar on one side of the platform as opposed to underneath it. This is referred to as a side-looking SAS system. There will also be shadowing and layover of objects, which can be removed by interferometry or multiple runs over the same area. However, the shadow can sometimes be useful in classifying objects on the seafloor. More information on these effects can be found in [Franceschetti and Lanari, 1999, chapter 1].

Figure 3: Two receivers are vertically separated by Δz. By measuring the phase difference between the same pulse received at the two receivers, the height of the reflector can be resolved. The figure is taken from [Hansen, 2001]. Some of the symbols might not be consistent with the text.

Figure 4: Range and azimuth resolution in a synthetic aperture sonar. The azimuth resolution of the synthetic aperture is given by the maximum synthetic aperture length, and is range independent. The figure is taken from [Hansen, 2001]. Some of the symbols might not be consistent with the text.

As stated above, the motivation for using SAS as opposed to RAS is the huge improvement in resolution. Resolution is defined as the minimum spacing we can have between two objects and still see that they are in fact two different objects. We define both azimuth and range resolution. See figure 4 for a visualization. From [Franceschetti and Lanari, 1999, page 25] we have that two targets can be resolved in azimuth by a RAS system only if they are not within the sonar beam Ω_θ at the same time. This means that

\delta x \approx \frac{R\lambda}{L}  (2)

where R is the range to the objects, λ is the wavelength corresponding to the center frequency and L is the length of the aperture. This is only an approximation of the true resolution, however, and it corresponds to the width of the mainlobe

of the beampattern of the aperture. This can also be derived from the Rayleigh criterion as stated in [Johnson and Dudgeon, 1993, page 65]. In a SAS system the effective length of the aperture is the synthesized length of the aperture, i.e. the length the aperture moves along the u-axis. This removes the dependence of the azimuth resolution on range, because we can have as long an effective aperture as we want. The azimuth resolution for a SAS system is

\delta x = \frac{L}{2}.  (3)

A smaller antenna gives a better resolution, due to the fact that the beam is wider. See e.g. [Franceschetti and Lanari, 1999, page 28] for details. After matched filtering in range, the range resolution is the same for both systems and is, according to [Franceschetti and Lanari, 1999, page 17], given by

\delta y = \frac{c}{2B}  (4)

where B is the pulse bandwidth, with B ≈ 1/τ where τ is the length of the pulse. From this we see that the only way to improve the range resolution is to use shorter pulses. Short pulses carry little transmitted power, which in turn reduces the practical range. This can be avoided by the use of long modulated pulses followed by pulse compression. There are various ways to code pulses. By far the most common pulse coding scheme is binary phase coding. In this scheme the phase of the signal is changed within the duration of the pulse, according to a binary code. One criterion for choosing a code is that it should produce low range sidelobes on decoding. The decoding, or compression, involves correlating the received signal with a replica of the transmitted code. Another example of pulse coding is a chirp pulse. The pulse is sent as

p(t) = \mathrm{rect}\left(\frac{t}{\tau}\right) e^{j\left(\omega t + \frac{\alpha t^2}{2}\right)}  (5)

where rect(t/τ) is a rectangular pulse of duration τ, ω = 2πf is the angular frequency with f the carrier frequency, and α is the chirp rate describing the rate at which the frequency of the chirp varies. It is related to the pulse bandwidth by ατ = 2πB. The phase of a chirp pulse varies quadratically with time while the frequency changes linearly with time, and we say that the chirp signal is a phase modulated (PM) signal. The derivative of the phase gives the instantaneous frequency of the signal. If α > 0, the instantaneous frequency increases with time, and the signal is said to be upsweep. In the opposite situation, it is said to be downsweep. In the reconstruction, the complex conjugate of the received signal is mixed with the phase of the transmitted chirp to get the compressed or deramped signal that we need. Chirps are described further in [Soumekh, 1999, page 23].
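To make eq. (5) and the idea of pulse compression concrete, here is a minimal Python sketch that generates a chirp, simulates one delayed echo and compresses it by correlating with the pulse replica. All parameter values (sample rate, bandwidth, carrier, range) are illustrative assumptions, not those of any particular sonar.

```python
import numpy as np

# Illustrative sketch only; all parameter values are assumed examples.
fs = 400e3                      # sample rate [Hz]
tau = 10e-3                     # pulse length tau [s]
B = 20e3                        # pulse bandwidth B [Hz]
f0 = 100e3                      # carrier frequency f [Hz]
alpha = 2 * np.pi * B / tau     # chirp rate, from alpha * tau = 2*pi*B

t = np.arange(0, tau, 1 / fs)
# eq. (5): the rect window is implicit in the finite time vector
p = np.exp(1j * (2 * np.pi * f0 * t + 0.5 * alpha * t**2))

# One echo from a reflector at range R (two-way delay 2R/c)
c, R = 1500.0, 75.0
n0 = int(round(2 * R / c * fs))
s = np.zeros(1 << 16, dtype=complex)
s[n0:n0 + len(p)] = 0.3 * p

# Pulse compression: convolution with the time-reversed conjugate p*(-t)
ss_M = np.convolve(s, np.conj(p[::-1]))
n_peak = int(np.argmax(np.abs(ss_M)))   # peak lands at n0 + len(p) - 1
print((n_peak - (len(p) - 1)) / fs)     # recovered two-way delay [s]
```

The compressed peak has width of roughly 1/B in time, which is the origin of the range resolution c/(2B) in eq. (4).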

Figure 5: A SAR operating in strip-map mode. The squint angle of the system is kept constant throughout the data collection period to illuminate a fixed strip in the (slant) range domain. The figure is taken from [Franceschetti and Lanari, 1999].

The only disadvantage of using pulse coding is added transmitter and receiver complexity, but the advantages generally outweigh the disadvantages, and so it is widely used. SAR has three different operating modes, whereas SAS only uses one of them: strip-map mode. However, it seems like a good idea to eventually start using all of them for SAS too, so a description of all the SAR modes is given below. The illustrations are also for SAR. The descriptions are based on information found in [Franceschetti and Lanari, 1999].

Strip-map SAR/SAS
In this configuration the radar/sonar maintains the same broadside radiation pattern throughout the data collection period on a fixed strip in the (slant) range domain. This means that the illuminated azimuth area varies from one ping to the next. A broadside radiation pattern means that the main lobe of the radiation pattern is perpendicular to the synthetic aperture. One can also have a squinted radiation pattern (the main lobe of the radiation pattern is not at broadside, but at some other angle) in strip-map mode, as long as the squint angle is kept constant throughout the data collection period. Strip-map SAR/SAS is mainly encountered in reconnaissance or surveillance problems. See figure 5. Broadside strip-map mode is assumed throughout this thesis.

Spot-light SAS

Figure 6: The beam of a spot-light SAR system is steered to point at the same area on the ground at all times during the data collection period. The figure is taken from [Franceschetti and Lanari, 1999].

This mode uses mechanical or electronic steering of the physical radar so that it always illuminates the same finite area of interest during the data collection period. It is often used to take a closer look at areas on the ground that look interesting for the current application after strip-map imaging. One could imagine that this would also be useful for sonar, e.g. when searching for mines. See figure 6.

Scan-mode SAS
In this mode the radar is continuously on, but the antenna beam is periodically moved to illuminate neighbouring subareas. See figure 7. The reason to use this mode is to overcome the limitation one faces in strip-map mode as to the range extension of the imaged area.

2.2 SAS image reconstruction

Most of the derivation in this subsection is based on information from [Soumekh, 1999]. Assume a reflector in the scene with an ideal reflectivity function given by

gg_n(u, y) = \sigma_n\, \delta(u - x_n)\, \delta(y - y_n)  (6)

that we want to measure. (x_n, y_n) is the location of the target and (u, y) is the position of the sensor element. σ_n is the reflection coefficient. This equation only

Figure 7: In scan-mode, the beam of the SAR is periodically moved to illuminate neighbouring subareas. The figure is taken from [Franceschetti and Lanari, 1999].

describes non-dispersive point reflectors, however. A pulse is transmitted, and when it is received it has been delayed and convolved with the target reflectivity function. The received signal is also subject to losses and absorption. Read more about these effects in e.g. [Urick, 1983]. The received signal can be written as

ss_n(t, u) = \int_x \int_y \frac{1}{R_n^2}\, gg_n(u, y)\, p\left(t - \frac{2}{c} R_n\right) dy\, dx  (7)

where R_n = \sqrt{(u - x_n)^2 + (y - y_n)^2}. By applying the Born approximation,¹ the total signal received from the point reflectors is

ss(t, u) = \sum_n ss_n(t, u) = \sum_n \frac{1}{R_n^2}\, \sigma_n\, p\left(t - \frac{2}{c} R_n\right).  (8)

If we also take the sonar amplitude pattern into account, we can write this expression in the frequency fast-time domain as

Ss(\omega, u) = P(\omega) \sum_n \frac{1}{R_n^2}\, \sigma_n\, a_s(\omega, \theta)\, e^{-j 2 k R_n}  (9)

where k is the wave number, k = 2π/λ, and a_s(ω, θ) is the amplitude pattern of the sonar. The latter is made up of the radiation patterns of both the receiving

¹ The signal received from one reflector can be modelled as independent of signals from other reflectors, hence superposition can be applied.

and the transmitting element. For a rectangular aperture it can be derived as follows (according to [Van Trees, 2002, page 72]):

a_s(\omega, \theta) = \frac{1}{L} \int_{-L/2}^{L/2} e^{j k_x x}\, dx = \frac{e^{j \frac{L}{2} k_x} - e^{-j \frac{L}{2} k_x}}{2 j \frac{L}{2} k_x} = \frac{\sin(\frac{L}{2} k_x)}{\frac{L}{2} k_x} = \mathrm{sinc}\left(\frac{L}{2} k_x\right).  (10)

Since k_x = \frac{2\pi}{\lambda} \sin(\theta), we get

a_s(\omega, \theta) = \mathrm{sinc}\left(\frac{L}{\lambda} \sin(\theta)\right)  (11)

where θ is the angle the incoming/outgoing signal makes with the aperture. With respect to a target in the scene, this angle can be defined as

\theta_n = \arctan\left(\frac{u - x_n}{y - y_n}\right).  (12)

We assume the aperture to be rectangular throughout this thesis. The support band of a_s(ω, θ) is a decreasing function of L, which implies that we can get a narrower beamwidth with a long sensor array. The Fourier transform of the received signal with respect to the slow time u is

SS(\omega, k_u) = P(\omega) \sum_n a_s(\omega, \theta)\, e^{-j k_x(\omega, k_u) x_n - j k_y(\omega, k_u) y_n} = P(\omega)\, GG[k_x(\omega, k_u), k_y(\omega, k_u)]  (13)

where

k_y(\omega, k_u) = \sqrt{4k^2 - k_u^2}  (14)

and

k_x(\omega, k_u) = k_u.  (15)

Because of the nonlinear nature of k_y, the mapping from the (k_x, k_y)-domain to the (x, y)-domain to get the final image is far from trivial. The process is called Stolt mapping and is shown in figure 8. The Stolt mapping is essentially a polar to cartesian conversion. The baseband version of these expressions is given by multiplying them with e^{j2kR} and e^{j k_x(\omega, k_u) x_n + j k_y(\omega, k_u) y_n}, for the time domain and frequency domain case, respectively. By working with basebanded data, we use lower frequencies, and thus interpolation is simplified. Also, the sample rate can be reduced. To find GG[k_x(\omega, k_u), k_y(\omega, k_u)] given only SS(\omega, k_u), one would intuitively state that

GG[k_x(\omega, k_u), k_y(\omega, k_u)] = \frac{SS(\omega, k_u)}{P(\omega)}.  (16)
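The aperture pattern of eqs. (10)-(11) is easy to evaluate numerically. A small sketch (with assumed example values for L and λ) shows the mainlobe narrowing as the aperture grows, consistent with the statement above:

```python
import numpy as np

# Evaluate a_s(omega, theta) = sinc(L/lambda * sin(theta)), eq. (11).
# L and lambda are assumed example values.
lam = 0.015                                  # wavelength [m]
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
for L in (0.05, 0.1, 0.2):                   # aperture lengths [m]
    a_s = np.sinc(L / lam * np.sin(theta))   # np.sinc(x) = sin(pi x)/(pi x)
    main = theta[np.abs(a_s) >= 0.5]         # half-amplitude mainlobe extent
    print(L, np.degrees(main[-1] - main[0])) # beamwidth shrinks as L grows
```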

Figure 8: This figure illustrates the process of Stolt mapping. Some of the symbols might not be consistent with the text. From the non-linear grid the spectral data lie on, we wish to transform them to lie within the dashed line. B_ky and B_kx limit the support bands of k_y and k_x, respectively. The figure is taken from [Pat, 2000].

This is, however, not possible, because the bandwidth of the pulse is finite. Instead we use

P^*(\omega)\, SS(\omega, k_u) = |P(\omega)|^2\, GG[k_x(\omega, k_u), k_y(\omega, k_u)]  (17)

gg(x, y) = F^{-1}\left[ GG(k_x(\omega, k_u), k_y(\omega, k_u)) \right].  (18)

We call this process matched filtering. It can also be implemented in the time domain through a convolution of ss(t, u) with p^*(-t). Matched filtering is a correlation of the received signal with the transmitted pulse. F^{-1}[|P(\omega)|^2] is called the point spread function of the imaging system. The inverse two-dimensional Fourier transform of the support region of the signal dictates the shape of the point spread function. To develop an analytical model for the point spread function, one could approximate the target support by a rectangle in the (k_x, k_y) domain with widths

\Omega_y = 2(k_{max} - k_{min})  (19)

in the k_y domain and

\Omega_x = \frac{8\pi}{L}  (20)

in the k_x domain. The model is composed of the following separable two-dimensional sinc functions in the (x, y) domain:

\mathrm{sinc}\left(\frac{\Omega_x x}{2\pi}\right) \mathrm{sinc}\left(\frac{\Omega_y y}{2\pi}\right).  (21)

This two-dimensional sinc pattern represents the point spread function of the strip-map SAS system for an ideal reflector. In reality, when we are not dealing with ideal reflectors, the shape of the point spread function will also be influenced by the azimuth window function, and the full point spread function is represented by the Dirichlet function. Read more about it in [Franceschetti and Lanari, 1999, page 28]. This function is periodic, and to avoid grating lobes in the point spread function, the azimuth sampling has to be chosen according to a certain criterion. This is discussed further in subsection 2.3. It is only in the narrowband case that approximating the target spectral support by a rectangular region is a good approximation. In wideband systems (where the carrier frequency is comparable with the bandwidth) the point spread function cannot be approximated by a two-dimensional sinc pattern. In this case the point spread function takes the shape of a funnel,² which must be calculated numerically. The system model presented in this section ignores the effects of refraction, multipath and the effects of the medium on the propagation speed. The interested reader can read more about these effects in e.g. [Urick, 1983]. The model also assumes that the platform is stationary during transmission and reception of the

² [Soumekh, 1999] discusses this further.

signals. This is the common stop-and-hop assumption, which usually works well for SAR. For SAS, however, this is not the case, and the reconstruction must compensate for the sonar's movement between transmission and reception. See e.g. [Hawkins, 1996] for a more thorough discussion of this assumption.

2.3 Sampling constraint

We need a certain sampling criterion in both the fast time and slow time domain. In the fast time domain, the time between transmitted pulses must be long enough to prevent the echo returns from the previous pulse from interfering with the current one. The first echoed signal sample arrives at the receiver at the fast time

T_s = \frac{2 R_{min}}{c}  (22)

and the last echoed signal sample arrives at

T_f = \frac{2 R_{max}}{c} + \tau  (23)

where R_min and R_max are the closest and farthest radial range distances of the range swath, respectively. This means that the pulse repetition frequency must satisfy

PRF \leq \frac{1}{T_f}.  (24)

Since the bandwidth of the signal is ±f, where f is the highest frequency in the basebanded data, the fast-time sample spacing should satisfy the following Nyquist criterion:

\delta t \leq \frac{1}{2f} = \frac{1}{B}  (25)

where B is the bandwidth of the signal. According to [Franceschetti and Lanari, 1999, page 28] the point spread function in azimuth has grating lobes with successive maxima at

\frac{2\pi d}{L}\, \bar{u} = q\pi, \quad q = 0, \pm 1, \pm 2, \ldots  (26)

where \bar{u} is u normalized with respect to the azimuth footprint. To completely avoid grating lobes in azimuth, we require that no grating lobes lie within the visible area of the point spread function (±90° in the azimuth signal extent, |\bar{u}| \leq 1/2). This leads to

\frac{2\pi d}{L} \cdot \frac{1}{2} \leq \frac{\pi}{2} \;\Rightarrow\; d \leq \frac{L}{2}  (27)

where d is the azimuth sample spacing.
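A short sketch that turns the constraints above into numbers; the swath, pulse and array values are assumed examples only:

```python
# Illustrative check of eqs. (23)-(25) and (27); all inputs are assumptions.
def sampling_limits(R_max, tau, B, L, c=1500.0):
    T_f = 2.0 * R_max / c + tau   # arrival time of the last echo sample, eq. (23)
    prf_max = 1.0 / T_f           # eq. (24): avoid range ambiguity
    dt_max = 1.0 / B              # eq. (25): fast-time Nyquist spacing
    d_max = L / 2.0               # eq. (27): max azimuth sample spacing
    return prf_max, dt_max, d_max

prf_max, dt_max, d_max = sampling_limits(R_max=150.0, tau=10e-3, B=20e3, L=1.2)
print(prf_max, dt_max, d_max)     # ~4.76 Hz, 50 us, 0.6 m for these inputs
```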

This sampling constraint in azimuth can also be derived from a Doppler concept. Without going into further detail, the Doppler bandwidth will be

f_D = \frac{2v}{L}  (28)

and we must have

f_D \leq \frac{v}{d} \;\Rightarrow\; d \leq \frac{L}{2}  (29)

where v is the platform velocity. See [Franceschetti and Lanari, 1999, page 30] for details. These equations give the lower and upper limits for the PRF as

\frac{2v}{L} \leq PRF \leq \frac{1}{T_f}.  (30)

Several processing steps must be performed in the reconstruction:

Pulse compression
The first step in the reconstruction is the pulse compression (or matched filtering). The raw data and the matched filter must be appropriately zero padded before the convolution to avoid aliasing caused by wrap-around effects. It is customary to perform the matched filtering in the frequency domain and make use of the fast Fourier transform (FFT). Read more about matched filtering in [Franceschetti and Lanari, 1999].

Baseband conversion
Baseband conversion of the fast-time data is applied by multiplying the signal with e^{j2kR}.

Hilbert transform
A Hilbert transform³ is used to make an analytic signal.

Filtering
To get a better resolution signal it is also customary to upsample and lowpass filter the signal.

³ See [Mitra, 2001, chapter 11.7] for more information on the Hilbert transform.

2.4 Multielement receiver systems

One of the significant differences between sonar and radar systems is that most synthetic aperture sonars travel faster, compared to the propagation speed of the waves used, than what is required to meet the spatial sampling criterion, and so the aperture is insufficiently sampled. When this happens we get aliasing in the k_u-domain and grating lobes in the final image.

Figure 9: An illustration of how phase center antenna elements are calculated from the receiver and transmitter positions. The figure is taken from [Hansen, 2001].

From the last section we have that the maximum distance the sensor element can move between transmitted pulses is

d = \frac{L}{2}.  (31)

For anything but a very short range sonar, this implies a very low platform speed. The sonar has severe trouble maintaining a straight line of travel at low speeds, and the quality of the image is lowered. Since an increase in PRF would imply a reduction of the range extension of the image, this is often not an option. According to [Gough and Hawkins, 1997] there are some circumstances where the grating lobes in the images can be minimized by a preprocessing step; however, it is almost always impossible to retrieve the missing data without errors, approximations or a priori information. The solution is to use a multiple receiver array: an array of receivers in the u-direction. A multielement receiving array is shown in figure 9 with a transmitter located between the first and the second receiver. The transmitter can also be located elsewhere. The effect of a multiple receiving array is an increase in the sampling frequency in azimuth without having to reduce the speed of the platform. Read more about multiple receiver arrays in [Pat, 2000].

2.5 Phase centre approximation (PCA)

The position midway between the transmitter (Tx) and each receiver (Rx) is called the phase center. An array of phase centers is called a phase center antenna (PCA). By replacing each Tx-Rx pair by the equivalent PCA element, the bistatic system is replaced with a monostatic one. The idea is illustrated in figure 9. PCA is used to make the reconstruction more computationally efficient: instead of computing the range from the transmitter to the target and from the target to the receiver, one only computes the range from the phase center to the target. Many algorithms are not developed for single-Tx/multiple-Rx systems, so it is useful to be able to convert a displaced Tx-Rx pair configuration to a collocated Tx-Rx pair configuration in order to use these algorithms. After we have synthesized an

array of collocated Tx-Rx pairs, we must rearrange the data space so that it corresponds to what we would have received from the collocated Tx-Rx pairs, and not the displaced Tx-Rx pairs used in reality. For each transmitted ping, N receivers sample the aperture at positions in azimuth equivalent to the positions of their phase centers. For each ping, we must find the positions of the phase centers in order to find the correct order of the sampling positions. It may be that some data lines have to be moved in front of or behind other data lines, as the order of the sampling positions has been changed. However, it is important that the newly constructed data space still fulfills the azimuth sampling constraint of d ≤ L/2. The larger the value of d, the less overlap we have between the collocated Tx-Rx pairs for successive pings. The maximum distance the platform can travel between successive pings equals the extension of the full phase center antenna. From [Pat, 2000] we can read that given the azimuth positions of the transmitter u_Tx and the receivers u_Rx_n relative to the platform, as well as the position of the platform u_p along the aperture, it is possible to calculate the position of each collocated transmitter-receiver pair u_Tx-Rx along the aperture as

u_{Tx\text{-}Rx}(m, n) = u_p(m) + \frac{u_{Rx}(n) - u_{Tx}}{2}  (32)

where m is the ping number and n is the receiver number. These positions are used in the data space conversion process. For the phase center approximation to be valid, the range from the transmitter via the target and back to the receiver has to diverge little from twice the range between the phase center and the target. This gives us the following constraint (taken from [Bellettini and Pinto, 2002]):

\frac{\Delta_{tr}^2}{4\lambda} \left(1 - \cos^2(\Omega_\theta / 2)\right) \leq \Delta_{cr}  (33)

where Δ_tr is the distance between the transmitter and the receiver, Ω_θ is the azimuth beamwidth and Δ_cr is the distance from the phase center to the receiver. The constraint can be a problem with wide beams, but the condition is usually satisfied if the transmitter is positioned such that it is in the middle of the receiving array during reception, as this gives the least strict constraint. Thus the transmitter is located to the right of the middle of the receiving array. Since the sonar is moving between transmission and reception, the criterion becomes range dependent.
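A minimal sketch of the phase-center bookkeeping of eq. (32); the array layout, ping count and spacing are assumptions chosen for illustration:

```python
import numpy as np

# Phase-center positions per eq. (32); illustrative geometry only.
N, d = 8, 0.05                          # receivers and receiver spacing [m]
u_Rx = np.arange(N) * d                 # receiver positions on the platform
u_Tx = 0.0                              # transmitter position on the platform
M = 100
u_p = np.arange(M) * (N * d / 2.0)      # platform position per ping (D = L/2)

# u_TxRx[m, n]: along-track position of the collocated (phase-center)
# element for ping m and receiver n
u_TxRx = u_p[:, None] + (u_Rx[None, :] - u_Tx) / 2.0

# consecutive phase centers within a ping are d/2 apart, i.e. well
# inside the azimuth sampling constraint
assert np.allclose(np.diff(u_TxRx, axis=1), d / 2.0)
```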

Figure 10: The different ways in which the platform can move. The figure is taken from [Hansen, 2001].

2.6 Motion

Synthetic aperture processing for sonar applications has not gained widespread use because of the complexity of the processing. Both erroneous platform motion and random disturbances within the medium cause phase errors in the data. Unless phase compensation (also called motion compensation) is applied, a meaningful image cannot be produced. Figure 10 shows the different ways in which the platform can move. An inertial navigation system (INS) can be used to obtain continuous position fixes of the platform, and one can thus perform phase compensation based on the absolute position of the platform. From [Sheriff, 1992] we can read that because of their inherent drift and bias errors, however, typical inertial navigation systems do not have the accuracy needed for high frequency phase compensation. And even if they had the needed accuracy, they still wouldn't be able to solve the entire phase compensation problem. They would only correct for erroneous platform motion, and not the problems associated with medium instability. An unstable medium complicates the prediction of propagation speed, and the rest of the processing chain might inherit these errors. [Sheriff, 1992] states that experiments have established that the ocean remains stable for periods of up to 8 minutes; however, the periods are discontinuous, and the instabilities increase with frequency. When we have platform movement normal to the intended line of motion, the travel time for the pulse from the transmitter to the receiver will be altered by unknown amounts, and data cannot be integrated correctly. Also, when the actual receiver positions along the aperture in the u-direction do not correspond to their assumed positions, due to different true and predicted forward velocities of the platform, we cannot make a focused image. However, movement normal to the intended line of motion is the most severe problem, and can be mitigated by the use of the displaced phase center antenna (DPCA) algorithm. The DPCA algorithm relies on ping-to-ping correlation of phase centers. The technique was first suggested by [Raven, 1978]. The signal received at a phase center is equivalent to that which would be received there by a single collocated Tx/Rx element. This concept is applied to a synthetic aperture to estimate and correct for phase errors from one receive position to the next. If the spacing between two

Figure 11: The displaced phase center antenna algorithm uses overlapping phase centers to see if the actual position of the receivers is the same as their predicted position. If not, a phase compensation factor can be applied to the data. The figure is taken from [Hansen, 2001].

receiving elements is d, the phase centers are separated by d/2. The pulse repetition interval can be adjusted such that it coincides with a forward movement of the platform equal to d/2. Consider an array of receivers labeled from 1 to N, with a transmitter located somewhere on the array. If the platform moves at a constant velocity along a straight line, the phase center at element no. 2's present position perfectly overlaps element no. 1's phase center location from the previous ping. Provided that the acoustic medium is constant between pings, the phase measured at element no. 2's present output and element no. 1's output from the previous pulse should be identical. If the platform does not move with constant linear motion, and/or the medium has changed between pings, a phase difference will appear at the outputs of the two elements. By measuring the phase difference, a correction factor can be applied to element no. 1's present output. Data corrected in this manner can be passed to a beamformer to produce an image. It is possible to reduce the speed of the platform, and thus get a bigger overlap between the phase centers, and in turn a better accuracy (illustrated in figure 11, where 4 PCAs overlap at each ping). This will, however, come at the expense of a reduced size of the imaged scene or longer survey times. It is also harder for the platform to maintain a straight line of motion at low speeds. [Bellettini and Pinto, 2002] have estimated the optimal overlap factor. The description of the DPCA algorithm is based on information taken from [Sheriff, 1992]. After the DPCA algorithm has been applied to the data, if compensation for azimuth motion errors has not been applied, the image may still be out of focus. However, it will be possible to apply autofocus techniques to remove the remaining errors in the image. There are two broad classes of autofocusing algorithms:

Those tracking the phase of point reflectors (phase gradient algorithms) or measuring the Doppler frequency bandwidth (power spectrum analysis).

Those measuring the geometrical displacement between separate looks or

the defocusing blur (map-drift, reflectivity displacement method or complex correlation in the frequency domain).

More on autofocus can be found in [Nahum, 1998] and [Callow, 2003, chapter 7].

2.7 Image quality

It is important to establish the SAS image quality to benchmark different beamformers, different navigation strategies etc. There are several measures used to find the quality of an image. Those described here are based on the imaging of one reflector. We look at the point spread functions in both azimuth and range, and extract various parameters. Some are described here:

Peak to side lobe ratio (PSLR)
PSLR is the ratio between the peak of the main lobe and that of the most prominent side lobe. There are four different measurements of PSLR: on both sides of the main lobe, in both azimuth and range. It is customary to report the largest value for both directions. It is not necessarily the lobe closest to the main lobe that is the most prominent one, so it can be convenient to report the two computed PSLRs corresponding to the lobe closest to the main lobe and to the absolute one. The sinc function that is the approximate point spread function of an ideal point reflector has a noticeable disadvantage: the first sidelobes are quite high (PSLR ≈ -13 dB). Such high sidelobes can produce artifacts in the image if high intensity reflectors are present. It is often desirable to reduce these sidelobes at the expense of geometric resolution. This can be achieved by introducing weighting functions. By applying a Hamming filter, the value drops to about -43 dB, but the width of the main lobe is increased.

Integrated side lobe ratio (ISLR)
ISLR is defined as the ratio between the energy of the main lobe and that integrated over all the side lobes. Because the extent of the scene is limited, we typically integrate over several (10 to 20) lobes on both sides of the main one. Its value is about -10 dB for the sinc function, and drops to approximately -20 dB with the use of a Hamming filter. There are also other definitions of ISLR, but we will stick with this one in this thesis (see e.g. [Martinez and Marchand, 1993] for an overview of other definitions). This information is taken from [Franceschetti and Lanari, 1999, page 112].
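Both measures are straightforward to estimate from a one-dimensional cut through a measured point spread function. The following rough sketch (with an assumed, simplified main-lobe detection via the first nulls) reproduces the approximately -13 dB PSLR and -10 dB ISLR of an unweighted sinc:

```python
import numpy as np

def pslr_islr(psf):
    """Rough PSLR/ISLR estimate from a 1-D point spread function cut."""
    p = np.abs(psf) ** 2
    k = int(np.argmax(p))
    lo = k
    while lo > 0 and p[lo - 1] < p[lo]:           # walk down to the left null
        lo -= 1
    hi = k
    while hi < len(p) - 1 and p[hi + 1] < p[hi]:  # walk down to the right null
        hi += 1
    main = p[lo:hi + 1].sum()                     # main-lobe energy
    side = p.sum() - main                         # everything else
    peak_side = max(p[:lo].max(initial=0.0), p[hi + 1:].max(initial=0.0))
    return 10 * np.log10(peak_side / p[k]), 10 * np.log10(side / main)

x = np.linspace(-15, 15, 3001)     # about 15 lobes on each side
print(pslr_islr(np.sinc(x)))       # roughly -13 dB PSLR, -10 dB ISLR
```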

Figure 12: Forming an image from SAS data is a complicated process. This diagram shows the main parts of the processing chain (motion estimation, motion compensation, beamforming, autofocus and interferometry, from sonar raw data to SAS image and bathymetric map). The figure is taken from [Hansen and Sæbø, 2003].

2.8 SAS signal processing overview

This subsection gives an overview of the different parts of the processing chain of image formation from SAS data. The chain is illustrated in figure 12. The subsection is intended to give the reader a better understanding of where the methods described in this thesis come into the processing chain. Motion estimation is always done, as a focused image requires knowledge of the positions of all the PCA elements. In order to use frequency domain beamformers, the data have to be motion compensated to lie on a straight line if they do not already do so. For time domain beamformers, this step is not necessary, which is a great advantage. The beamforming step is where the image is formed, and it is also where FFBP can be used. Autofocus can be applied if the image is not well focused after beamforming.

Interferometry is a processing technique for utilizing the extra information obtained from vertically separated receivers. The technique produces a bathymetric map of the seafloor, i.e. a height profile.

3 Beamforming

The process of constructing an image from synthetic aperture data consists of a two-dimensional matched filtering. First the received echo from each ping is compressed. Then the azimuth compressed synthetic aperture is formed by matched filtering the variation of the signal across the synthetic aperture. A beamformer performs this operation. Time domain beamforming is, as the name implies, beamforming done in the time domain. The other main group of beamformers performs all of its work in the frequency domain. Both have clear advantages and disadvantages: the time domain methods can handle an arbitrary system geometry but are slow, while the frequency domain methods are relatively fast since they can utilize the FFT, but require the sampling positions to lie uniformly on a straight line. Also, the frequency domain methods need a lot of memory to store and evaluate the 2-dimensional Fourier transforms, and the data must be zero-padded to avoid wrap-around effects from the Fourier transforms. There are different algorithms in both main groups, all developed to suit different types of systems, required quality, required speed and other criteria. One important issue is whether or not the receiving array is in the near or far field of the scene. The reflections from an omnidirectional target propagate as spherical shells. The further the aperture is located from the target, the larger the diameter of the shells will be when they arrive at the aperture, and the more the shells will look like planes. To find out if the wave can be modeled as plane (which highly simplifies the beamforming), the distance between the true position of the wave and the assumed position of the wave (assuming a plane wave), measured at the receiver, is calculated. We usually demand that this distance is kept below λ/8 ([Bruce, 1992]). This gives

\sqrt{R^2 + u^2} - R \approx \frac{u^2}{2R} \leq \frac{\lambda}{8} \;\Rightarrow\; R \geq \frac{4u^2}{\lambda}  (34)

where u is the position of the sensor element. If the range satisfies this criterion, we are in the far field of the scene, and the wave can be modeled as plane; otherwise one must take into account the spherical nature of the wave when beamforming. For SAS, we are always in the near field. For more general information on beamformers, see [Johnson and Dudgeon, 1993] or [Van Trees, 2002].
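To get a feel for eq. (34), a short numerical check with assumed example values shows why a long synthetic aperture is always in the near field of the scene:

```python
# Far-field limit R >= 4 u^2 / lambda, eq. (34); assumed example numbers.
c, f = 1500.0, 100e3        # sound speed [m/s], carrier frequency [Hz]
lam = c / f                 # wavelength: 1.5 cm
u = 10.0                    # element offset along a 20 m synthetic aperture [m]
R_far = 4 * u**2 / lam      # ~26.7 km
print(R_far)                # practical SAS ranges are far shorter: near field
```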

3.1 Time domain beamforming (TDB)

As the main algorithm of this thesis (FFBP) takes place in the time domain, we will look more closely at TDB. We can divide time domain methods into unfocused and focused algorithms. The unfocused ones are the simplest. For one ping, one finds the direction of arrival of the incoming wave, and from this the time differences from when the wave meets one receiving element to when it meets the next are calculated. The data are delayed by the appropriate amount (called the steering delay) and then summed. Consequently, this type of unfocused beamformer is called a delay-and-sum beamformer. It is of the form

y(t) = \sum_n s_n(t - \Delta t_{s_n})  (35)

where y(t) is the output of the beamformer and \Delta t_{s_n} is the steering delay for the n-th Rx element. It is very efficient because it is range-independent, but it is only valid if the scene to be imaged is in the far field of the aperture. This is never the case with a SAS system. When one is in the near field, in addition to the steering delay, a focus delay must also be applied to account for the spherical nature of the wave. The focus delay is range-dependent, so a focused beamformer will be slower than its unfocused counterpart. It can be written as

y(t) = \sum_n s_n(t - \Delta t_{s_n} - \Delta t_{f_n})  (36)

where \Delta t_{f_n} is the focus delay for the n-th Rx element. See [Johnson and Dudgeon, 1993] for more details.
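A minimal near-field delay-and-sum sketch in the spirit of eq. (36), with nearest-sample interpolation; the function name, geometry and calling convention are assumptions, and a real beamformer would also interpolate properly and apply a baseband phase correction:

```python
import numpy as np

def focused_das(data, fs, u, x0, y0, c=1500.0):
    """Focused delay-and-sum for one focal point (x0, y0), eq. (36).

    data[n, :] is the received trace of element n at along-track
    position u[n]; steering + focusing reduce to evaluating each trace
    at the two-way travel time from element n to the focal point.
    """
    out = 0.0 + 0.0j
    for n in range(data.shape[0]):
        R = np.hypot(u[n] - x0, y0)       # element-to-point range (near field)
        k = int(round(2.0 * R / c * fs))  # total delay in samples
        if k < data.shape[1]:
            out += data[n, k]             # nearest-sample "interpolation"
    return out
```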

Within the group of focused time domain beamforming algorithms, there are two important methods we shall look at next.

3.2 Backprojection (BP)

BP is an exact inversion technique. It works in both near and far field, which means that the range to the different contributions is important for finding the focusing delays. Calculating these is essentially what the algorithm is all about. We start with the fast-time matched-filtered signal

ss_M(t, u) = ss(t, u) * p^*(-t),  (37)

which has also been mixed to baseband. The algorithm has its name from the fact that for a given synthetic aperture location u, the fast-time data of ss_M(t, u) are traced back in the fast-time domain (backprojected) to isolate the echo return of the reflector at (x_n, y_n). This can be written as

gg(x_n, y_n) = \int_u ss_M\left[\frac{2R_n}{c},\, u\right] du = \int_u ss_M[t_n,\, u]\, du  (38)

where t_n = 2R_n/c. The equations are taken from [Soumekh, 1999]. In other words, we wish to coherently add the data at the fast-time bins that correspond to the location of that pixel, for all synthetic aperture locations u. To implement this method in practice, the available discrete fast-time samples of ss_M(t, u) must be interpolated to recover ss_M[t_n, u], and the integral takes the form of a sum. Often linear interpolation is used. However, it is possible to apply as advanced an interpolator as wanted, at the expense of increased computation time. Although this algorithm can handle an arbitrary array geometry and makes no approximations except for the interpolation, it has one major drawback. Consider an image with X × Y pixels and an array with N sensors. For each aperture and pixel position we need to compute the range between the sensor element and the pixel, interpolate in the received signal and finally add the value found to the image matrix. In total, the number of operations is proportional to

X \cdot Y \cdot N = N^3  (39)

if we assume that X = Y = N. This fact limits the use of the algorithm to very small aperture and image sizes. For small images, direct backprojection is quite efficient and often preferred due to its simplicity and robustness. The algorithm has another major advantage in that it conserves memory. Only one time series has to be stored in memory besides the image matrix. As soon as a data line has been backprojected to all image pixels, it is no longer needed. This makes the algorithm suitable for real time processing.
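For concreteness, here is a toy implementation of eq. (38) with linear interpolation, written for basebanded, matched-filtered data. The names, geometry and baseband phase handling are assumptions, and the loop structure deliberately makes the O(XYN) cost of eq. (39) explicit; it also processes one data line at a time, reflecting the memory argument above.

```python
import numpy as np

def backproject(ss_M, fs, u_pos, xs, ys, fc, c=1500.0):
    """Toy backprojection, eq. (38): O(X*Y*N) operations, eq. (39).

    ss_M[i, :]: basebanded, matched-filtered trace at aperture position
    u_pos[i]. Returns the complex image gg(x, y).
    """
    img = np.zeros((len(xs), len(ys)), dtype=complex)
    t = np.arange(ss_M.shape[1]) / fs
    k_wave = 2 * np.pi * fc / c                 # wavenumber at the carrier
    for i, ui in enumerate(u_pos):              # one data line at a time
        re, im = ss_M[i].real, ss_M[i].imag
        for ix, x in enumerate(xs):
            R = np.hypot(ui - x, ys)            # ranges to one column of pixels
            tn = 2.0 * R / c                    # two-way travel times
            # linear interpolation (out-of-range delays clamp in this toy)
            val = np.interp(tn, t, re) + 1j * np.interp(tn, t, im)
            img[ix] += val * np.exp(1j * 2 * k_wave * R)  # undo baseband phase
    return img
```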

3.3 Dynamic focusing (DF)

In the sonar community, dynamic focusing usually means focused polar beamforming. The data collected by an imaging system are in polar coordinates. In fact, one line of raw data can be considered as one polar image with a resolution of 360° (or the beamwidth of the sensor element). To obtain all the information necessary to make an image, one could sample less frequently in azimuth at larger range than at shorter range. As we saw in the last subsection, this is not taken into account in BP. In BP, the pixels define the sampling grid. In polar processing, or DF, one makes the image entirely in polar coordinates. This is a common method in ultrasound RAS imaging (see [Haun, 2004]). Using this method with SAS, however, introduces problems. At a given PCA position the data are on a polar grid. But as one moves the platform, the data are on another polar grid, and these grids are difficult to combine. A polar-to-cartesian transformation must be performed in order to coherently combine the data from ping to ping, and the computational cost of such a transformation is very high. The process is further complicated if one has an array of receivers. DF is never used for a bistatic system.

3.4 Fast time domain methods

With these facts in mind it is possible to construct in-between algorithms that take the best from both worlds (BP and DF). Methods that lie at this crossroads are called fast time domain methods. One such method is FFBP, which is the main topic of this thesis and is described in section 4. Other fast time domain methods also exist for various applications, and we will look at some of them in the next subsection. Most fast time domain methods have in common that they (at least in the beginning) use polar coordinates and in some way divide the backprojection problem into smaller subproblems that can be solved with less computation. The solutions to these subproblems are then combined in an appropriate way to retrieve the final image. It is essentially a smart rearrangement of the order of the calculations.

3.5 Applications across disciplines

To get an overview of the field of beamforming we will look at the system requirements and the imaging methods used for some of the most common applications of beamforming apart from sonar. The main differences are due to huge variations in the physical parameters involved. Much of the mathematics involved in the different fields is quite similar, but a lot of it has been obscured by different terminology and different names for essentially the same algorithm.

Radar

The radar area is the area in which antenna arrays were first used. Radars are usually mounted on an airplane or a satellite, but can also be ground- or ship-based. They are almost always in the far field of the scene, and so they only have to deal with plane waves. This simplifies the imaging process. Radars mounted on satellites have the easiest imaging process due to the fact that a satellite can follow a nearly linear path with constant velocity, which is harder for airplanes and especially for ships. However, airborne radars have to deal with reflections from the ground, something the ship- and ground-based radars can ignore. As with sonar, the three different modes of SAR have different algorithms especially tailored for each one. Frequency domain methods utilizing the FFT are common, as the receiver positions can be modelled to lie on a straight line. One must separate the cases in which we have a narrowband system from the cases in which we have a wideband system. If we are working with a narrowband system, we can perform a 2D FFT, a multiplication with the transfer function and a 2D IFFT (inverse FFT) to construct the image. However, in reality we can seldom assume a narrowband

system. In these cases a range cell migration compensation has to be applied to account for the non-linear nature of the range samples. This process is also called Stolt interpolation. The range-Doppler (RD) algorithm is the oldest and still the most popular frequency domain method in the area of SAR processing ([Franceschetti and Lanari, 1999]). Other popular methods are the wavenumber algorithm (also called the Ω-K algorithm) and the chirp scaling algorithm. They all perform some kind of range cell migration compensation, but are relatively fast due to the use of FFTs. Time domain methods are also used in this area, although on a much smaller scale. If the image scenes are small or if the trajectory of the platform deviates much from a straight line, there is no point in using FFT-based methods. If the imaged scene is small, the FFT-based methods are not much faster than the time domain methods. And if there is much motion error, the cost of applying motion compensation and autofocus algorithms in order to use FFTs is so large that the computational savings are lost. As radars are usually in the far field, delay-and-sum and BP are the relevant algorithms in the time domain. Some algorithms work solely in polar coordinates, e.g. the polar format (PF) algorithm and the convolution backprojection (CBP) algorithm. They are developed for radars working in spotlight mode, and are adopted from the world of computed tomography ([Franceschetti and Lanari, 1999, page 274]). They are based on the projection slice theorem and avoid the traditional nemesis of motion through resolution cells. The Fourier transform cannot be used for a signal sampled on a polar grid. The algorithms are still relatively fast, as the Hankel transform can be used instead. [Milman, 2000] gives a good explanation of the Hankel transform and its use. [Ulander et al., 2003] demonstrated FFBP on an airborne ultra-wideband, wide beamwidth VHF SAR system named CARABAS. However, other fast time domain methods have been proposed in recent years. A quadtree backprojection algorithm was proposed by [Oh et al., 1999]. [McCorkle and Rofheart, 1996] introduced a similar algorithm as early as 1996 with the same computational cost as FFBP, by factorizing both the image scene and the aperture with a factorization factor of 2. Another fast time domain algorithm uses only two stages (called local backprojection). These algorithms are in fact special cases of FFBP. [Olofsson, 2000] discusses a method highly related to FFBP: fast polar backprojection. As these fast time domain methods work in the time domain, they can handle an arbitrary platform trajectory without the same computational burden as standard time domain methods. Computational cost is traded for image quality, but the error can be controlled by setting certain parameters. It looks as though these methods will become more and more common for SAR in the near future.

Seismology

The objective of imaging seismology is to construct an image of how the earth looks beneath the surface by the use of two types of waves: pressure and shear waves. One is interested in the structure and physical properties of the media. In the same manner as the sea, the earth is an inhomogeneous medium for the acoustic wave to travel in, and this must be taken into account in the beamforming process. Various approximations are applied to the composition of the earth. Often the earth is modelled as consisting of a stack of layers, where the physical properties are equal within each layer. However, the estimation of the velocity distribution and the various attenuation factors at different positions in the media is a big part of the image formation process in seismic imaging. The whole process of finding the velocity distribution and the attenuation factors and forming the image is called migration. In the same manner as water, the earth attenuates and spreads different amounts of the signal depending upon both the physical properties of the medium and the frequency of the transmitted signal. An important quality measure for seismic beamformers is how they handle dips. According to [Rocca et al., 1989] a dip is another term for a Doppler frequency shift. Seismic images can be displayed in either time or depth.

Kirchhoff migration (KM) is a popular collection of time domain algorithms that can be somewhat compared to BP in SAR and SAS, and works both in the near and far field with an arbitrary array geometry. These algorithms sum received signals along a diffraction hyperbola whose curvature is governed by the medium velocity. Amplitude and phase corrections are added before summation to account for spherical spreading. KM can be cumbersome in handling lateral velocity variations, i.e. it is only accurate in media with purely vertical velocity variations. KM is computationally expensive, can generate strong far-field migration artifacts, and has problems when it comes to migrating complex subsurface structures.

Finite-difference algorithms are another class of imaging methods, where some work in the time domain and others in the frequency domain. They can handle any type of velocity variation, but they have different degrees of dip approximation. Furthermore, differencing schemes, if carelessly designed, can severely degrade the intended dip approximation. The algorithms are quite slow, but can overcome some of the limitations of the KM algorithms, so they are preferred when complex subsurface structures are to be imaged.

Fourier-based methods using Stolt interpolation are also in use. They are fast, but they are based on an assumption of constant propagation velocity, and thus introduce considerable approximations to the model. One such method is the phase-shift method. Some of the frequency domain methods are modified to also handle variable velocities by introducing the concept of stretching in time. Here, time sections are converted to approximately constant-velocity sections, and then migrated by the constant-velocity Stolt method. This conversion is essentially stretching in the vertical (time) direction. Once the section is migrated in the stretched domain, it is converted back to the original time domain.
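The diffraction-hyperbola summation at the heart of Kirchhoff migration can be sketched in a few lines; this is a minimal constant-velocity version with the amplitude and phase corrections omitted, and all parameter values are invented for illustration:

    import numpy as np

    def kirchhoff_point(traces, x_rcv, dt, x0, z0, c):
        # Sum each trace at the two-way travel time given by the diffraction
        # hyperbola for the subsurface point (x0, z0); spreading and
        # obliquity corrections are omitted for brevity.
        out = 0.0
        for trace, x in zip(traces, x_rcv):
            t = 2.0 * np.hypot(x - x0, z0) / c      # two-way time, constant velocity
            idx = int(round(t / dt))
            if idx < len(trace):
                out += trace[idx]
        return out

    rng = np.random.default_rng(0)
    traces = rng.standard_normal((8, 1000))         # 8 toy traces, 1 ms sampling
    x_rcv = np.linspace(-100.0, 100.0, 8)
    value = kirchhoff_point(traces, x_rcv, 1e-3, 0.0, 500.0, 2000.0)

A full migration evaluates this for every image point, which is what makes KM computationally expensive.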

Because none of the groups of algorithms can handle all possible events, many algorithms that combine the different groups have been developed. The information on the different algorithms is taken from [Yilmaz, 2001].

Computed tomography (CT)

Most of the information in this subsection is taken from [Basu and Bresler, 2001]. Computed tomography (also known as CAT scanning, computerized axial tomography) is the cross-sectional imaging of objects from transmitted or reflected data. Pulses are sent through the object of interest from different angles and data are collected at a receiving array placed on the other side of the object. The sending of a pulse in a given direction is called a projection. The different projections are combined to give a cross-sectional image of the object. Tomography is widely used in the area of medical diagnostics.

The type of algorithm used in the reconstruction process depends almost exclusively on how many directions we have projection data from. If data are available from all possible directions, and the measurement noise is negligible, the filtered backprojection (FBP) method is chosen. This is a time domain technique much like BP in SAR and SAS. The difference is the filtering operation before the backprojection. The backprojection is the bottleneck of the method, as the filtering can be done using FFTs. Fourier reconstruction algorithms (FRAs) are also widely used in CT. They are based on the Fourier slice-projection theorem and involve FFTs of the projections, a transform from polar to Cartesian coordinates, and a 2D IFFT to recover the image. Unfortunately, the interpolation step generally requires a large number of computations to avoid the introduction of artifacts in the image. Although this method is fast in theory, experiments show that the gain of using this method over FBP is less than predicted for reasonable image sizes.

[Basu and Bresler, 2000] proposed a fast recursive method much like FFBP in 2000. They called it fast hierarchical backprojection (FHBP). It also factorizes the image into smaller and smaller images, at the same time as the number of view angles is reduced. In this algorithm (which operates in polar coordinates) radial and angular oversampling can be controlled to trade off complexity for accuracy of reconstruction. The complexity is O(N² log N) and, if parameters are chosen wisely, the distortions in the image are negligible while the speedup is considerable. [Basu and Bresler, 2001] describes how to choose the interpolator length, the oversampling factor and the number of stages, amongst other parameters, to get as small an error as possible in the reconstruction. [Boag et al., 2000] deals with the inverse operation. Like in the areas of SAS and SAR, these types of methods are predicted a bright future.
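A minimal sketch of the FBP idea described above (ramp-filter each projection with the FFT, then backproject), assuming parallel-beam geometry and nearest neighbour interpolation; a real implementation would window the filter and interpolate more carefully:

    import numpy as np

    def fbp(sinogram, angles_deg):
        # Ramp-filter each projection in the Fourier domain, then smear the
        # filtered projections back over the image grid.
        n_det = sinogram.shape[1]
        ramp = np.abs(np.fft.fftfreq(n_det))                      # ideal ramp filter
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
        image = np.zeros((n_det, n_det))
        centre = n_det // 2
        ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
        for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
            s = xs * np.cos(ang) + ys * np.sin(ang) + centre      # detector coordinate
            s = np.clip(np.rint(s).astype(int), 0, n_det - 1)     # nearest neighbour
            image += proj[s]                                      # backprojection
        return image

    sinogram = np.random.rand(180, 64)                            # toy data: 180 views, 64 detectors
    image = fbp(sinogram, np.arange(180))

The backprojection loop over view angles is the O(N³) bottleneck mentioned above; FHBP attacks exactly this loop.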

Other fast methods have also been proposed for CT, such as the multilevel inversion (MI), the linogram method, and the links method ([Basu and Bresler, 2001]).

Medical ultrasound

Ultrasound imaging is one of the most popular imaging methods in medicine. The equipment is simpler and more portable than for, for instance, X-ray, CT or magnetic resonance (MR) imaging (it consists of a handheld transducer coupled to a processor and a monitor), and it has few side effects. The transducer (consisting of an array of elements) both transmits and receives the signals, and the objective is to combine the different received signals into an image. The transducer can have a variety of geometries, from linear to curved in two dimensions. Tissue is a highly inhomogeneous medium, and different body parts reflect different amounts. Bones and air give strong reflections, and it can be difficult to obtain meaningful data on objects behind these media. Contrast fluids are often used to enhance certain parts of the body. This information was taken from [Holm]. As with sonar, we are in the near field of the scene, and the spherical nature of the wave must be accounted for. Time domain methods are almost exclusively used in this area, and the algorithm of choice is dynamic receive focusing, where the focusing delays are changed continuously to obtain new focal points. See e.g. [Haun, 2004] for details.
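A minimal sketch of dynamic receive focusing: one delay-and-sum focal point, evaluated for a sweep of focal depths along the scan line. The geometry (on-axis plane-wave transmit) and all numbers are illustrative only:

    import numpy as np

    def dynamic_focus(channels, x_elem, dt, x_f, z_f, c):
        # Delay-and-sum one focal point: on-axis transmit path (z_f) plus
        # the per-element receive path; apodization omitted.
        out = 0.0
        for ch, x in zip(channels, x_elem):
            tau = (z_f + np.hypot(x - x_f, z_f)) / c
            idx = int(round(tau / dt))
            if idx < len(ch):
                out += ch[idx]
        return out

    rng = np.random.default_rng(1)
    channels = rng.standard_normal((32, 4096))          # 32 elements, toy channel data
    x_elem = (np.arange(32) - 15.5) * 0.3e-3            # 0.3 mm element pitch
    scan_line = [dynamic_focus(channels, x_elem, 1.0 / 50e6, 0.0, z, 1540.0)
                 for z in np.linspace(5e-3, 40e-3, 64)]

Sweeping the focal depth sample by sample is what makes the receive focusing "dynamic".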

4 Fast Factorized Back-Projection (FFBP)

This section will describe the main method of this thesis, FFBP. As we have seen, it is seldom correct to assume that the platform carrying the sonar moves in a straight line. This limits the use of frequency domain reconstruction methods, as a lot of pre- and postprocessing must be applied to the data to correct for platform motion. There is clearly a need for time domain methods in this field; however, the standard time domain methods are slow and not very often used. FFBP is a fast time domain method, and can thus handle an arbitrary array geometry. It can also give computation times as low as those obtained using frequency domain methods. Thus it is a method that takes the best from both time and frequency domain methods: it is fast, at the same time as it can account for an arbitrary array geometry.

The algorithm is fast because it introduces approximations. However, these approximations can be varied. If set to zero, no approximations are made, and the algorithm is equivalent to BP. Larger approximations lower the computation time, but also introduce more errors in the final image. The approximations consist of representing points in a small region of the scene by only one time series. A factorization of the image scene, as well as a factorization of the synthetic aperture, is applied to control the size of the areas that can be represented this way.

The method was originally developed for SAR, and has not been used to any extent in the SAS community yet, but it shows promising results for this field too. [Banks, 2002, chapter 6] has applied the method to SAS with good results. [Frölind et al., 2004] also describe the method for SAS, as well as compare it to fast polar backprojection (FPBP), a method strongly related to FFBP. [Aydiner et al., 2003] have developed a method to perform a sparse data fast Fourier transform (SDFFT), which is somewhat similar to FFBP.

4.1 Principle

In the standard backprojection algorithm one backprojects all the raw data to every pixel in the final image, and the complexity was given in (44). In FFBP the final image is reconstructed through a number of stages. Polar images with increasingly higher resolution are formed in each stage. They are formed using only a subsection of the synthetic aperture, and the size of the subapertures used to make a polar image is increased in each stage. At first, each original receiver is one subaperture, but as we go through the stages, subsets of the aperture positions are combined to form subapertures. The aperture positions in a subaperture are always adjacent ones, i.e. a subaperture cannot be formed of e.g. elements number 3 and 6 from the original receivers, but one could form a subaperture of e.g. elements 1 to 6. One can look at a raw data line (time series) as a polar image with an angular resolution of 360° (or the transmitter/receiver beamwidth Ω_θ), as this data contains correct information about the range to the targets, but

not of the targets' angular location. The origin of a polar image is in the middle of the subaperture. This means that if two receivers are beamformed in a certain direction, the two resulting data lines can be regarded as a polar image with origin in the middle of the two receiver positions. This new polar image will have the same range resolution as the two original polar images, but it will have a higher angular resolution along the direction the receivers were beamformed to. The more receivers that are beamformed in a given direction, the better the angular resolution of the obtained polar image. Hence, as the algorithm goes through the stages, more and more receivers are used to make a polar image, and the polar images thus have better and better angular resolution.

Figure 13: Two receivers are combined to a subaperture by beamforming them to the center line. It is also visible here that the two receivers have almost the same circular pattern within the beam. The figure is taken from [Ulander et al., 2003].

The algorithm exploits the fact that within a given angular sector, adjacent aperture positions have essentially the same circular pattern in the collected data. Figure 13 shows a polar image made up of a subaperture of two original receivers. They have been matched along the beam center line (the line that starts in the middle of the two receivers and goes through the middle of the angular sector), which corresponds to beamforming in this direction. Thus the values obtained on the center line are correct, while the errors grow the further out towards the edge the points lie, as illustrated in figure 14. Two elements are combined to form a subaperture and focused along the subimage center line. This line is the dashed line in the figure. All points along this line will have the correct value, whereas points elsewhere will have approximate values. The errors are largest for points that lie furthest away from the center line. In this polar image the point P_T will be represented as if it lay at P_P, along the centre line and at an angle b from P_T. [Ulander et al., 2003] states that the maximum range error between P_P and

P_T is

\Delta r = r(a + b) - r(a) = \sqrt{r^2 + t^2 - 2rt\cos(a + b)} - \sqrt{r^2 + t^2 - 2rt\cos(a)}    (40)

where 2t is the size of the subaperture. This error is an important parameter in FFBP, and we will look more closely at it in subsection 4.3.

Figure 14: By using the assumed position of a point instead of the actual position, an error is introduced. The error in range between P_P and P_T is given in (40). The figure is taken from [Banks, 2002]. Some of the symbols might not be consistent with the text.

The range line matched to the center line (i.e. the two original data lines, delayed and combined into one) can be used to represent the whole angular sector with little error if the beam is narrow enough, corresponding to using nearest neighbour interpolation in angle. A beam gets narrower when more and more receivers are used in the beamforming; thus one can also say that the range line matched to the center line can be used to represent the whole angular sector if enough data lines were used in the beamforming. Thus, as the algorithm goes through the stages and the subaperture sizes are increased (i.e. they contain more and more original receivers), the beams become increasingly narrow, and the errors at the edge of the angular sector become increasingly small.

There are also other ways to think of this process that may ease the understanding. The increasingly narrower beams can be thought of as a far-field constraint. If the angular sector is narrow enough, the arcs depicted in figure 13 will appear to be plane, and the errors introduced by approximating the whole arc by the value at the center line will be small. One can also say that as we go through the stages and the sizes of the subapertures increase, the Doppler bandwidth of the data increases; thus the beams can be made narrower when beamforming along the center line, and the errors at the edges of the sector are smaller.
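Equation (40) is simple to evaluate numerically; a small helper of this kind is useful when experimenting with subaperture and sector sizes (the parameter values below are illustrative):

    import numpy as np

    def range_error(r, t, a, b):
        # Equation (40): difference between the range to the true point
        # position (angle a + b) and to its assumed position on the centre
        # line (angle a), for a subaperture of size 2t.
        rng = lambda angle: np.sqrt(r**2 + t**2 - 2.0 * r * t * np.cos(angle))
        return rng(a + b) - rng(a)

    # e.g. 40 m range, a 0.5 m subaperture, a point 2 degrees off the centre line
    err = range_error(40.0, 0.25, np.deg2rad(90.0), np.deg2rad(2.0))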

Figure 15: An example of the process of forming polar images. Both the image scene and the aperture are divided before beams are formed to the centres of the new subimages. The image is taken from [Banks, 2002]. Some of the symbols might not be consistent with the text.

A summation of beams using range interpolation is performed in each stage. The resolution in range is the same in every polar image, independent of stage, but the resolution in angle gets better and better in every stage as the beams are made narrower. Narrower beams correspond to increasingly narrower main lobes in the beam patterns. The approximations used in FFBP, and hence the noise in the image, are related to the angular resolution with which the images in the intermediate stages are formed. To keep the number of data samples constant, an increase in the size of a subaperture must imply beamforming to a narrower angular region and vice versa, because the Doppler bandwidth of the data increases for increasing subaperture lengths. This fundamental principle combined with the approximation error is what enables computationally efficient backprojection algorithms to be constructed.

4.2 Details

When new polar images are formed in each stage from the polar images of the previous stage, the image scene is factorized into smaller and smaller patches. This is in order to make the angular sector to beamform to decrease in size. The number of polar images increases with each stage, the rate depending on the choice of factorization factor. An example of the forming of an intermediate polar image can be seen in figure 15. In this example, one subimage from the previous stage is divided into 9 new subimages, at the same time as the subaperture size from the previous stage is doubled. Each of these 9 polar subimages will have

a higher angular resolution than the one from the previous stage, but they are also smaller in size. For each subaperture, we beamform along the center line of the polar image. Because the number of polar images increases in each stage, we must change the dimensions of the raw data matrix in each stage. Instead of having a raw data line for each ping and each receiver throughout the algorithm, we now have one raw data line for each ping, each subaperture in the current stage and each subimage in the current stage. In this example, one polar subimage was divided into 9 polar subimages, which implies an image factorization factor of 3 (the factorization factor is applied to each side of the scene, thus an image factorization factor of 3 will divide each polar image of the previous stage into 3² = 9 new polar images). The factorization factor for the aperture in this example was 2. The factorization factors can be chosen at will, and need not be the same in every stage, but the choice will influence the quality of the final image and the speed of the algorithm. We will look more closely at this in subsection 5.2. The image is assumed to be square to make the implementation simpler. Applying an image factorization in each stage corresponds to placing an increasingly finer grid over the original imaging scene. To beamform along the center line of a given subimage, the data are processed according to

s_i(t, u_i) = \sum_{n=1}^{Q_R(i)} s_{i-1}(t - \Delta t, \, u_{i-1} + n\Delta u)    (41)

where s_{i-1} are the data from the previous stage, s_i are the data in the current stage, u_{i-1} are the subaperture center positions from the previous stage for the subapertures of interest, u_i are the subaperture center positions in the current stage, Q_R(i) is the aperture factorization factor for the current stage and \Delta u is the distance between the subaperture center positions from the previous stage. If we call the travel times from the subaperture center positions of the previous stage to the center points of the subimages in the current stage t_{i-1}, and the travel times from the subaperture center positions in the current stage to the center points of the subimages in the current stage t_i, then

\Delta t = t_{i-1} - t_i.    (42)

From these equations we see that an essential part of the implementation is being able to determine the parent-child relationship of the subimages. These relationships are illustrated in figure 16 for an image factorization factor of 2 (i.e. each subimage produces 4 new subimages). The receiver positions are processed according to

u_i = \frac{1}{Q_R(i)} \sum_{n=0}^{Q_R(i)-1} (u_{i-1} + n\Delta u)    (43)

in each stage. The choice of interpolation method will affect the quality of the image and the speed of the algorithm. Linear interpolation in range and nearest neighbour interpolation in angle is used in this implementation, but other interpolation methods would possibly give better images at a higher computational cost.
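In code, one merge step of equations (41)-(43) is essentially a delay-and-sum of the child data lines towards the centre of a subimage. The sketch below collapses the geometry to a single subimage centre and uses a crude circular shift instead of proper range interpolation; shapes and values are invented:

    import numpy as np

    def merge_subapertures(children, u_children, img_centre, dt, c):
        # Equations (41)-(43): delay each of the Q_R(i) child data lines by
        # the travel time difference to the subimage centre, then sum them
        # into one parent data line at the mean child position.
        u_parent = u_children.mean(axis=0)                         # equation (43)
        t_parent = np.linalg.norm(img_centre - u_parent) / c
        parent = np.zeros_like(children[0])
        for child, u in zip(children, u_children):
            dt_i = np.linalg.norm(img_centre - u) / c - t_parent   # equation (42)
            parent += np.roll(child, int(round(dt_i / dt)))        # crude delay; interpolate in practice
        return u_parent, parent

    children = np.random.randn(2, 4096)                            # Q_R(i) = 2 child lines
    u_children = np.array([[0.0, 0.0], [0.1, 0.0]])                # child centre positions [m]
    u_p, line = merge_subapertures(children, u_children,
                                   np.array([50.0, 10.0]), 1e-5, 1500.0)

In a real implementation the nearest-sample shift would be replaced by the linear range interpolation mentioned above.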

Figure 16: The quadtree structure through which the subimages are related to each other. The figure is taken from [McCorkle and Rofheart, 1996].

It is not necessary to make an image in Cartesian coordinates at any stage but the final one; all of the information is kept in the polar images, which need not be displayed. The number of stages to apply depends on several parameters in the algorithm, and it is discussed further in subsection 5.2. One could in theory run the algorithm all the way through, i.e. until each pixel is one subimage. However, there is a cross-over in the number of stages beyond which there is nothing more to gain in speed. To go on from there will make the algorithm slower (see (44) below). In practice, FFBP is interrupted at a given stage, and standard backprojection is run on each of the subimages with a reduced number of receivers. The reduced number of data lines is treated as if it were the original raw data, and this will work fine if the range error is small enough. Error analysis is the topic of the next subsection.

The steps in the algorithm can be summarized as follows (a code sketch of this loop is given below):

- Determine the factorization factors.
- Divide the image scene into Q_I(i)² subimages. Q_I(i) is the factorization factor for the image in stage i.
- Determine how many range samples are needed to cover the given subimage.
- Beamform to the centers of the resulting subimages.
- Interpolate and combine Q_R(i) beams together. Q_R(i) is the factorization factor for the receivers in stage i.
- Repeat these steps N_s - 1 times. N_s is the number of stages in the algorithm.
- Backproject to all image pixels with a reduced number of receivers in the last step.

Figure 17: An example of 3 stages of FFBP. In each stage both the subaperture and the subimages are divided before beams are formed from the new subaperture positions to the new subimages. The figure is taken from [Banks, 2002].

A summary of the method for an image factorization factor of 4 and an aperture factorization factor of 2 can be seen in figure 17. The method in this illustration starts without dividing the image scene in the first stage, however. The last stage is where the computation time is saved. The savings are largest when N, X and Y are large. The order of computation in each of the intermediate stages is the number of subapertures × the number of subimages × the number of range samples needed for the subimages. The order of computation in the last stage is the number of pixels × the number of subapertures in the last stage.
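The stage loop summarized in the list above can be sketched as follows. This toy version performs only the aperture factorization (pairwise merging of adjacent data lines towards one scene centre); a real implementation would also split the scene into Q_I(i)² polar subimages per stage and keep one data line per subaperture/subimage pair:

    import numpy as np

    def ffbp_skeleton(lines, positions, n_stages, focus, dt=1e-5, c=1500.0, q_r=2):
        # At every stage, merge q_r adjacent subapertures into one by
        # delay-and-sum towards the (single) focus point; after n_stages - 1
        # merge passes the reduced set of data lines would be handed to a
        # plain backprojection over all pixels.
        for _ in range(n_stages - 1):
            new_lines, new_pos = [], []
            for k in range(0, len(lines) - q_r + 1, q_r):
                group, pos = lines[k:k + q_r], positions[k:k + q_r]
                centre = pos.mean(axis=0)
                t0 = np.linalg.norm(focus - centre) / c
                acc = np.zeros_like(group[0])
                for line, p in zip(group, pos):
                    shift = int(round((np.linalg.norm(focus - p) / c - t0) / dt))
                    acc += np.roll(line, shift)            # crude range alignment
                new_lines.append(acc)
                new_pos.append(centre)
            lines, positions = new_lines, np.array(new_pos)
        return lines, positions                             # input to the final BP stage

    lines = [np.random.randn(4096) for _ in range(64)]      # toy data lines
    positions = np.c_[np.zeros(64), np.linspace(-2.64, 2.64, 64)]
    out_lines, out_pos = ffbp_skeleton(lines, positions, 3, np.array([50.0, 0.0]))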

Let us look at an example to illustrate this. We set the number of stages to 3, the number of pings to 10 and the number of receivers to 64. The factorization factor for the image is set to 2, the same in all stages, and the factorization factor for the receivers is set to 2, also the same in all stages. In each stage the number of range samples is reduced. The number of range samples needed to cover the area of a subimage is not the same for all the subimages; it is range dependent. To simplify the implementation, the largest number of range samples needed is chosen for all the polar images in a given stage. It is possible to start the first stage using the whole imaging scene as the subimage, and beamforming to the center line, but it is also possible to start with a divided scene, which is the approach used in this thesis. By starting with only one subimage, the number of range samples needed in the first stage would be the original number of range samples, but by starting with a divided scene, the number of range samples needed in the first stage is reduced.

In this example we start the first stage by beamforming from the original number of receivers to 4 subimages. The beams are also combined in this stage, to be ready for use in the next stage. With 10 pings and 64 receivers there are 640 original data lines, and the number of range samples needed was found to be 3757, so the computational load for the first stage is 640 · 4 · 3757. In the next stage, 2 original receivers have been combined to form each new receiver, so the number of data lines is now 320, and the number of subimages is 4 · 4 = 16. The number of range samples needed has been reduced further, so the computational load for the second stage is 320 · 16 times this reduced number of range samples. In the last stage, standard backprojection is performed with 160 receivers, backprojecting to all pixels, so the computational load in the last stage is the number of pixels times 160. For comparison, the computational load for BP with the same parameters would be the number of pixels times all 640 original data lines. From this we can see that FFBP represents a considerable improvement in speed.

These calculations are, however, simplifications of the real computational loads for the separate stages. Additional computation is needed to keep track of the parent-child relationships, for determining the number of range samples needed in each stage, for the calculations of the new subimages and subapertures, etc. Since the computational burden these factors represent is highly dependent on the

implementation, this thesis will not try to derive a precise expression for it. Instead all the extra computation is combined into one variable for each stage, γ_i. At some point, this factor is so big that it is best not to run any more stages, but instead backproject to all the pixels from the remaining receivers. The growth of γ_i through the stages depends on the choice of factorization factors, and it can lead to a very slow algorithm for some choices of factorization factors. The total computational load for a general data set can be written as

\sum_{i=0}^{N_s-2} \left( \frac{N \prod_{j=0}^{i} Q_I(j)^2 \, N_r(i)}{\prod_{j=0}^{i} Q_R(j)} + \gamma_i \right) + \frac{N \, XY}{\prod_{j=0}^{N_s-1} Q_R(j)}    (44)

where N is the number of original receivers, X is the number of pixels in the x-direction, Y is the number of pixels in the y-direction, N_s is the number of stages, Q_R(j) is the factorization factor for the aperture at stage j, Q_I(j) is the factorization factor for the image at stage j and N_r(i) is the number of range samples needed to cover a given subimage in stage i. The γ_i factors are very much dependent on the implementation, and must be determined empirically. We see here that at some point the expression for the intermediate stages will take up so much of the computation time that the savings are no longer increasing, or are completely lost (we saw in the last example that the computational load for the intermediate stages increases with each stage). When the savings are completely lost, FFBP will take longer than or the same amount of time as BP. However, it is nearly impossible to give a general expression for where these points lie, both because of the implementation-dependent γ_i and because of the many parameter choices involved. [Banks, 2002, chapter 6] has made a program to find the optimum number of stages given the image and aperture geometry. [Ulander et al., 2003] states that a suitable number of stages for SAR is around 3, but [Banks, 2002, chapter 6] states that it can be up to around 6 for SAS. In the next section we will look at some simulations to see how the computation time comes out in practice, and we will see that it is possible to set up some guidelines as to how the parameters should be chosen to get good performance from FFBP.

The derivation of the computational load in this section has assumed a collocated transmitter/receiver configuration. When using a bistatic transmitter/receiver configuration with FFBP, the displacement needs only be accounted for in the first stage, hence the difference in computation time between the two configurations is smaller than when standard backprojection is used ([Banks, 2002, chapter 6]). [Ulander et al., 2003] has derived that the theoretical order of computation is O(Q N² log_Q N) if N = X = Y and if the same factorization factors Q_R = Q_I (called Q) are used throughout the stages. The range and sample spacings are also assumed to be equal. This is of the same order as that obtained by FFT-based algorithms.
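Equation (44) translates directly into a small cost estimator. The per-stage range-sample counts N_r(i) and the overheads γ_i must be supplied, since both are geometry and implementation dependent; the call below reproduces the structure of the worked example (pixel count and second-stage range-sample count are placeholders):

    def ffbp_cost(N, X, Y, Q_I, Q_R, N_r, gamma):
        # Equation (44), following the indexing of the worked example above:
        # stage i beamforms with the not-yet-merged subapertures, then merges
        # by Q_R[i]; the final stage is plain backprojection over X*Y pixels.
        cost, subapertures, subimages = 0.0, N, 1
        for q_i, q_r, nr, g in zip(Q_I, Q_R, N_r, gamma):
            subimages *= q_i ** 2                  # image factorization
            cost += subapertures * subimages * nr + g
            subapertures //= q_r                   # aperture factorization
        return cost + X * Y * subapertures         # final BP stage

    # the example above: 640 data lines, two intermediate stages, factors of 2
    print(ffbp_cost(N=640, X=512, Y=512, Q_I=[2, 2], Q_R=[2, 2],
                    N_r=[3757, 1879], gamma=[0, 0]))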

[Banks, 2002, chapter 6] has found that the savings in computational load using FFBP are independent of the frequency of the transmitted pulse. This is because a higher center frequency means that smaller polar images must be formed, which again increases the computation time; however, a higher frequency also requires a denser azimuth sampling. As FFBP is phase preserving, the method is suitable for interferometric processing.

A more thorough derivation of the algorithm can be found in [Ulander et al., 2003]. See also [Hunter et al., 2003], [Banks, 2002, chapter 6], [Frölind et al., 2004] and [Olofsson, 2000].

4.3 Error analysis

The performance of FFBP is controlled by the approximation error (or range error) from equation (40). [Ulander et al., 2003] gives the maximum error in a given stage when using nearest neighbour interpolation in angle as

\Delta R_{max} = \begin{cases} \frac{L_A L_I}{4 R_{min}}, & L_A \le R_{min} \\ \frac{L_I}{2}, & L_A > R_{min} \end{cases}    (45)

where L_A is the length of the subaperture, L_I is the length of the subimage in the x-direction and R_min is the minimum range to the image. From this equation we can see that to keep the range error constant throughout the stages, we can balance an increase in L_A by a decrease in L_I, which is exactly what happens when the aperture and image factorization factors are equal. When the subaperture becomes longer than R_min there is no longer a need to decrease the subimage size. The resolution then increases very slowly, due to the fact that the Doppler cone angle changes very slowly. If the factorization factors are not equal, the errors from the individual stages will be successively larger or smaller, depending on which factorization factor is bigger.

This range error applies to only one stage. To find the total error of the algorithm, the range errors from all the stages must be summed. By varying the total error, image quality can be traded for speed and vice versa. A small total error gives a better image quality, but needs more computation. A large total error gives more noise, but needs less computation.

It is useful to talk about the total error as a fraction of λ. Both [Ulander et al., 2003] and [Banks, 2002, chapter 6] state that if the total error is greater than λ/4 the image is destroyed. This is because the total path difference between the centre and the edge of the polar image then exceeds λ/2, and the sonar data are no longer coherently combined. This constraint can also be derived from the fact that the phase error has to be less than or equal to π. The phase error can be obtained by multiplying the range error by 2k, and 2k·ΔR ≤ π gives ΔR ≤ λ/4. In some cases, when the minimum range to the scene is not

much larger than the length of a subaperture, the maximum phase error has to be smaller than this. [Ulander et al., 2003] states that the maximum phase error must be lowered to π/8 in these cases, and the total approximation error the algorithm can have and still produce an image of acceptable quality is lowered correspondingly. A list of ways to decrease the approximation error follows:

- Start the image scene as far away from the aperture as possible.
- Use high factorization factors for the image.
- Use low factorization factors for the receivers.
- Use short subapertures.
- Use small image scenes.

Examples of how the total approximation error affects the speed and image quality can be seen in the next section. [Frölind et al., 2004] have investigated how interpolation methods in angle other than nearest neighbour affect the azimuth beampattern. They found that the sidelobe level was reduced considerably by using better interpolation methods. This does increase the computation time, however. [Banks, 2002, chapter 6] has found that, with good planning of the allocation and freeing of memory, the memory requirements are around that of the final image plus object code. According to [Ulander et al., 2003], using FFBP leads to a shift-variant point spread function; it depends on the exact spatial locations of the subaperture beams used to form the given pixel value.
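Equation (45) and the λ/4 budget are easy to turn into a per-design sanity check; the geometry values in the example are invented:

    def stage_error(L_A, L_I, R_min):
        # Equation (45): maximum range error of one stage with
        # nearest neighbour interpolation in angle.
        return L_A * L_I / (4.0 * R_min) if L_A <= R_min else L_I / 2.0

    def within_budget(stages, wavelength):
        # Sum the per-stage errors and test against the lambda/4 limit.
        total = sum(stage_error(L_A, L_I, R_min) for (L_A, L_I, R_min) in stages)
        return total, total <= wavelength / 4.0

    # three stages at 40 m minimum range; subaperture doubles, subimage halves
    stages = [(0.08, 0.16, 40.0), (0.16, 0.08, 40.0), (0.32, 0.04, 40.0)]
    err, ok = within_budget(stages, wavelength=1.5e-3)   # 1 MHz at 1500 m/s

Note how equal factorization factors keep L_A · L_I, and hence the per-stage error, constant.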

5 FFBP performance

In this section an analysis of the performance of FFBP is made. The speed of the algorithm and the quality of the images are measured for different approximation errors and numbers of stages used in the algorithm.

5.1 Simulation setup

The images in this section have been made using simulated data of a point reflector, with 10 pings of a physical array of 64 receivers in the reconstruction. The total size of the synthetic aperture is 5.28 m. A center frequency of 1 MHz was used.

5.2 Results

5.2.1 Processing load details

As the speed of the algorithm depends upon several parameters, it is difficult to determine which set of parameters gives the fastest reconstruction. There are, however, some general trends. The number of receivers to use in the backprojection in the last stage decreases as the number of stages increases. Therefore, it sounds like a good idea to use as many stages as possible. This is only true up to a certain point. One must keep in mind that the intermediate stages also require their share of computation, and that each successive intermediate stage requires more computation than the previous one. So, at one given stage, the computation needed for the intermediate stages outweighs the savings in the last stage, and there is no longer anything to save by using more stages. When run too far, FFBP can even use more time than BP. A plot of the computation times versus number of stages for some simulations can be seen in figure 18, where 1 stage corresponds to BP. Here, the factorization factors were all equal to 2. We can clearly see the cross-over point at stage 5. At stage 6, the computation time increases. This would indicate that an optimal number of stages for this dataset is 5.

However, the analysis is not that simple. The approximation error is additive through the stages. This means that the approximation error increases with each stage, and the quality of the image decreases stage by stage. If a certain quality requirement is given, it may not be possible to run the algorithm all the way to the cross-over point. The development of the approximation error through the 7 stages

of figure 18 can be seen in figure 19. We see that the approximation error increases linearly with the number of stages for these parameters. As the factorization factors need not be the same in every stage, the approximation error will not always evolve linearly. It is possible to change the rate with which the approximation error increases.

Figure 18: A comparison of the computation time for various numbers of stages. The factorization factors for both the image and the receivers were 2.

To keep the approximation error constant (or increasing less rapidly than with equal factorization factors), one can apply different factorization factors. Not only can there be a different factorization factor for the image and the receivers, but there can also be different factorization factors in each stage. To get the algorithm to run really fast, one could use few stages but high receiver factorization factors. This, in turn, would lead to a very high approximation error. It is the approximation error and the choice of factorization factors that control the speed of the algorithm. Generally, a small approximation error gives a long computation time (as the algorithm approaches BP) and vice versa.

The same total approximation error can be obtained in different ways. This in turn affects the speed of the algorithm and the quality of the final image. As an example, assume that we can obtain the same approximation error in 2 stages as in 3 stages by varying the factorization factors. Let the size of the scene in azimuth be 5 m, and the minimum range to the scene be 40 m (the same as in the dataset above). We start with 2 stages, and apply an image factorization factor of 4 and a receiver factorization factor of 2. This gives an approximation error of λ/30.3, and the algorithm takes … s to

complete. The same approximation error could be achieved with 3 stages, by applying equal factorization factors of 2 in all stages. However, the computation time is then raised to … s. This seems to conflict with the results in figure 18. The important thing to consider here is the approximation error. The reason that the computation time decreased with an increasing number of stages in figure 18 was that for each successive stage, the approximation error was doubled. To end up with a given approximation error using more than 2 stages, either the approximation error in the first stage has to be quite high, or the factorization factors for the image must be higher than the factorization factors for the receivers in order to keep the approximation error from increasing too rapidly. These two approaches will increase the computation time compared to operating with equal factorization factors for the image and the receivers. When the factorization factors for the image are higher than the factorization factors for the receivers, the number of subimages to beamform to in each intermediate stage increases faster, and more computation is required in the intermediate stages.

Figure 19: Approximation error as a function of the number of stages. The factorization factors for both the image and the receivers were 2.

Thus, the general trend is that a large approximation error gives small computation times, but for a given approximation error, the algorithm will be fastest with the smallest number of stages used to obtain it. However, two runs of the algorithm with the same number of stages and the same approximation error can produce images of different quality, and the speed can also differ. It depends on how the factorization factors are chosen.
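The additive error growth of figure 19, and the balance discussed here, can be reproduced directly from equation (45): with equal factorization factors the product L_A · L_I is constant, so the per-stage error is constant and the total grows linearly with the number of stages. A toy sweep (geometry values invented):

    def accumulated_error(L_A0, L_I0, R_min, Q_R, Q_I):
        # Accumulate the per-stage bound of equation (45) while the
        # subaperture grows by Q_R[i] and the subimage shrinks by Q_I[i].
        L_A, L_I, total = L_A0, L_I0, 0.0
        for q_r, q_i in zip(Q_R, Q_I):
            L_A, L_I = L_A * q_r, L_I / q_i
            total += L_A * L_I / (4 * R_min) if L_A <= R_min else L_I / 2
        return total

    # equal factors: the per-stage error is constant, so the total grows
    # linearly with the number of stages, as in figure 19
    for n in range(1, 6):
        print(n, accumulated_error(0.0825, 5.0, 40.0, [2] * n, [2] * n))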

If the vector of factorization factors (for all stages) for the image is equal to the vector of factorization factors for the receivers, the approximation error is the same, independent of the values in the vectors. However, the speed of the algorithm will be lower if high values are placed near the end of the vector, as that leads to many subimages. It also makes the quality of the images lower, as more interpolation is applied to the same data. As a consequence, it is not meaningful to directly compare the approximation error versus time for different numbers of stages, since the same approximation error in a certain stage can produce different results. Figure 20 tries to do this anyhow, but one must keep in mind that this is just an example of computation times; other choices of factorization factors would produce slightly different results. The plot can, however, give us an indication of the general behaviour of the computation time for different approximation errors and numbers of stages. The approximation error is plotted on a logarithmic scale to enhance the features.

Figure 20: A comparison of the computation time for different numbers of stages (2 to 6) and different approximation errors.

For a higher number of stages, not as many approximation errors are shown. This is because both the image and the number of receivers have to be large if they are to be divided many times. For this dataset, 7 stages was only possible when all the factorization factors were equal to 2. For 6 stages it was not possible to obtain as high approximation errors as in the simulations where a lower number of stages was applied. Thus the graphs in the plot are not of equal length. For a larger number of receivers, lower approximation errors for a higher number of stages could have been obtained. Also remember that the number of receivers and pixels has to be of such

an order that dividing them by the factorization factors produces whole numbers. It is also important to note that the approximation error depends upon the size of, and minimum range to, the scene, as well as the number of pings and receivers. For all these reasons, it is nearly impossible to find the optimum set of parameters for a general data set. Hence, [Banks, 2002, chapter 6] has made a program to find the optimum number of stages given the image and aperture geometry. There are, however, some guidelines one can follow in order for the algorithm to be fast:

- Use as high an approximation error as possible.
- If a certain approximation error is required, try to use as few stages as possible to obtain it.
- If it is desirable to use one (or a few) high value(s) for the factorization factors in some stage(s), apply these in the first stage(s).

It was not the purpose of this thesis to compare FFBP with the wavenumber algorithm. [Hunter et al., 2003] have done so, and they state that FFBP can only outperform the wavenumber code with regard to speed for high approximation errors, in which case the image is usually useless.

5.2.2 Quality

Let us look a bit closer at the quality of the final image, and how it is connected to the speed and the approximation error of the algorithm. As stated above, a high approximation error gives a short computation time. However, it also produces images of low quality. The image quality versus approximation error is tested in figures 21 to 36. 2 stages were applied for all the images, and all the images have the same number of pixels. Thus, a compromise has to be made between the speed of the algorithm and the quality of the final image.

As the number of stages increases, the algorithm's tolerance for the approximation error decreases. This means that an image which looks good with a certain approximation error and number of stages does not necessarily look good with the same approximation error, but made through more stages. The reason for this is that as the algorithm runs through more stages, more and more interpolation is done on the raw data. This, in turn, means that the interpolation errors in the data (not to be confused with the approximation error of the algorithm) increase. The limit at which the data can no longer be coherently combined (when the approximation error is larger than λ/4) can thus only be used in practice for one separate stage. We can see in figures 21 to 36, where only 2 stages were applied, that the resulting image qualities fit well with the theory.

Figure 21: Image made with BP.

Figure 22: Point spread function for the previous image.

Figure 23: Image made with FFBP and an approximation error of λ/….

Figure 24: Point spread function for the previous image.

Figure 25: Image made with FFBP and an approximation error of λ/15.

Figure 26: Point spread function for the previous image.

Figure 27: Image made with FFBP and an approximation error of λ/52.

Figure 28: Point spread function for the previous image.

Figure 29: Image made with FFBP and an approximation error of λ/10.

Figure 30: Point spread function for the previous image.

Figure 31: Image made with FFBP and an approximation error of λ/7.

Figure 32: Point spread function for the previous image.

Figure 33: Image made with FFBP and an approximation error of λ/3.

Figure 34: Point spread function for the previous image.

Figure 35: Image made with FFBP and an approximation error of λ/1.

Figure 36: Point spread function for the previous image.

Figure 37: The limit of the approximation error (in fractions of λ) at which the images become so bad that they are useless, as a function of the number of stages.

In figure 37 it is illustrated at which approximation error the quality of the images becomes unacceptable for the respective numbers of stages when more than 2 stages are applied. This is just an example for some given simulations; other choices of parameters would produce different results, but the general tendency is correct. This feature of FFBP (that the tolerance for the approximation error decreases through the stages) is not desirable, but can probably be minimized by use of better interpolation methods in azimuth. However, good interpolation methods are computationally expensive, so the savings of FFBP may be lost. A compromise between speed and interpolation method must be made. It may also be caused simply by a suboptimal implementation.

As with computational speed, here are some guidelines for how to produce images of good quality:

- Use as small an approximation error as possible.
- If a certain approximation error is required, try to use as few stages as possible to reach it.
- If it is desirable to use one (or a few) high value(s) for the factorization factors for the image in some stage(s), apply these in the first stage(s).

The two last points in the list are valid for obtaining both high speed and good image quality with FFBP. However, the first point is in conflict with the first point in the corresponding list for high speed given in subsection 5.2.1, hence there will always be a trade-off between speed and quality.

5.3 Remarks

This code is by no means optimized, and probably not without bugs. The memory use has not been optimized according to [Banks, 2002, chapter 6] either. Hence there may be a gap between these results and the results obtained by others. For example, [Banks, 2002, chapter 6] has found that it can in some cases be feasible to use 6 stages in FFBP; with this code, 6 stages was never the optimum number of stages. These gaps may be caused simply by different choices of parameters for the SAS system, but it is hard to tell without further testing of the code. So the results presented here should not be interpreted as final; rather, they are a pointer to how the algorithm behaves under certain conditions. A lot more testing would have to be done before any watertight conclusions could be presented here.

6 Experimental results from the HUGIN AUV

In this section the SAS system the code is intended for will be presented. First, an explanation of what an AUV is, and why it is a good idea to use one instead of a standard deep-towed sonar system, will be given. Then some specifications of the HUGIN family of AUVs are stated, before the SAS system intended for HUGIN (which is under development) is presented. At the end, we will look at results from applying FFBP to data from the Edgetech SAS. HUGIN means High-precision Untethered Geosurvey and INspection system.

6.1 The HUGIN family of AUVs

The majority of sonar systems are towed systems where the sonar is carried by a platform (often called the towfish) towed after a boat. There are several drawbacks to this approach. A towfish can be positioned acoustically from the boat in depths of less than about 800 meters, but this method breaks down at greater depths. In these cases other positioning methods, such as using a long baseline (LBL) positioning system or using a second boat (often called a chase boat), can be applied. An LBL system works by measuring the propagation time of the signals from the towfish to a number of separate sensor elements. The measured times can be used to find the distances from the towfish to the sensor elements and hence the position of the towfish with respect to the sensor elements. If a chase boat is used, the position of the towfish is sent from the chase boat to the main boat using radio waves.

Due to the massive amounts of tow cable required (cable lengths of several thousand meters are not uncommon), the costs of deep-towed systems are extremely high. Such cable lengths demand huge handling systems and result in a substantial drag when towed. As a result, the survey speeds are limited to a few knots, and turns often take 4-6 hours to accomplish. A lot of time and money are spent unnecessarily on these operations. In addition, a deep-towed sonar hardly ever manages to stay on the prescribed survey line. Currents often push the towfish off line by hundreds of meters. This information was taken from [Northcutt et al., 2000].

To overcome these problems, Kongsberg Simrad AS and FFI (Norwegian defence research establishment) have developed a family of autonomous underwater vehicles (AUVs) to carry the sonar. They have been in commercial use in the offshore industry for several years. They are also used by the military, for mine hunting among other things. Within this program, a prototype single-sided high resolution interferometric SAS named SENSOTEK for the HUGIN AUV is under development. In addition, a two-sided SAS system has been procured on commercial terms from Edgetech. The Edgetech SAS is installed on the HUGIN 1000 AUV, which was recently mobilised on the Royal Norwegian Navy (RNN) mine hunter KNM Karmøy, while the SENSOTEK SAS is to be installed on HUGIN 1

(FFI's own research vehicle) in the near future. The SAS system from Edgetech is supplied complete with postprocessing software and hardware, where the SAS processing software is ProSAS from Dynamics Technology. A HUGIN AUV can be seen at launch in figure 38.

Figure 38: HUGIN 1000 at launch. The image is taken from [Hagen et al., 2003].

An AUV is a self-propelled, unmanned underwater vehicle that is controlled by an onboard computer. The sonar data collection process is highly simplified by using an AUV instead of a deep-towed vessel. Only one boat is required, and it can communicate directly with the AUV. Cost and logistics are reduced substantially when the tow vessel, tow cable, winch, etc. are eliminated. In addition, turns are done in practically no time, and the average speed the AUV can maintain while on track is higher than that of a towfish. AUVs have the advantage over standard deep-towed systems that they can move steadily through the water, and also near the sea floor. Although there may be some disturbances from currents, the AUV will stay within a few meters of the programmed line. The survey data will also be improved due to the fact that the AUV can maintain a constant height over the sea floor. This is very difficult with a deep-towed system. When the towfish is too high over the bottom, the data quality will be poor, and if it is too low, the imaged area will be very small, and the probability of a collision with the bottom will also increase. AUVs can have a high quality inertial navigation system (INS) installed, and the potential for making better images is thus present.

One of the historic problems with AUVs has been limited power resources. FFI has developed semi fuel-cell battery technology that can deliver up to 60 hours

vehicle endurance, and lithium polymer battery technology that gives up to 24 hours.

One hears about both AUVs and UUVs. An AUV is an autonomous underwater vehicle, while a UUV is an untethered underwater vehicle. To be truly autonomous, one could, for example, launch an AUV from the dock, let it go out and perform the required survey without external supervision, and the AUV would return to the dock a week later with all the data. So in reality, many AUVs are really UUVs. However, because AUV is the more recognized commercial term, most UUVs are referred to as AUVs, and this is also the case in this thesis.

The information below is taken from [Hagen et al., 2003]. There are two vehicle classes within the HUGIN family of AUVs. HUGIN 3000 can dive as deep as 3000 meters, has an endurance of up to 60 hours and uses a semi fuel-cell power source. These vehicles have enjoyed great success in the civilian survey industry, and have accumulated a large number of billed line kilometers. In 2002, FFI and Kongsberg started a project aimed at developing a smaller vehicle, named HUGIN 1000. This AUV can go down to 1000 meters and uses the lithium polymer battery technology. Early in 2004, the first HUGIN 1000 prototype AUV for military applications was completed and delivered to the Royal Norwegian Navy. HUGIN 1000 has a reduction in weight and volume of up to 50 % compared to HUGIN 3000, and thus provides easier handling on board a boat. Table 7 lists some of the key ratings and specifications for the HUGIN 1000 vehicle.

Length              … m
Max diameter        0.75 m
Volume              … m³
Depth rating        1000 m
Speed range         2-6 knots
Nominal speed       3-4 knots
Motion stability    < 0.5° at 4 knots
Max pitch angle     ± 50°
Turn radius         10 m at 4 knots
Energy capacity     5-15 kWh
Max power           2 kW
Endurance           24 hours at 3 knots

Table 7: Specifications for the HUGIN 1000 vehicle

6.2 The SAS program for HUGIN

The information in this subsection is taken from [Hansen et al., 2004]. The main goal of the SAS program is to develop a two-sided interferometric

SAS for the HUGIN 1000-MR, which is to be delivered to the Royal Norwegian Navy mid 2005. The hardware and electronics are developed by Kongsberg, while the signal processing is developed by FFI. The system consists of two vertically displaced full-length receivers, each with 96 elements of size λ, and a two-dimensional phased array transmitter with full flexibility in the vertical plane. The system has a bandwidth of up to 50 kHz and can operate at 270 meters range at 1.5 m/s with an overlap of 1.33 (24 elements), equivalent to 1.5 km²/h for a one-sided system (270 m × 1.5 m/s × 3600 s/h ≈ 1.46 km²/h). Motion compensation is done by integrating the DPCA technique with the aided INS. DPCA-estimated sway and surge are used as aiding sensors (similar to a correlation velocity log (CVL)) for the navigation system. There are two different operational modes: conventional strip-map mode and multi-look mode. In multi-look mode, the independent images can be used either to reduce speckle in the image, to produce images at different aspect angles for multi-aspect shadow classification, or to improve the phase unwrapping for full resolution interferometric processing. The aim of this thesis was to make FFBP run with data from the Edgetech SAS.

SENSOTEK SAS                One-sided interferometric SAS; theoretical resolution of 1 × 2 cm; goes on HUGIN 1 in 2004
Edgetech SAS                Two-sided non-interferometric SAS; theoretical resolution of … cm; installed on HUGIN 1000 late 2003
Possible delivery to RNN    Two-sided interferometric SAS; theoretical resolution better than 5 × 5 cm; on HUGIN 1000-MR mid 2005

Table 8: Overview of the different SAS systems in the HUGIN AUV program

6.3 Experimental setup

The data set used here was provided by FFI. The data was recorded by HUGIN 1 off the coast of Horten on 1 December. When reconstructed correctly, one should see a wreck, which is probably a fishing vessel. The physical array consists of 6 elements and is … m in extent. 200 pings were used in the reconstruction. The center frequency was 125 kHz with a bandwidth of 15 kHz.

Figure 39: Plots of how the sonar moved through the data collection period (y variation, roll, pitch and yaw).

This motion was accounted for in the data received from FFI. The dataset used in this thesis has been navigated using the realtime navigation solution from the inertial navigation system. No DPCA has been applied. A simple form of autofocusing (related to contrast optimization) has been applied to compensate for an unwanted squint in the system. The data has been motion compensated and regridded onto a rectangular grid. This causes some residual grating lobes, which can be reduced by doing bistatic imaging directly. The vehicle motion is shown in figure 39.

6.4 Imaging results

The image reconstructed with BP can be seen in figure 40. This image gives a good example of how the shadow can be useful in classifying objects. One can clearly see the mast as well as the rest of the outline of the wreck. The big test for FFBP will thus be to reconstruct the image with the same clear shadow.

When trying to reconstruct real data with FFBP, one can face some limitations if the data are not appropriate. By appropriate it is meant that it is possible to reconstruct using FFBP with an acceptable error. In this case, the data provided were not appropriate; there was no way of both getting an acceptable error and seeing the whole wreck in the image. The wreck is about 25 meters in the x-direction,

Figure 40: Image of the whole scene made by BP. The axes are azimuth and range in meters.

The wreck is about 25 meters in the x-direction, and located approximately 60 meters from the sonar. Edgetech SAS is a narrowbeam system, which means that the size of each element is quite large. The size of the elements affects the total approximation error of FFBP. By inserting the parameters of the Edgetech SAS into the equation for the maximum range error (40), we see that the maximum image size we can use FFBP on is about 7 m, much less than the extent of the wreck. The maximum image extent can be raised by applying high numbers for the factorization factors for the image; however, this demands huge amounts of memory, and as we saw with the simulated data, the algorithm will then be slow, maybe even slower than BP.

Two operations could reduce the approximation error in this case without applying high numbers for the factorization factors of the image: increasing the minimum range to the scene, or decreasing the image size in the x-direction. The minimum range to the scene can be no larger than 60 meters if the wreck is to be seen, and we wish to see the whole wreck. This is still not enough to obtain an acceptable approximation error.

The fact that the data set was not suitable for FFBP brought up some interesting questions: is it possible to divide up the image scene, run FFBP on the different parts and then combine them into one image? And how will this method perform with respect to quality and computational load compared to standard FFBP? The image was divided into a mosaic, and FFBP was run on each patch; a sketch of this procedure is given below. One apparent drawback with this mosaicing is that much of the gain in computation time from using FFBP is lost. However, the mosaicing is valid for investigating the quality of the images made with real data and FFBP. Thus, this chapter will not discuss the speed of the algorithm for the real data set, but instead concentrate on the quality of the images. The test results regarding speed would not differ much from those that could be obtained by simulating the same SAS system as the one the real data were collected by.

To image the whole area in figure 40, the number of patches in the mosaic would have to be very high to get an acceptable error. The tests were instead made with a smaller patch of the image including the wreck. For comparison, a smaller patch of the image made with BP is shown in figure 41. As the size of the image was reduced, the number of pixels was also reduced from 1024 to 512. The same was done for the following images made with FFBP. The different patches in the mosaic will have different approximation errors when made with FFBP because of their varying minimum range to the aperture. Thus, only the worst error is stated here. An image made with 64 patches is shown in figure 42. N_s = 2, Q_R = 2 and Q_I = 2 were used in all the tests of FFBP. The maximum error in the image is λ/13, but it is lower at long ranges. As can be seen, FFBP is capable of reconstructing the shadow. Errors due to the mosaicing come in addition to the FFBP errors, hence these tests cannot be treated as reliable. They are only intended to show that the algorithm works.
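The mosaicing procedure described above is simple to express in code. The sketch below is illustrative only: it assumes a callable ffbp(data, geometry, patch_extent) implementing the factorized backprojection of chapter 4, and all names are hypothetical rather than taken from the implementation actually used in these tests.

```python
import numpy as np

def mosaic_ffbp(data, geometry, scene_extent, n_x, n_y, ffbp):
    """Divide the scene into an n_y-by-n_x mosaic of patches, run FFBP
    on each patch separately, and paste the results into one image.
    scene_extent is (x0, x1, y0, y1) in meters; ffbp is assumed to
    return a 2D image array of a fixed size for any patch extent."""
    x0, x1, y0, y1 = scene_extent
    xs = np.linspace(x0, x1, n_x + 1)
    ys = np.linspace(y0, y1, n_y + 1)

    rows = []
    for i in range(n_y):
        row = []
        for j in range(n_x):
            # Each patch has a much smaller extent than the full scene,
            # and therefore a smaller FFBP approximation error.
            patch_extent = (xs[j], xs[j + 1], ys[i], ys[i + 1])
            row.append(ffbp(data, geometry, patch_extent))
        rows.append(np.hstack(row))

    # Paste the rows of patches together into the full image.
    return np.vstack(rows)
```

Since FFBP is restarted from scratch for every patch, the subaperture beams computed for one patch cannot be reused for its neighbours, which is why much of the computational gain of FFBP is lost, as noted above.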

Figure 41: Image made with BP of a part of the area.

Figure 42: Image made by FFBP and mosaicing. The maximum approximation error is λ/13.

An image made with 16 patches is shown in figure 43. The maximum approximation error in this image is λ/6. There is not much difference from the image made with 64 patches. For comparison, an image without mosaicing is shown in figure 44. It is useless; the maximum approximation error is λ. More images could have been made with other approximation errors, but since these tests were biased anyway, only three images made with FFBP are shown. Also for comparison, FFI provided an image of the same scene made with the wavenumber algorithm. It is shown in figure 45.

After these tests, it is clear that FFBP is not a good algorithm for narrowbeam SAS systems, as they will have to use mosaicing to obtain an acceptable approximation error. Another aspect of using FFBP with narrowbeam SAS systems is the azimuth filtering. Azimuth filtering is important also when using BP to reconstruct narrowbeam SAS data. The problem with azimuth filtering in FFBP is how (and at what stage) to apply it. It has not yet been documented anywhere in the literature. For these tests, a method that seemed to work was applied. It will not be discussed here, however, as it requires much more study. This is definitely an aspect of FFBP that should be addressed in the future.
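The numbers λ/13, λ/6 and λ can be given some intuition by converting each maximum range error into a two-way phase error, since a pixel is focused only if the contributions from all pings add up nearly in phase. The snippet below does this conversion, assuming the stated error is a one-way range error so that it enters the echo path twice; this convention is an assumption made for illustration, not a result taken from this chapter.

```python
import math

# Convert the maximum range errors of the three FFBP tests above
# into two-way phase errors (echo phase varies as 4*pi*r/lambda).
tests = [("64 patches", 1 / 13), ("16 patches", 1 / 6), ("no mosaic", 1.0)]
for name, frac in tests:
    phase = 4 * math.pi * frac            # two-way phase error [rad]
    print(f"{name:>10}: range error = {frac:.3f} lambda "
          f"-> phase error = {math.degrees(phase):6.1f} deg")
```

A fraction of a cycle (about 55° for λ/13 and 120° for λ/6) still lets the ping contributions add up roughly coherently, while a full wavelength of range error gives two whole cycles of phase error, which is consistent with the image in figure 44 being useless.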

Figure 43: Image made by FFBP and mosaicing. The maximum approximation error is λ/6.

Figure 44: Image made by FFBP. The maximum approximation error is λ.

Figure 45: Image made by the wavenumber code. The image was provided by FFI.
