Ultra Wideband Synthetic Aperture Radar Imaging: Imaging algorithm



Ultra Wideband Synthetic Aperture Radar Imaging: Imaging algorithm

by

N.A. Cancrinus
G.F. Max

in partial fulfillment of the requirements for the degree of Bachelor of Science in Electrical Engineering at the Delft University of Technology, to be defended publicly on Wednesday July 5, 2017 at 11:00 AM.

Student numbers: N.A. Cancrinus, G.F. Max
Project duration: April 24, 2017 to July 7, 2017
Supervisor: Ir. P. Aubry
Thesis committee: Prof. dr. ir. A. van der Veen, TU Delft; Ir. P. Aubry, TU Delft; Dr. D. Cavallo, TU Delft

Preface

The goal of this project was to implement the technique of Synthetic Aperture Radar to achieve higher localization accuracy of cars and to be able to obtain semi real-time information on the environment around roads. To achieve this, different imaging algorithms have been implemented, and a single method has been extended to directly accommodate the road-mapping application. Different measurements were done to test the algorithms, and the results were promising. It is hoped that the use of this technology will become widespread and that it can help create a safer and more efficient world of traffic, where increased autonomy is facilitated.

N.A. Cancrinus
G.F. Max
Delft, June 2017

Contents

1 Introduction
  1.1 Basics of radar
  1.2 Basics of Synthetic Aperture Radar
  1.3 Basics of Ultra Wideband radar
  1.4 The goal of the project
  1.5 Problem definition
  1.6 Structure of thesis
2 Program of requirements
  2.1 Requirements for the entire system
  2.2 Requirements for the imaging group
3 General imaging algorithms
  3.1 Definition of variables
  3.2 Pulse Compression
  3.3 Sum-and-Delay algorithm
  3.4 Back-projection algorithm
  3.5 Optical algorithm
    3.5.1 Data representation
    3.5.2 Range migration
    3.5.3 Azimuth compression
    3.5.4 Variable focus
    3.5.5 Schematic representation
  3.6 Results
    3.6.1 Simulator
    3.6.2 Results of real data
4 Application specific imaging algorithms
  4.1 Introduction
  4.2 Variable-angle SAR
  4.3 Free-path SAR
  4.4 Results
5 Conclusion and future work
  5.1 Conclusion
  5.2 Future work
  Acknowledgements
A Appendix - MATLAB-codes
  A.1 MATLAB code - Pulse compression
  A.2 MATLAB code - Sum-and-delay algorithm
  A.3 MATLAB code - Backprojection algorithm
  A.4 MATLAB code - Range migration
  A.5 MATLAB code - Optical algorithm
  A.6 MATLAB code - Freepath backprojection
B Appendix - Fulfillment of the requirements
  B.1 Global requirements for entire project
  B.2 Requirements for image formation subgroup
Bibliography

1 Introduction

In this chapter some of the most important concepts that are used in this project will be introduced. After this, the goal of the entire project will be defined. The division of the project into subgroups will be explained and the goal of the subgroup of this thesis will be specified. Finally, the structure of the rest of this thesis will be described.

1.1 Basics of radar

In RADAR (RAdio Detection And Ranging) an electrical signal is offered to an antenna, which transmits a corresponding electromagnetic signal. This electromagnetic signal interacts with the environment visible to the antenna, which creates reflections. A part of these reflections is received by an antenna. This antenna can be the same one as the transmitting antenna, but it can also be a different one. The result is an electric signal that can be analysed. This concept is shown as a diagram in figure 1.1.

Figure 1.1: A diagram showing the basics of radar. g(t) is the transmitted signal, s(t) is the received signal.

The time-points at which the received signal contains reflections give information about the distance between objects in the environment and the antenna. The directivity of the antenna can give some information about the direction in which objects are located: the more directive an antenna is, the smaller the area in which a detected object can be located, and vice versa.

1.2 Basics of Synthetic Aperture Radar

One of the disadvantages of using only a single radar is that there is little information about the direction. This can be improved by using a bigger radar aperture, i.e. a larger area in which signals can be received. By doing so the resolution in the direction can be increased; however, building antennas larger than several meters is in most cases impractical. It is possible to overcome this issue by using an array of smaller antennas. This way not only can the construction costs be reduced, but the structure also has interesting properties: the received (or transmitted) signals can be steered towards a specific direction, which is called beam-forming. Thus, the data collected by each individual radar can be

processed to create a better image with a much higher SNR compared with using a single stationary radar. The costs can be further reduced by using a single but moving radar, with which the (virtual) array of smaller antennas can be reconstructed. The aperture created this way is synthetic and is much larger than that of a single radar, hence the name synthetic aperture radar, or SAR. Figure 1.2 illustrates the situation. The fundamental idea behind SAR imaging is that the data acquisition happens at different places. Furthermore, broadside imaging is used, meaning that the direction of movement (the x direction) and the direction of the antenna (the y direction) are perpendicular. When the antenna movement stays in one direction the technique is called stripmap SAR; when the antenna moves along a circle it is circular SAR. In the literature the x axis is often called the azimuth, cross-range or slow-time direction, and the y axis the (down-)range or fast-time direction.

Figure 1.2: Basic setup for SAR (top and side view). The antenna moves along the azimuth direction and takes measurements along the range direction.

The area being mapped by SAR is called the scene. If an object is not located in the plane of the scene while a reflection of that object is received, it will still be mapped to a location in the plane, as a virtual target. In this case the distance between the virtual target and the measurement locations stays the same. Hence, when seen from above, the virtual target will be seen further away (see figure 1.3).

Figure 1.3: The effect of having a target out of the plane of view. The target maps to a virtual position that is further away than if the target were dropped onto the xy-plane.

The way in which the azimuth resolution can be increased when multiple measurements are used can be described as follows. When a certain object gives a certain reflection at a certain measurement location, it can only be said that the object is within the area visible to the antenna at this measurement location. When the object is also within the area visible to the antenna at the next measurement location, it can be said that the object must be in a location visible from both measurement locations. If more measurement locations are used, the area in which the object can be becomes smaller and smaller. In essence, the differences between the measurements are used to decrease the actual area in which the object can be. This decrease in area corresponds to an increase in azimuth resolution.
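This effect can be quantified with a standard stripmap SAR rule of thumb, stated here for orientation only; it is not used elsewhere in this thesis. An antenna of physical length $L$ has a beamwidth of roughly $\lambda / L$, so a target at range $R$ stays visible over a synthetic aperture of length $L_{sa} \approx \lambda R / L$, which gives an azimuth resolution of

$\delta_{az} \approx \frac{\lambda R}{2 L_{sa}} \approx \frac{L}{2}$

independent of the range: counter-intuitively, a smaller real antenna yields a longer synthetic aperture and thus a finer azimuth resolution.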

This concept is also shown in figure 1.4. Figure 1.4a shows the case when only one measurement location is used; in this case the area in which the object can be is rather large. Figure 1.4b shows the situation when multiple measurement locations are used; in this case the azimuth range at which the object can be located is a lot smaller.

Figure 1.4: The effect of taking measurements from multiple positions, as seen from the top: (a) resolution with one measurement, (b) resolution with multiple measurements. The target (red circle) lies within the range of the antenna and the possible locations for the target are shown. This area is reduced and thus the azimuth resolution is increased.

The literature about synthetic aperture radar (SAR) is quite developed; there are many well-written books and articles for novices [1-4]. The idea of SAR dates back to the 1950s. With this technique, airborne and later satellite images of the surface could be made from large distances, which is also today the most widely used application of SAR. Nowadays, thanks to the increased computational power, it is applied in different areas, such as geoscience and security applications. To convert the data of different measurements into an image, a certain algorithm is needed. The algorithms that were used will be described later, in chapter 3. Traditionally, the (digital) computational power was rather limited, and imaging algorithms were therefore implemented by means of analog systems [5, 6]: to process the data, specially formed optical lenses were placed at well-defined places. As technology advanced, digital systems became increasingly important in SAR imaging [7]. At the moment the lack of computational power is a significantly smaller problem and there is much more information on digital signal processing for SAR [8, 9]. There are many possibilities to construct an image using spectral estimation techniques [10], with a wide range of computational complexity and image quality. Another frequently used technique is compressed sensing [11]. When handling large amounts of data the computational time increases, and it may become infeasible to create an image within a realistic time. Compressed sensing helps to significantly reduce the size of the data with minimal loss of detail, and consequently the image reconstruction becomes faster than with ordinary methods [12, 13]. SAR can also be used to create 3D images [14-16]. The strength of such an image is that the depth of the scene also becomes visible, which makes it practical for mapping inaccessible areas and for (military) reconnaissance. Furthermore, when suitable frequencies are applied the signal can penetrate through the surface, which makes it a potential candidate for detecting mines [17] or for collecting scientific data.

1.3 Basics of Ultra Wideband radar

Whereas the azimuth resolution is determined by the synthetic aperture and the spacing between measurements, the range resolution is determined by the time-duration of the transmitted pulse. When a pulse is shorter, two received reflections can lie closer together without overlapping. In the ideal case a delta-pulse is transmitted, because it gives an infinitely high range resolution. The difference between short and long pulses is illustrated in figure 1.5. Figure 1.5a shows the situation when a relatively long pulse is used; in this case two objects cannot be distinguished because the reflections overlap. Figure 1.5b shows the situation when a relatively short pulse is used.
In this case the objects can be distinguished because the reflections do not overlap. So, in general, the resolution increases when a shorter transmitted pulse is used. A short pulse in the time domain translates to a wide band in the frequency domain. For this reason, a system that works with a relatively short pulse duration is called an ultra wideband (UWB) system.
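The relation between pulse duration and range resolution can be made quantitative with the standard radar expression below; the 7.5 GHz figure that follows is an illustrative consequence of the requirement in chapter 2, not a number taken from this chapter:

$\delta_r = \frac{c\,\tau}{2} = \frac{c}{2B}$

where $\tau$ is the (compressed) pulse duration and $B \approx 1/\tau$ the corresponding bandwidth. A range resolution of $\delta_r = 2$ cm, as required in chapter 2, thus corresponds to a bandwidth of about $B = c/(2\delta_r) \approx 7.5$ GHz, which illustrates why an ultra wideband system is needed.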

Figure 1.5: The range resolution increases when a shorter pulse is used: (a) a long transmitted pulse, (b) a short transmitted pulse. In (a) the two objects cannot be distinguished because the reflections overlap. In (b) the objects can be distinguished because the reflections do not overlap.

When a transmitted pulse is very short, it is difficult to put a reasonable amount of power into it. Therefore, in practice, often a longer transmitted signal is used that can easily be converted to a short pulse. In [18] the effects of the two most commonly used signals (chirp and Gaussian) are evaluated in terms of the achieved resolution. One of the advantages of using (microwave) UWB is its wall-penetrating property. Near-gigahertz frequencies offer a good compromise between the resolution of the image and the ability to propagate through solid (non-conducting) obstacles such as walls or the ground. By making use of these properties, the interior of a house can be mapped from outside [19], which has undeniable potential in military applications. Care should be taken when mapping closed spaces, as multipath reflections can lead to virtual images and/or an inaccurate position of the target. These unwanted effects can be compensated for [20, 21]. Another potential application for UWB SAR is in the medical field. Since below-infrared (e.g. gigahertz to terahertz) frequencies are non-ionizing, they can be used harmlessly for cancer tissue detection [22]. Furthermore, the resolution achieved by these microwave signals is sufficient to identify unhealthy tissues, thus providing a safe substitute for X-rays.

1.4 The goal of the project

Current technologies (e.g. GPS) do not provide sufficient accuracy for the localization of vehicles in urban areas or in closed spaces. In most cases, e.g. on highways and in rural areas, imperfect localization does not lead to problems. However, it may become inconvenient for the driver when the wrong path is taken in an unknown environment because of incorrect localization. The nuisance of turning the wrong way may grow into dangerous or even life-threatening situations in case the vehicle is engaged in autopilot, let alone if it is fully autonomous. To tackle this problem a radar can be mounted on the vehicle, which can take measurements on the go. The data that is obtained by SAR measurements can be used to map the vehicle's surroundings [23], improving the localization accuracy. The goal of this project is to implement the technique of SAR to achieve higher localization accuracy of vehicles and to be able to obtain semi real-time information on the environment around roads. UWB is suitable for this, since centimeter (down to millimeter) range resolution can be achieved at relatively low cost; for this application a higher spatial resolution is impractical. The SAR technique is particularly suitable since the vehicle moves. Besides that, the majority of new cars, both self-driving and traditional, come with some kind of built-in radar system, mostly for collision detection and/or avoidance. The fact that the key components are already present in vehicles further facilitates the implementation of the system. Features of the environment can be extracted from the created images and then used for accurate localization of the car [24], possibly together with other systems. When more cars are equipped with such a system, the map discovered by a single car can be shared with other cars.
Later, based on these maps, a collective database can be built for localization purposes. This becomes especially important as self-driving cars with full autonomy must know accurately where they are located. Furthermore,

the map can be updated in case of unexpected obstacles on the road (such as trees falling on the road, construction works, or even accidents), and other vehicles as well as emergency services can be alerted in real time. It stays outside the scope of this thesis, but [25] shows that (satellite) SAR images can be used for mapping purposes.

1.5 Problem definition

There are several must-haves that need to be dealt with in order to achieve the previously mentioned goal. It is necessary for this technology to be able to take measurements from different positions, and the coordinates where these measurements were taken are of great importance. The radar must be able to transmit and receive wideband signals; for that, an antenna and a data acquisition system with suitable properties should be utilized. Finally, the data has to be processed to create an image. The project is divided into smaller groups, where each group tackles a different problem of the realization. Figure 1.6 shows the different groups. The movement of the radar system is handled by the hardware group. Data acquisition is performed by the antenna group. The task of the imaging group, our group, is to convert the obtained data into an image that depicts the scene.

Figure 1.6: The overview of the project: the hardware group, the data acquisition group and the imaging group.

The theses of the hardware group and the data acquisition group can be found in [26] and [27] respectively. The inputs of the subsystem explained in this thesis are the measurements taken from different locations and the coordinates of these measurements. The output of the subsystem is an image that represents the scene. Therefore, the task of the imaging group can be summarized as follows: given SAR measurements of a scene and the locations from which they were taken, produce an image that corresponds to the scene.

1.6 Structure of thesis

The thesis is structured as follows. In chapter 2 the program of requirements for road mapping will be described. In chapters 3 and 4 the designed imaging algorithms are explained. Chapter 3 contains three general algorithms that can be used for creating an image, as well as an evaluation of these algorithms with simulations and measurements. Chapter 4 builds a connection between the chosen practical application and the algorithms, illustrates how they can be applied in the field, and finally shows and discusses the results of the measurements. Chapter 5 gives a conclusion together with future work.

2 Program of requirements

2.1 Requirements for the entire system

The goal of the project is to use SAR techniques to improve the localisation and navigation of cars. The idea behind this is that antennas mounted on the cars can be used to do multiple subsequent measurements. The data of these measurements can then be used to generate an image that represents the environment. This image can then be compared with a database of known images to determine the location of a car with higher accuracy. The newly scanned images can also be used to update the database and thus obtain semi real-time information about the road system. These last two points fall outside the scope of this project; the focus of this project is the generation of the SAR images.

The requirements for the entire system can be divided into several parts. The functional requirements are the specifications that describe what the system must do and what is fundamentally important when doing this. These are as follows:

[1.1] The system should be able to do multiple subsequent measurements.
[1.2] The data of these measurements should be converted into an image that represents the scanned environment.
[1.3] The system should be able to detect reflections within distances that are common in road environments. This means that the transmitted power and the measurement repetition frequency should be adequate.
[1.4] The system should be scalable to larger datasets.
[1.5] The resolution of the system should be high enough to be able to distinguish objects in a normal road environment. These objects could be buildings, fences, lamp posts or landmarks.

The implementation requirements are prerequisites that make the system easier to use in real applications. They are as follows:

[2.1] The needed hardware should be easy to implement in an actual vehicle.
[2.2] The image formation should be done semi real-time. This means that the time required to make the image should be smaller than, or of the same order as, the time it takes to do the measurements.

The representation requirements are boundary conditions that are important to facilitate functional use of the system. They are as follows:

[3.1] The produced image should be easily understandable by humans and should be usable for comparison with a larger dataset.

The safety requirements indicate how safe the system should be. They are as follows:

[4.1] The radiated power should be of such a level that there are no health dangers for people in the proximity of the antennas.

2.2 Requirements for the imaging group

The goal of this subsystem of the project is to implement an algorithm that creates an image from the dataset obtained by the antenna. This algorithm must satisfy certain requirements, which can be divided into several parts. The functional requirements describe what the subsystem must do and what is fundamentally important when doing this. These are as follows:

[A.1] The subsystem must make an image where the amplitude of a location in the image represents the probability that there is a reflecting surface in the corresponding physical location. The inputs used to do this are the obtained measurement data and the locations where the measurements were taken.
[A.2] The resolution in the range direction should be at least 2 cm.
[A.3] The resolution in the azimuth direction should be at least 2 cm.

The implementation requirements make the subsystem easier to use in real applications. They are as follows:

[B.1] The algorithm should be usable when the computational power is limited. This makes it possible to implement the algorithm on a small device or embedded system.
[B.2] The system should be easily scalable to larger planes, so the algorithm should be implemented in an efficient way. Pure brute-force algorithms should be avoided.
[B.3] The algorithm will be implemented in MATLAB and should work without external applications.

The representation requirements are boundary conditions that are important to facilitate functional use of the subsystem. They are as follows:

[C.1] The result should be presented using a colour map.
[C.2] The axes should match the actual physical dimensions of the plane that is imaged, so that no extra operation is required.

3 General imaging algorithms

In this chapter three imaging algorithms will be implemented. First, the elementary variables used throughout this chapter are introduced. Then the sum-and-delay algorithm will be explained. After that the backprojection algorithm will be elaborated. Finally, the optical algorithm will be described. These algorithms are assessed by means of simulation, and experimental results will be shown.

3.1 Definition of variables

The setup for the radar measurements is illustrated in figure 3.1. The x coordinates of the measurements taken by the moving antenna are stored in a vector m. At each measurement point m_i the received data is represented as s_i, a vector with N samples. The elements of s_i are the time-sampled, quantized amplitude values of the received electromagnetic signal. The first and last samples of s_i correspond to the real distances y_begin and y_end, respectively. It is worth noting that y_begin can have a negative value if, for example, the received data is zero-padded at its beginning. The region of interest, which is a subset of the measurements, is located between y_min and y_max. With an antenna of a certain directivity, these measurements correspond to the reflections of the objects in the scene within the visible beam angle θ. In a real antenna, reflections outside of the beam angle are also received; however, these are significantly smaller than the reflections inside the beam angle. Furthermore, reflections near the sides of the beam angle have less power than reflections in the middle of the beam angle. Throughout the thesis it is assumed that the antenna pattern resembles a square wave: every object within the beam angle can be detected with equal reflected power at a given distance, and no object can be found outside of the beam angle.

Figure 3.1: Overview of the measurement. x, y are Cartesian coordinates, m_i is the position from which the i-th measurement is done, y_begin and y_end are the (range) distances corresponding to the first and last samples, y_min and y_max define the range of interest, θ is the beam angle of the antenna.

As mentioned earlier, the task of this group is to produce an image based on measurements taken at different locations. The collection of these measurements is made available in matrix form:

$S = [\, s_1 \ \ s_2 \ \cdots \ s_M \,]$  (3.1)

As S contains time-sampled information about the amplitude of the received signal with sample frequency F_s, it is helpful to define a function that converts real-life distances to samples. The electromagnetic (EM) wave that is reflected from an object at distance d travels 2d from and to the antenna. For the given application it can be assumed that the EM wave travels in free space, so the corresponding time-sample is given by:

$f(d) = \frac{2 d F_s}{c}$  (3.2)

It is worth noting that when the EM wave has to travel through materials with different electromagnetic properties (i.e. $\epsilon_r, \mu_r \neq 1$) this equation will give an improper result for the sample. Furthermore, it is possible that f(d) produces a non-integer number. Since it is not possible to select a non-integer element of s_i, (linear) interpolation between the two closest samples is taken as the result.

Figure 3.2: The inputs (S, m) and output (R) of this subgroup of the project.

The image to be created is a matrix R whose elements correspond to the reflected power at that point. Figure 3.2 summarizes the inputs and output of the imaging group.

3.2 Pulse Compression

As was explained in section 1.3, when a delta-pulse is transmitted it is hard to give this pulse an adequate amount of power. For this reason not a delta-pulse but a certain other signal shape is used; see also the thesis of the data acquisition group [27]. However, this signal shape can have a relatively low amplitude, even when the power is high. The absolute signal amplitude might therefore be lower than the noise, which can cause the received reflection to be poorly visible. To improve this, information about the shape of the transmitted signal can be used. This is achieved by taking the correlation of the transmitted pulse and the received signal. At the time-points where a reflection was received there will be a match between the pulse and the received signal, and the correlation will give a relatively high value. There will not be a good match between the noise and the transmitted pulse, and therefore the relative noise value will be suppressed. The pulse-compressed data p_i of measurement s_i can be written as:

$p_i[n] = (s_i * g^{-})[n]$  (3.3)

where $g^{-}[n] = g[-n]$ is the mirrored signal of the transmitted, time-sampled pulse g[n]. This equation holds relevant information only if the indices of the first (non-zero) elements of s_i and g match. In this case, the elements of p_i preserve the corresponding real-life distances, i.e. the distance corresponding to s_i[k] will be the same distance as for p_i[k]. Since the convolution lengthens p_i, it is truncated such that it has the same length as s_i, while maintaining the consistency between indices and distances. Similarly to S, the data after pulse compression is called P, where each column is a vector of pulse-compressed data:

$P = [\, p_1 \ \ p_2 \ \cdots \ p_M \,]$  (3.4)

As a result of the pulse compression, P has the same dimensions as S, and the distances corresponding to the samples in S are the same as in P.
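The pulse compression code used in the project is listed in appendix A.1. As a minimal sketch of equations 3.2 and 3.3 (the variable names and the sample frequency below are assumptions for illustration, not the thesis implementation), the operation could look as follows in MATLAB:

% Minimal sketch of pulse compression (eq. 3.3) and the distance-to-sample
% conversion f(d) (eq. 3.2). S is an N-by-M matrix with one measurement s_i
% per column; g is the time-sampled transmitted pulse.
c  = 3e8;                  % free-space propagation speed [m/s]
Fs = 10e9;                 % sample frequency [Hz] (assumed value)
f  = @(d) 2*d*Fs/c;        % distance [m] -> (fractional) sample index

N = size(S, 1);
L = numel(g);
P = zeros(size(S));
for i = 1:size(S, 2)
    pc = conv(S(:, i), flipud(g(:)));  % correlation = convolution with g[-n]
    P(:, i) = pc(L:L+N-1);             % truncate so p_i[k] keeps the distance of s_i[k]
end

The truncation index starts at L because the L-th sample of the convolution output corresponds to zero lag, which preserves the index-to-distance mapping described above.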

3.3 Sum-and-Delay algorithm

One algorithm to convert pulse-compressed data into an image is the sum-and-delay algorithm, which works as follows. A certain pixel in the scene (x_j, y_k) has a certain distance to a measurement point m_i = (x_{m_i}, 0). It is worth noting that the spacing and location of the pixels (x_j, y_k) can be arbitrary, as long as they are within the visibility of the measurements. To determine whether there is a reflecting surface at pixel (x_j, y_k), for every p_i the time-sample can be selected that corresponds to the distance between (x_j, y_k) and m_i. If there is a reflecting surface at (x_j, y_k), for every measurement point m_i a sample will be taken from the corresponding measurement p_i that has a relatively high value. For this reason it can be said that if all the samples of the different p_i corresponding to (x_j, y_k) are added and the result has a high value, the probability is high that there is a reflecting surface at (x_j, y_k). This procedure can be repeated for every j and k. When doing this, it is important to check whether a pixel is within the directivity of the antenna. Every pixel will give a result R[j, k] that represents the probability that there is a reflecting surface at (x_j, y_k). In this way the desired image is formed.

This procedure can be described mathematically as follows. The distance between a certain measurement point m_i = (x_{m_i}, 0) and a pixel in the scene (x_j, y_k) is given by:

$d_i(x_j, y_k) = \sqrt{(x_j - x_{m_i})^2 + y_k^2}$  (3.5)

The sample of a signal that corresponds to d_i(x_j, y_k) can be found using the function f in equation 3.2. The result at pixel (x_j, y_k), R[j, k], is found by adding the samples of the different pulse-compressed measurements that correspond to the distance between the given measurement and the pixel of interest (x_j, y_k). This is given by the following equation:

$R[j, k] = \sum_i p_i[f(d_i(x_j, y_k))]$  (3.6)

An advantage of the sum-and-delay algorithm is that it is accurate: a relative probability value is determined for every pixel separately. Another advantage is that the implementation of the method is straightforward. A disadvantage is that the algorithm determines the relative probability values for all pixels one by one. This is a brute-force method and is not very efficient. This property makes the approach not very scalable; the algorithm can become very slow for large datasets. Another disadvantage is that it is necessary to wait until all the measurements are done before the method can be started, even though it is technically possible to do most of the work while measuring. For these reasons it is desirable to implement some changes, or an entirely different technique.
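The project's MATLAB implementation is listed in appendix A.2; the following is only a minimal sketch of equations 3.5 and 3.6, with assumed names (P, m, theta, and f as in the previous sketch) and image pixels given by grid vectors x and y:

% Minimal sketch of the sum-and-delay algorithm (eqs. 3.5 and 3.6).
% R(k, j) corresponds to R[j, k] in the text: rows index y_k, columns x_j.
R = zeros(numel(y), numel(x));
for j = 1:numel(x)
    for k = 1:numel(y)
        for i = 1:numel(m)
            % skip measurements whose (square-wave) beam does not cover the pixel
            if abs(atan2(x(j) - m(i), y(k))) > theta/2, continue; end
            n  = f(hypot(x(j) - m(i), y(k)));  % eq. 3.5 converted to a sample
            n0 = floor(n); a = n - n0;         % linear interpolation weights
            if n0 >= 1 && n0 < size(P, 1)
                R(k, j) = R(k, j) + (1 - a)*P(n0, i) + a*P(n0 + 1, i);  % eq. 3.6
            end
        end
    end
end

The triple loop over pixels and measurements makes the brute-force character mentioned above explicit.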
3.4 Back-projection algorithm

To overcome the limitation of the sum-and-delay algorithm that it can only be started after all measurements are done, another approach should be developed. It is possible to implement the sum-and-delay algorithm in a slightly different way, which partially solves the problem. This other implementation of sum-and-delay is called the backprojection algorithm. Whereas in sum-and-delay the result is acquired per pixel, in backprojection the result is acquired per measurement. Changing the order means that a large part of the required operations can be done during the measurements. The only step that is still required after all the measurements have been done is the addition of all the partial results to form the final result R. The sum-and-delay algorithm in principle gives the same results as the backprojection algorithm.

The backprojection algorithm can be described mathematically as follows. The distance between measurement point m_i = (x_{m_i}, 0) and the coordinate (x_j, y_k) that corresponds to pixel [j, k] is given by equation 3.5. For every measurement a matrix Q_i is made that represents the scene. The entry of Q_i at pixel [j, k], which corresponds to the coordinate (x_j, y_k), is the sample of the measurement p_i that corresponds to the distance d_i(x_j, y_k). As before, the function f can be utilized to find this sample:

$Q_i[j, k] = p_i[f(d_i(x_j, y_k))]$  (3.7)

This should only be done when a pixel is within the antenna beam of the measurement of interest, just as was the case with the sum-and-delay algorithm. Another similarity is that the spacing of the pixels (x_j, y_k) can be arbitrary as long as they stay in the scene. As said before, when the distance does not directly match a sample of the signal, linear interpolation is used to find the adequate signal value. The final result R is then made by adding the different Q_i:

$R = \sum_i Q_i$  (3.8)

This procedure is also shown in figure 3.3: a certain measurement gives a certain part of the result.

Figure 3.3: The function of the backprojection algorithm. For a certain measurement a part of the result is obtained. Doing this for all measurements and adding the parts will yield the final result.

Besides the option of doing measurements and calculations simultaneously, another advantage of the backprojection algorithm is that it can be extended without much effort to other SAR configurations, such as for example spotlight SAR, which will be explained later. Even though there are added parallelization possibilities, there is still the disadvantage of a relatively large brute-force element, just as was the case for the sum-and-delay algorithm. This means that the scalability might still be improved.
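Appendix A.3 lists the project's backprojection code; a minimal vectorized sketch of equations 3.7 and 3.8 (same assumed names as in the previous sketches) is:

% Minimal sketch of backprojection (eqs. 3.7 and 3.8): the same samples as
% sum-and-delay, but accumulated per measurement, one partial image Q_i per p_i.
R = zeros(numel(y), numel(x));
[X, Y] = meshgrid(x, y);                        % pixel coordinates of the scene
for i = 1:numel(m)
    D  = hypot(X - m(i), Y);                    % distances d_i(x_j, y_k), eq. 3.5
    Qi = interp1(1:size(P, 1), P(:, i), f(D), 'linear', 0);  % eq. 3.7
    Qi(abs(atan2(X - m(i), Y)) > theta/2) = 0;  % zero outside the beam angle
    R  = R + Qi;                                % eq. 3.8
end

Because each loop iteration depends only on p_i, the partial image Q_i can be computed as soon as that measurement arrives, which is exactly the ordering advantage described above.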

3.5 Optical algorithm

A different algorithm, one that can be scaled more easily, is the optical algorithm. The reason this algorithm is called the optical algorithm is that it is a digital implementation of the original, optical method that was used to form SAR images. The original method was optical because there was limited digital processing power at the time. For this algorithm the data is first represented in a certain way, and then some operations are used to convert the representation into an image. In the following subsections a more detailed description will be given.

3.5.1 Data representation

To apply the optical algorithm, the data is first represented in a different way. An image is used where column i is equal to the pulse-compressed measurement p_i, so this image is a direct representation of P. Consequently, this approach is fundamentally different from the sum-and-delay and backprojection algorithms. This gives an image of the actual scene, but with some modification due to the effect of SAR imaging. To recreate the actual scene, the effect of the SAR imaging should be removed. The effect of SAR imaging can be understood with the help of figure 3.4. When the antenna moves along the synthetic aperture there will be a certain measurement where a certain object is visible to the antenna for the first time. At this measurement the distance between the object and the antenna is relatively large. Consequently, the time before a reflection from the object is received will also be relatively large. At later measurements the distance between the antenna and the object will be lower, and therefore the time until the corresponding reflection will be lower. The distance will be minimal when the antenna is below the object and will increase once the antenna moves further away from the object. When the object is located at (x_0, y_0), the distance d to the antenna at position (x, 0) can be given mathematically as:

$d = \sqrt{(x - x_0)^2 + y_0^2}$  (3.9)

Figure 3.4: The difference in the measured distances in case of a single (stationary) target located at (x_0, y_0). At the first and last measurements the distances are the largest; when the antenna is positioned exactly "below" the target (at x_0) the distance is the smallest.

This variation in the distance will lead to a curvature effect in the data representation. For example, if the scene were to look like figure 3.5a, SAR imaging would lead to the image with curves shown in figure 3.5b. The width of the curves is determined by the beam width of the antenna, which determines the moment at which an object is visible to the antenna for the first time.

Figure 3.5: The goal of the optical algorithm is to create the image of the scene (a, a simple example scene with multiple point targets) from the measured and pulse-compressed data (b).

The goal of the optical algorithm is to remove the effect of SAR imaging by applying a couple of operations. These operations should convert the curves to points at the minima of these curves. This is the case because the minima correspond to the actual locations of objects in the scene. So the image in figure 3.5b should be converted back to the image of the scene in figure 3.5a. In the optical algorithm this is done in two steps. In the first step the curves are converted to straight lines, to remove the effect of the distance variation as the antenna moves along the synthetic aperture [28]. This step is called range migration. The closer objects are to the radar, the more important this step is, because the distance variation is then relatively large compared to the length of the curves. In the application of road mapping this can therefore be crucial. In the next step the straight lines are converted to points that represent the locations of objects in the scene. This step is called azimuth compression. These two steps will be explained in more detail in the following two subsections.

3.5.2 Range migration

In range migration, the different parts of the SAR curves need to be shifted along the range direction in a certain way. However, how much the curves should be shifted in a certain place depends on the actual locations of the relevant objects, and in the first instance it is not known beforehand where the objects in the scene actually are. For example, the part at the end of a curve should be shifted more than a part around the middle of the curve. Because the locations of the objects, and thus of the curves, are unknown, a guess has to be made. This is done by selecting a certain subpart of the pulse-compressed data P, a certain window W_i. The window is selected in such a way that there is a possibility that a SAR curve lies in this window, see figure 3.6. A filter is applied to this window that applies a certain distance shift.

If there was a SAR curve in the middle of the window, the filter will transform this curve into a straight line. In case there is no curve in the window, or if the curve is not centered in the middle of the window, the distance shift accomplished by the filter will not transform the curve into a straight line, but into something else. This incorrect shift will cause some distortion in the final image. The same steps are repeated for all possible windows within the pulse-compressed data P. In this way it is determined for every window whether there was a curve inside it. These filtered windows are then passed on to the azimuth compression step, where further processing is done.

Figure 3.6: One of the possible range migration windows; a window has width W.

The width of the windows is determined by looking at the length of the SAR curves. This length is determined by the directivity of the antenna, as is shown in figure 3.6. The width of the windows should be chosen in such a way that an entire curve fits into it, even at the largest distance of interest y_max (also see figure 3.1). This is important, because when curves are cut off, information about the objects is thrown away. It is also important not to choose the window width much too large, because then more noise will be inside the window. The beam angle also causes the curves close to the antenna to be smaller than the curves far away from the antenna. In theory this leads to weaker image formation close to the antenna. This effect is, however, compensated by the fact that the electromagnetic waves are attenuated less at this distance. Therefore it was chosen to leave this as is. The width of the window in meters is W and the width in pixels is W_p. Every window is centered around a certain x coordinate. A window will provide the part of the result R at this x coordinate, as will be explained later in more detail. To obtain information in the result for every x coordinate for which a measurement was done, it is necessary that a window is centered around every measurement position. If a window were centered around an x coordinate that belongs to one of the first or last columns of P, there would not be enough other columns around this column to fill an entire window. This is a problem, because every column of a window needs to contain information; none can remain empty. This problem is solved by zero padding the left and right of the pulse-compressed data P, as shown in figure 3.7. The number of zeros necessary corresponds to a width of around W/2 meter on each side. In this way as many windows can be made as there are measurement points m_i.

Figure 3.7: Zeros are added to both sides of the original pulse-compressed data P to be able to select enough windows.

Mathematically the window selection can be described as follows. Zeros are added to the sides of the matrix P, creating a zero-padded version P_zp:

$P_{zp} = [\, 0 \ \ P \ \ 0 \,] = [\, p^{zp}_1 \ \ p^{zp}_2 \ \cdots \ p^{zp}_{M+W_p-1} \,]$  (3.10)

A window W_i is formed by taking a subset of the columns of P_zp:

$W_i = [\, p^{zp}_i \ \ p^{zp}_{i+1} \ \cdots \ p^{zp}_{i+W_p-1} \,]$  (3.11)

When a set of windows has been created, a filter has to be designed that performs the right operation on these windows. This filter is applied in the frequency domain, although there are possibilities to do this in the spatial domain as well. The filter should perform the operation that converts the image in figure 3.8a to the image in figure 3.8c.

Figure 3.8: In (a) a set of the original data curves is shown (no focus point). In (b) the range-migrated data curves are shown when only one y_focus is used: only one of the curves turns out straight, the other ones are out of focus, which will cause some distortion in the final image. In (c) the range-migrated curves are shown when multiple different y_focus are used; in this case all curves have become straight.

The distance shift that is required at a certain position in the image can be determined with the help of equation 3.9, which gave the distance between an object at position (x_0, y_0) and the antenna at position (x, 0). This distance should be changed to the distance between the object and the antenna when the antenna is below the object, which is y_0. This means that the following distance difference should be removed:

$\Delta d = \sqrt{(x - x_0)^2 + y_0^2} - y_0$  (3.12)

This equation shows that the necessary distance shift depends on the height of the object, y_0. This means that at different y values in the window a different distance shift is required. This cannot be achieved with a single filter in the frequency domain, because it would mean that the same spatial frequencies (similar curves) are treated differently. This is not possible, because the filter can only change the amplitude and phase per frequency. Later a certain trick will be applied to solve this problem, but for now it will be assumed that y_0 is set to a constant value y_focus, which determines how all curves in the window will be shifted. So all curves are shifted in the same way as the curve that would be located at y = y_focus. This will lead to some distortion, because the curves that are not at this y will be shifted with a certain error that grows larger the further the object is from y = y_focus. Figure 3.8b illustrates the effect of being out of focus. The distance shift that corresponds to y_focus is given as follows:

$\Delta d = \sqrt{(x - x_0)^2 + y_{focus}^2} - y_{focus}$  (3.13)

Because the distance shift in the spatial domain is only in the y direction, it is enough to take the Fourier transform of the window only in the y direction. After this, the transform can be multiplied with the filter, which is also defined in the frequency domain for only the vertical direction. After this the range-migrated window can be obtained by taking the inverse Fourier transform, again only in the vertical direction. The Fourier transform in the y direction of the window is as follows:

$\mathcal{W}_i = F_y\{W_i\} = [\, \tilde{w}_i \ \ \tilde{w}_{i+1} \ \cdots \ \tilde{w}_{i+W_p-1} \,]$  (3.14)

where the calligraphic letter indicates that the Fourier transform has been taken, and where it holds that:

$\tilde{w}_i[k] = \sum_{n=0}^{N-1} p^{zp}_i[n]\, e^{-j 2\pi k n / N}$  (3.15)

The filter can be obtained by using the fact that a distance shift in the spatial domain corresponds to a certain frequency-dependent phase shift in the (spatial) frequency domain. This property is shown in the following equations:

If $g(t) \ \overset{\mathcal{F}}{\longleftrightarrow}\ G(f)$  (3.16a)

then $g(t - \tau) \ \overset{\mathcal{F}}{\longleftrightarrow}\ G(f)\, e^{-j 2\pi \tau f}$  (3.16b)

In other words: a shift in the time (space) domain corresponds to a frequency-dependent phase shift in the (spatial) frequency domain. Since the required distance shift was given by equation 3.13, the filter H is given as follows, where x and y are the variables in the spatial domain and u and v are the corresponding variables in the spatial frequency domain:

$H(x, v) = \exp(j 2\pi\, \Delta d\, v) = \exp\!\left(j 2\pi \left(\sqrt{(x - x_0)^2 + y_{focus}^2} - y_{focus}\right) v\right)$  (3.17)

This is a space-continuous function. Therefore, before the filter can be applied to the windows, sampling is required. The spacing between samples can be determined using the function f (equation 3.2). The indices in the x and y direction will be called m and n respectively. The range-migrated window in the (x, v) domain is then:

$\mathcal{W}^{RM}_i[m, n] = \mathcal{W}_i[m, n]\, H[m, n]$  (3.18)

where RM stands for range-migrated. To obtain the range-migrated window in the (x, y) domain, the inverse Fourier transform in the vertical direction is taken:

$W^{RM}_i = F_y^{-1}\{\mathcal{W}^{RM}_i\}$  (3.19)

So if it holds that:

$W^{RM}_i = [\, w^{RM}_{i,0} \ \ w^{RM}_{i,1} \ \cdots \ w^{RM}_{i,W_p-1} \,]$  (3.20)

and:

$\mathcal{W}^{RM}_i = [\, \tilde{w}^{RM}_{i,0} \ \ \tilde{w}^{RM}_{i,1} \ \cdots \ \tilde{w}^{RM}_{i,W_p-1} \,]$  (3.21)

then, using the definition of the inverse DFT, it holds that:

$w^{RM}_{i,j}[n] = \frac{1}{N} \sum_{k=0}^{N-1} \tilde{w}^{RM}_{i,j}[k]\, e^{j 2\pi n k / N}$  (3.22)

Because $W^{RM}_i$ and $\mathcal{W}^{RM}_i$ do not have matching columns, two indices are required to distinguish them. This is caused by the applied filter.
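Appendix A.4 contains the range migration code used in the project. The following minimal sketch shows only the core of equations 3.14 to 3.19 for a single window; Wi, Wp, dx (the spacing between measurement positions), yfocus and Fs are assumed names and parameters, and the spatial frequency grid is one simple choice among several possible conventions:

% Minimal sketch of range migration for one N-by-Wp window Wi, focused at
% yfocus: FFT along the range (y) direction, multiply by the filter H of
% eq. 3.17 sampled on the (m, n) grid, inverse FFT (eqs. 3.18 and 3.19).
c   = 3e8;
N   = size(Wi, 1);
dy  = c/(2*Fs);                          % range distance per sample [m], from eq. 3.2
v   = (0:N-1).'/(N*dy);                  % sampled spatial frequencies along y [1/m]
xr  = ((0:Wp-1) - (Wp-1)/2)*dx;          % x relative to the window centre (x0 = 0)
dd  = sqrt(xr.^2 + yfocus^2) - yfocus;   % distance shift of eq. 3.13 (1-by-Wp)
H   = exp(1j*2*pi*(v*dd));               % filter of eq. 3.17, sign as in the text (N-by-Wp)
Wrm = ifft(fft(Wi, [], 1).*H, [], 1);    % range-migrated window, eq. 3.19

The result is complex-valued; for display or further processing the magnitude would typically be taken.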

3.5.3 Azimuth compression

The range migration step gives a set of range-migrated windows to the azimuth compression step. The windows that correspond to the places where objects are located will contain straight lines, centered around the middle of the window. This can also be understood by looking at figure 3.8. This set of windows has to be converted to an image that represents the actual scene. A simple way to do this conversion is to sum, for each window, the elements of every row. In this way every window is converted to a column of data, as represented in figure 3.9. The column that is created for a certain window belongs to the x value that corresponds to the middle of the window. So if the result R is written as:

$R = [\, r_1 \ \ r_2 \ \cdots \ r_M \,]$  (3.23)

then window W_i, centered around the x value of measurement position m_i, will give column r_i of the final result:

$r_i = \sum_{n=1}^{W_p} w^{RM}_{i,n}$  (3.24)

All range-migrated windows together can be used to produce the entire final result R.

Figure 3.9: The idea behind azimuth compression. Every window is used to create one column of the final result. This is done by summing all the elements along the rows of the window.

This method works because, for a certain window, the rows with straight lines correspond to objects. In these rows many elements with large values will be added together, leading to a high resulting value. In the rows where there is no straight line, there was no object, and the elements will add up to a relatively low resulting value. In this way the probability that there is a reflecting surface at a position is indeed represented.

3.5.4 Variable focus

In practice it is not desirable that only one focus distance can be chosen, because it causes objects in the scene that are further away from y_focus to not be imaged sharply. A solution to this problem is running the algorithm for a range of different y_focus, which will create a set of different results that each show another part of the scene in focus. So, for example, one of the results displays the objects around y_focus,1 best, another result displays the objects around y_focus,2 best, and so on. Next, for each of these results the part around the corresponding y_focus is selected, and all these parts are combined into one final result. This will ensure that all objects in the scene are shown in focus, even when they are at different y coordinates. This procedure is also shown in figure 3.10. The number of focus points used is an adjustable parameter. When more focus points are chosen, the quality of the final result will be higher, but the speed of the algorithm will be lower. Therefore it is important that a good trade-off between quality and speed is made.

Figure 3.10: The idea behind the variable focus. Many different results with different y_focus are created and the best parts from each are combined to form the final result.
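Appendix A.5 lists the project's full optical-algorithm code. A minimal sketch of azimuth compression with variable focus (equation 3.24) could look as follows; windows is an assumed 1-by-M cell array of N-by-Wp range-migration windows, rangeMigrate is a hypothetical helper wrapping the previous sketch, and the image rows are assumed to map linearly onto the focus distances between ymin and ymax:

% Minimal sketch of azimuth compression (eq. 3.24) with variable focus:
% one result band per focus distance, stitched together into the image R.
R     = zeros(N, M);
yfoc  = linspace(ymin, ymax, Nfoc);             % focus distances to use
edges = round(linspace(1, N + 1, Nfoc + 1));    % row bands, one per focus
for q = 1:Nfoc
    rows = edges(q):edges(q + 1) - 1;           % part of the image around yfoc(q)
    for i = 1:M
        Wrm = rangeMigrate(windows{i}, yfoc(q)); % hypothetical helper (see above)
        col = sum(Wrm, 2);                       % azimuth compression, eq. 3.24
        R(rows, i) = col(rows);                  % keep only the in-focus band
    end
end

The parameter Nfoc is the quality-versus-speed trade-off described above: each extra focus distance repeats the range migration for every window.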

3.5.5 Schematic representation

A block diagram that describes the optical algorithm is shown in figure 3.11. The functioning of the algorithm can be summarized shortly as follows: first the zero-padded, pulse-compressed data is converted into a set of windows W_i. Next the range migration is done on every window, using a filter in the frequency domain. After this, every range-migrated window is converted to a single column by applying azimuth compression. All these columns together form the final result R.

Figure 3.11: Summary of the optical method. The zero-padded pulse-compressed data P_zp is first divided into smaller overlapping windows W_i. Then range migration is applied to these windows individually. Azimuth compression ensures that the horizontal lines in a window are mapped to a single point. The resulting columns r_i are placed after each other, creating an image R.

3.6 Results

3.6.1 Simulator

To test the algorithms described in the previous sections a simulator was created. In an ideal case a single delta pulse is transmitted and the reflections of that pulse are recorded. With this simulator the received pulses from (multiple) point-like targets can be generated, and thus the functionality of the algorithms can be tested. This way these algorithms can be evaluated in a noise- and distortion-free environment. The working of the simulator follows these steps:

1. Generate point-like targets in the scene.
2. Generate the synthetic aperture (m).
3. Generate the received data (s_i) for each measurement point m_i, based on the beam angle (θ) of the antenna and the targets defined in the first step.
4. Apply the algorithm to be tested.

In the first step the (x, y) coordinates of N_target targets are generated. Here simple geometric objects should be thought of, such as line segments and circle segments. After that, the measurement locations are determined. For simplicity, the y coordinates of the measurements are taken as zero. In the following step the Cartesian distances are calculated between m_i and each target that lies inside the beam angle defined at m_i. The attenuation of the signal over this distance is accounted for. These distances are then converted to samples and the attenuated delta pulse is shifted accordingly. The simulator does not compensate for the fact that the closer a target is to the borders of the beam, the less power is received; i.e. the directivity of the simulated antenna resembles a square wave.
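A minimal sketch of these four steps is given below. All numbers are illustrative placeholders (they are not the values of table 3.1), and the inverse-square attenuation is an assumed model; the thesis only states that attenuation is accounted for:

% Minimal sketch of the simulator: ideal delta-pulse returns from point
% targets with assumed 1/d^2 amplitude decay and a square-wave beam pattern.
c = 3e8; Fs = 10e9; theta = deg2rad(60); N = 4096;  % assumed parameters
targets = [0.8 2.1; 1.6 2.1; 1.2 3.3];   % step 1: placeholder (x, y) per row
m = linspace(0, 2.5, 101);               % step 2: synthetic aperture, y = 0
S = zeros(N, numel(m));                  % step 3: received data per position
for i = 1:numel(m)
    for t = 1:size(targets, 1)
        dx = targets(t, 1) - m(i); dyt = targets(t, 2);
        if abs(atan2(dx, dyt)) <= theta/2          % inside the beam angle?
            n = round(2*hypot(dx, dyt)*Fs/c);      % f(d), nearest sample
            if n >= 1 && n <= N
                S(n, i) = S(n, i) + 1/hypot(dx, dyt)^2;  % attenuated delta pulse
            end
        end
    end
end
% step 4: feed S (already pulse compressed, as the pulse is a delta) to the
% algorithm under test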

In the following sections simulated results will be shown. Three point targets were simulated, whose coordinates are given in table 3.1.

Table 3.1: Coordinates of the simulated targets.

        Point target #1   Point target #2   Point target #3
x [m]   -                 -                 -
y [m]   2.1               2.1               3.3

This measurement setup will later be used in real-life measurements (see section 3.6.2). In figure 3.12 the received data S is depicted, which in this case equals P, as the transmitted (and thus the received) pulse is already compressed to a delta spike. Here the curves can be seen that result from the differences in the distances explained in section 3.5.1. This ideal data is then fed to the sum-and-delay algorithm, of which the results are shown in figure 3.13a, and to the backprojection algorithm, of which the results are shown in figure 3.13b. Both of these algorithms give practically the same result, as was expected, since both methods perform the same operations, but in a different order. The spots in the result where the values are highest, i.e. the spots where the probability of a reflecting surface is highest, correspond well with the positions where the point targets are actually located. It can also be seen that there are some sidelobes around these spots. These are inherent to SAR imaging and are caused by the fact that for a single radar measurement it is not exactly known where an object is within the antenna beam. When this is not known, a certain probability has to be allocated to every position within the antenna beam that is at the same distance from the antenna as the object.

The ideal simulated data was also given to the optical algorithm. When doing this, in the first instance the variable focus is turned off, to show the effect of this functionality. This means that only one y_focus is chosen. All objects that are approximately at this y will be displayed relatively sharply; other objects will be displayed increasingly less sharply, depending on how far they are from y_focus. This was first done for a focus distance of 2.1 m, the y coordinate of the closest two point targets. The result is shown in figure 3.15a. It can be seen that the two closest point targets are imaged sharply and that the point target in the back is out of focus. Next the algorithm was applied with a focus distance of 3.3 m, which is the y coordinate of the single point target furthest away. The result is shown in figure 3.15c. It can be seen that the single point target in the back is now imaged sharply and that the other two point targets are out of focus. Finally the algorithm was applied with variable focus. The result is shown in figure 3.15e. It can be seen that now all three targets are well visible. The result is similar to the results of the sum-and-delay and backprojection algorithms: again the point targets are mapped to the right locations and again some sidelobes are visible.

Figure 3.12: Simulation of 3 point-like targets.

Figure 3.13: The results of simulated and measured data of 3 point-like targets with the sum-and-delay and backprojection algorithms. All images given here are on the same scale and have the same axes. The top row shows simulated data, the bottom row shows the corresponding image with real measurements. In the left column the sum-and-delay algorithm was used, while in the right column the backprojection algorithm was used.

3.6.2 Results of real data

To test the imaging capabilities of the algorithms, the same scene with 3 point-like targets was used during the measurements as in the simulations (see table 3.1). This real-life scene is shown in figure 3.14. The setup that was used to move the antennas is described in [26] and the antennas that were used are described in [27]. The obtained data was first pulse compressed before the algorithms were applied. The result when the sum-and-delay algorithm was used is shown in figure 3.13c. The result when the backprojection algorithm was used is shown in figure 3.13d. Just as in the simulation, these two results are again practically the same. The cans are imaged clearly and can be distinguished well from the noise. The values corresponding to the cans in front are higher than those corresponding to the cans in the back. The reason for this is that the transmitted electromagnetic pulse has been attenuated less when it reaches the front two stacks; the received reflections that correspond to these stacks are stronger.

When testing the optical algorithm on this same simple measurement, again in the first instance the variable focus is turned off, to show the effect of this functionality. Figure 3.15b shows the result when y_focus = 2.1 m, the y coordinate of the two stacks of cans in front. As expected, these stacks are displayed sharply and the cans in the back of the scene are out of focus. Figure 3.15d shows the result when y_focus = 3.3 m, the y coordinate of the stack of cans in the back. As expected, the stack in the back is displayed sharply and the two stacks in front are out of focus. The optical algorithm with variable focus gives the result that is shown in figure 3.15f. In this result both the cans in front and the cans in the back are shown in focus. The result is similar to the results of the sum-and-delay and backprojection algorithms.

Apart from the measurement with three stacks of cans relatively far apart, a more complex measurement was also done, where the letters TU were spelled out with cans that are closer together. The measurement setup is shown in figure 3.16. In this case the optical algorithm was used to obtain the result, which is shown in figure 3.17. All the different stacks of cans can be distinguished well from the noise, even though some of them are behind other stacks. The reason that this is not a problem in this case is that measurements are taken from many positions, and all stacks are visible from at least some of these measurement positions.

Figure 3.14: The simple set-up used to initially test the algorithms.

Figure 3.15: The optical method with simulated and measured data. All images are shown at the same scale and with identical axes. The left column corresponds to the simulation and the right column to the measured data. The focus distance in the top row is 2.1 m and in the middle row 3.3 m; in the bottom row multiple focus points were used.

Figure 3.16: More complex set-up with multiple stacks of cans in the form of the letters TU.

Figure 3.17: More complex image with multiple point-like targets in TU-shape. The optical algorithm was used to create the image.

4 Application specific imaging algorithms

4.1. Introduction

The general goal of the bachelor graduation project was to implement the technique of SAR to achieve higher localization accuracy of cars and to obtain semi real-time information on the environment around roads, as was stated in section 1.4. The SAR-technique can be implemented by mounting the measurement antennas on a car: the movement of the antennas is then created automatically by the movement of the car. In this way a SAR-image of the area around the car can be made. Doing this offers two major improvements over the current system.

The first improvement is that the location of cars can be determined with much higher accuracy. In urban areas with many buildings that reflect signals, GPS alone is much less accurate. Especially when different roads are close together, this can lead to errors. This problem can be reduced by comparing the SAR-image made by the car with a previously collected database of similar SAR-images. When there is a match between the measured SAR-image and the database, the location of the car can be determined with much higher accuracy.

The second improvement is that changes in the road-system can be detected semi real-time. When something happens, for example a car accident that causes a car to be stalled on the road, this can be seen in the SAR-images created by passing cars. The database of SAR-images can be updated with this information, which can then be used to inform drivers heading in this direction that they should take another route.

These improvements are especially important for self-driving cars, where human insight is not available. If a self-driving car doesn't know accurately where it is located, navigation errors might occur, and faults in the localization might even lead to dangerous situations. It is also important that the system of self-driving cars knows in time that something has happened on the road, so that this can be taken into account.

The algorithms discussed up until now work only with stripmap-SAR. Cars in general do not drive only in straight lines, so the SAR-algorithms should be generalized to take this into account. First, the direction in which the antenna looks should be made variable, instead of only perpendicular to the direction of motion. Second, the positions $m_i$ from which the measurements are done should be generalized as well, since it is no longer given that the antennas move in a straight line. So instead of $m_i = (x_{m_i}, 0)$, it holds that $m_i = (x_{m_i}, y_{m_i})$. The algorithm where both the angle and the location are entirely variable will be called the free-path situation.

In practice, the location and measurement angle corresponding to the different measurements will be determined using logged information about the speed and the steering angle. Because these variables are only known up to a certain accuracy and because the errors are cumulative, the maximum size of an accurate SAR-image made by a car is limited. In principle, the larger the accurate SAR-image that can be made, the more material there is to compare and the more accurate the localization will be.
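The conversion from logged speed and steering angle to measurement positions is not part of the imaging algorithms themselves. Purely as an illustration, a minimal dead-reckoning sketch is given below; it assumes a kinematic bicycle model, and the inputs v, delta, dt and the wheelbase L are hypothetical, not taken from the thesis.

% v(k) : speed [m/s]; delta(k) : steering angle [rad], logged every dt [s]
% L    : wheelbase of the vehicle [m]
x = 0; y = 0; th = 0;                        % initial pose of the vehicle
xm = zeros(1, numel(v)); ym = xm; phi = xm;
for k = 1:numel(v)
    th = th + (v(k)/L)*tan(delta(k))*dt;     % heading from steering angle
    x  = x + v(k)*cos(th)*dt;                % integrate position
    y  = y + v(k)*sin(th)*dt;
    xm(k) = x; ym(k) = y; phi(k) = th;       % pose used for measurement m_k
end

Because each step adds a small error to the next, this integration is exactly the source of the cumulative errors mentioned above.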

4.2. Variable-angle SAR

The first step in making the SAR-algorithm more general is to make the direction in which the antenna looks variable. With this adjustment it is possible to compensate for the fact that the antenna might turn during the measurements, which is for example the case when a car turns while driving. It was chosen to make the adjustment to the backprojection algorithm, since this was the easiest to adapt. The main difference with the original version of the algorithm is that the pixels visible for a certain measurement no longer depend only on the x-coordinate, but on the x-coordinate and the antenna direction. The functionality of this change was tested with spotlight-SAR measurements, which turn the antenna in such a way that one point in the scene is kept in focus. The results obtained with these measurements are given in section 4.4.

4.3. Free-path SAR

The last extension required to make the SAR-algorithm generally applicable is that the measurement positions can have two coordinates instead of one, so that it is no longer necessary that all measurement locations lie along a straight line. Road-mapping applications need this addition as well, since the paths of cars can have a relatively arbitrary shape. The adjustment is made to the backprojection algorithm in which the angle was already made variable, meaning that the resulting algorithm can process data from arbitrary measurement locations with arbitrary antenna directions in a 2D-scene. Again the backprojection algorithm was chosen, since it is the easiest to adapt. The change in the algorithm is similar to the change required to make the antenna direction variable. The pixels that lie within the visibility of a measurement no longer depend only on the x-coordinate and the antenna direction, but on the x-coordinate, the y-coordinate and the antenna direction. This means that equation 3.5 can be rewritten as:

$$d_i(x_j, y_k) = \sqrt{(x_j - x_{m_i})^2 + (y_k - y_{m_i})^2} \qquad (4.1)$$

Tests of the new algorithm were done by moving the antennas along a curved path. The results of this test are shown in section 4.4.

4.4. Results

To test whether the generalized algorithm works correctly, several tests were done. To test the variable antenna direction, a spotlight-SAR measurement was done. In such a measurement the antenna still moves along a straight line, but the antenna direction is no longer necessarily orthogonal to the direction of motion. The antenna is rotated in such a way that it always looks at the same point in the scene; this is shown in figure 4.1. For this reason more information is acquired about the area around this point and less about the other areas in the scene, which causes the resulting image to have an area of focus.

Figure 4.1: Visual representation of measurements with variable angle. Here the antenna faces the target in all measurements while it moves along the x-axis.

To perform a spotlight-SAR test, a platform is required that can rotate the antennas. Elaboration on this platform is given in the thesis of the hardware-group [26]. A measurement in which two stacks of cans were placed around the point of focus was done to see whether they could be imaged correctly. The result of this test is given in figure 4.2. It can be seen that the two stacks of cans are brought into focus and can be distinguished. It can also be seen that everything behind the cans is more smeared out. This result indicates that the algorithm with variable angle works correctly.
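As an illustration of the free-path algorithm of section 4.3, a minimal MATLAB sketch based on equation 4.1 is given below, before the free-path measurement results are discussed. The variable names (P, xm, ym, phi, beamHalf, fs, xg, yg) are illustrative; the actual implementation is listed in appendix A.6.

% P      : pulse-compressed data [nSamples x M]
% xm, ym : measurement positions m_i [m]; phi : antenna directions [rad]
% xg, yg : pixel-centre coordinates of the image grid [m]
c   = 3e8;                                   % propagation speed [m/s]
img = zeros(numel(yg), numel(xg));
for i = 1:numel(xm)                          % loop over measurements
    for jx = 1:numel(xg)
        for ky = 1:numel(yg)
            dx  = xg(jx) - xm(i);
            dy  = yg(ky) - ym(i);
            ang = atan2(dy, dx) - phi(i);    % angle w.r.t. antenna axis
            ang = mod(ang + pi, 2*pi) - pi;  % wrap to [-pi, pi)
            if abs(ang) <= beamHalf          % pixel inside the antenna beam?
                d = hypot(dx, dy);           % distance, equation 4.1
                n = round(2*d/c*fs) + 1;     % two-way delay -> sample index
                if n <= size(P,1)
                    img(ky,jx) = img(ky,jx) + P(n,i);   % accumulate
                end
            end
        end
    end
end

Setting ym to zero and phi to a constant 90 degrees reduces this to the original stripmap backprojection, which shows that the generalization only changes how the visible pixels and the distances are determined.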

Figure 4.2: The result of measured data with variable angle. During the measurements two stacks of cans (similar to previous measurements) were placed at (1.8, 3.1) m and (2.0, 3.1) m.

To test the entire free-path algorithm, a test was done that resembles actual road-mapping applications, albeit on a slightly smaller scale. The antennas were placed on a manoeuvrable cart, which was then moved along a predetermined route through a room and a corridor. The lay-out of the building and the route that was driven are shown in figure 4.3a. In this case measurements were only done on the left side of the cart. In the real road-mapping application it would be desirable to make radar-scans on both sides of the car, but since measurements on one side can be extended directly to two sides, scans of one side suffice for testing purposes. The result obtained when the data of this test was given to the free-path algorithm is shown in figure 4.3b. The walls and doorways are visible in the result, so a map of the building is indeed created. The accuracy is, however, limited. It is suspected that this is caused by performing the test at a smaller scale, which makes constant errors relatively large. Another factor that could play a role is that the measurement positions and the corresponding antenna angles are only known up to a limited accuracy. When more accurate information on the steering angle and speed of vehicles is used and the test is done at a more realistic scale, the images should improve, making them easier to use for localization.

Figure 4.3: The result of the free-path measurement. (a) shows the route of the measurement with a schematic of the environment. The route consists of 4 linear sections marked between the crosses. In each section the antenna was set perpendicular to the movement direction. (b) shows the result of the free-path algorithm applied to the collected data. The route is shown as a white dashed line. The origin of the coordinate system was set to the first measurement, $m_1$.
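As an illustration of how such a piecewise-linear route with a side-looking antenna translates into the position and angle inputs of the free-path algorithm, a minimal sketch is given below. The waypoint list wp and the number of measurements per section nSec are hypothetical names, not taken from the thesis.

% wp   : (K+1) x 2 matrix of section end points [m] (here K = 4 sections)
% nSec : number of measurement positions per linear section
xm = []; ym = []; phi = [];
for k = 1:size(wp,1)-1
    t   = linspace(0, 1, nSec);                            % along section k
    xm  = [xm, wp(k,1) + t*(wp(k+1,1) - wp(k,1))];
    ym  = [ym, wp(k,2) + t*(wp(k+1,2) - wp(k,2))];
    a   = atan2(wp(k+1,2) - wp(k,2), wp(k+1,1) - wp(k,1)); % movement direction
    phi = [phi, repmat(a + pi/2, 1, nSec)];                % antenna perpendicular, left side
end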
