A Probabilistic Approach to Conceptual Sensor Modeling


A Probabilistic Approach to Conceptual Sensor Modeling

Master's thesis in Image Processing, carried out at the Swedish Defence Research Agency (Totalförsvarets forskningsinstitut) and Linköping Institute of Technology (Tekniska Högskolan i Linköping), by Mattias Sonesson. Reg nr: LiTH-ISY-EX-3428. Linköping 2004.


A Probabilistic Approach to Conceptual Sensor Modeling

Master's thesis in Image Processing, carried out at the Swedish Defence Research Agency (Totalförsvarets forskningsinstitut) and Linköping Institute of Technology (Tekniska Högskolan i Linköping), by Mattias Sonesson. Reg nr: LiTH-ISY-EX-3428.

Supervisor: Lars Forssell
Examiner: Klas Nordberg

Linköping, 18th May 2004


Avdelning, Institution (Division, Department): Institutionen för systemteknik, Linköping
Datum (Date): 2004-05-18
Språk (Language): Engelska/English
Rapporttyp (Report category): Examensarbete
ISRN: LITH-ISY-EX-3428

Titel (Title): A Probabilistic Approach to Conceptual Sensor Modeling
Författare (Author): Mattias Sonesson

Sammanfattning (Abstract):
This report develops a method for probabilistic conceptual sensor modeling. The idea is to generate probabilities for detection, recognition and identification based on a few simple factors. The focus lies on FLIR sensors and thermal radiation, even if other wavelength bands are also discussed. The model can be used as a whole, or one or several of its parts can be used to create a simpler model. The core of the model is based on the Johnson criteria, which uses resolution as the input parameter. Some extensions that model other factors are also implemented. Finally, the possibility of using this model for sensors other than FLIR is briefly discussed.

Nyckelord (Keywords): Johnson criteria, probabilistic, conceptual, sensor model

Abstract

This report develops a method for probabilistic conceptual sensor modeling. The idea is to generate probabilities for detection, recognition and identification based on a few simple factors. The focus lies on FLIR sensors and thermal radiation, even if other wavelength bands are also discussed. The model can be used as a whole, or one or several of its parts can be used to create a simpler model. The core of the model is based on the Johnson criteria, which uses resolution as the input parameter. Some extensions that model other factors are also implemented. Finally, the possibility of using this model for sensors other than FLIR is briefly discussed.

Keywords: Johnson criteria, probabilistic, conceptual, sensor model


Acknowledgment

This thesis was done at the Autonomous Systems department, Division of Systems Technology, at the Swedish Defence Research Agency, to whose staff I would like to express my gratitude. Especially I would like to thank my supervisor Lars Forssell, for showing me how things are done and boosting my self-esteem. Thanks to Leif Hagstedt for his help with practical details, computers and other things, and to Peter Strömbäck for cheering me up when motivation was low, and for software and many fruitful discussions. Finally, I express my gratitude to all the guys still fighting the battles in Rissnehallen.

Abbreviations and acronyms

C2NVEO    United States Army Command Center for Night Vision and Electro-Optics
CCD       Charge Coupled Device
Cycle     Two adjacent lines of pixels, used in target-to-probability transforms
FIR       Far Infra Red
FLIR      Forward Looking Infra Red
FOV       Field Of View
GPS       Global Positioning System
Hot spot  Strong signal source on a uniform background
LOS       Line Of Sight
LP        Line Pair, equivalent to cycle
LP/TGT    Line Pairs per target
NIR       Near Infra Red
OSIC      Optimal Sensor Integrated Control
RL        Reinforced Learning
SCR       Signal to Clutter Ratio
SNR       Signal to Noise Ratio
TIR       Thermal Infra Red
UAV       Unmanned Aerial Vehicle
UV        Ultra Violet

Contents

1 Introduction
  1.1 Background
  1.2 Objectives
  1.3 Limitations
  1.4 Outline

2 Modeling
  2.1 General concepts
  2.2 Classic sensor modeling
    2.2.1 Preliminary list of contributing factors
  2.3 Conceptual sensor modeling
  2.4 Hypothesis
    2.4.1 Decomposing the system
  2.5 Initial approach
  2.6 Noise is not always noise
  2.7 Background
    2.7.1 Detection
    2.7.2 Recognition
    2.7.3 Identification
  2.8 Refining the approach
    2.8.1 Necessary factors for detection
    2.8.2 Signature characteristics
    2.8.3 Decision making

3 System decomposition
  3.1 Size does matter
  3.2 Inverse square and cube laws
  3.3 Sampling
  3.4 History
  3.5 The Johnson criteria
  3.6 Extensions to the Johnson criteria
  3.7 Adding another factor to the equation
  3.8 Are the results only applicable for detection probabilities?
  3.9 Hot Spots
  3.10 Summary

4 Development of a model for the effects of noise
  4.1 Introduction
  4.2 Preparations
  4.3 Conducting the experiments
  4.4 Results
  4.5 Suggestions for further studies

5 Sensors
  5.1 TV/Video sensors
  5.2 History of IR sensors
  5.3 Reticles
  5.4 Pinhole camera
  5.5 Advanced camera model
  5.6 Rayleigh criteria
  5.7 Further enhancement of the sensor model

6 Signal generation
  6.1 Introduction
  6.2 Blackbody radiation
  6.3 Interaction with materials
  6.4 Optical windows
  6.5 Thermal variation
  6.6 Calculating the clutter
  6.7 Vegetation model
  6.8 Further enhancements

7 Atmosphere
  7.1 Introduction
  7.2 Scattering
  7.3 Absorption
  7.4 Merging absorption and scattering
  7.5 Rain
  7.6 Snow
  7.7 Dust
  7.8 Noise

8 Information and data fusion
  8.1 Introduction
  8.2 Different kinds of fusion
  8.3 Fusion modeling

9 Summary and conclusions
  9.1 Summary
  9.2 Discussion
  9.3 Conclusion
  9.4 Future work
  9.5 Other sensors
    9.5.1 Visual
    9.5.2 Radar
    9.5.3 Laser radar

List of Figures
  3.1 Principles of angles in the FOV
  3.2 A scene with a box
  3.3 Target to LP transformation
  3.4 Target equivalent bar patterns
  3.5 Visualization of the C2NVEO table
  3.6 Example of cells in a scene with two black targets
  3.7 Detection probabilities for different clutter levels
  4.1 Scenes used in acquisition experiments
  4.2 Examples of images used in the acquisition experiments
  5.1 Layout of pinhole camera
  6.1 The radiation from a blackbody of different temperatures
  7.1 An example of scattering rays
  8.1 Different fusion methods

List of Tables
  3.1 The Johnson table of cycles
  3.2 Resolution per level of acquisition
  3.3 The C2NVEO LP to probability transform
  3.4 Probability of detection in clutter
  3.5 Detection probabilities for different clutter levels P_det [15][11]
  6.1 Reflectance for some materials
  7.1 Extinction Coefficient of Atmospheric Obscurants (low altitude)
  7.2 Explanations to Table 7.1

Chapter 1  Introduction

1.1 Background

This work was done as a Master's thesis at the System Technology Division of the Swedish Defence Research Agency (FOI). The background of the thesis originates from the needs of two different research projects, Collaborating Missiles and Precision Engagement, conducted on behalf of the Swedish Armed Forces at the Autonomous Systems Department. Within the two projects, the need for models of different types of target seeking sensors has been foreseen, primarily focused on the image generating type, i.e. IR/TV sensors. Some of the future planned activities within the projects are research within the field of Optimal Sensor Integrated Control (OSIC), which can be described as designing controllers that enable the sensors to get an optimal view of the target, and studies on artificial intelligence algorithms like Reinforced Learning (RL). Common to all these activities is the need for an accurate sensor model, suitable for use in conjunction with dynamical models of different types of vehicles, in technical and/or tactical simulations emphasizing the benefits that are possible from a system perspective.

Physically validated models of target seekers are unfortunately very complex, detailed and computationally expensive, and therefore of little practical use in simulations on a higher system level. An alternative approach is the one used within the war gaming community, see [4], based on probabilistic modeling. This approach, however, does not have enough fidelity to be used in research on control systems for autonomous vehicles. There is therefore a need for a novel approach, conceptual in nature and based on physical insight into the detailed behavior of advanced image generating sensors and the signal processing associated with the target detection algorithms.

This master thesis is a first step in a development plan which aims at being able to model most of the existing types of sensors, both passive and active, in different frequency bands. The thesis is aimed at investigating the possibility of enhancing probabilistic sensor models in order to get increased accuracy without increasing the computational complexity associated with execution of the model.

1.2 Objectives

The objective of this thesis is to propose a methodology for developing conceptual models, and to produce a model for different kinds of sensors on a system level. Many models exist today, but almost all are very low level and based directly on the laws of physics. This means that they become very detailed and complex. Therefore it is vital that a new sensor model is produced that is not as detailed and complex as the near-physics models, yet not as blunt as the existing near-system models. Hopefully this IR sensor model can lead the way for a new package of models for passive sensors, ranging from UV (Ultra Violet, short-wavelength light) to IR (Infra Red, long-wavelength light), and for active sensors like radar and laser radar.

1.3 Limitations

The first attempt was to construct a model of a FLIR (Forward Looking Infra Red) sensor mounted on an aerial vessel looking down on stationary ground objects. But it soon turned out that the concepts used are valid for many other types of sensors, mostly other TV-like sensors, looking in other directions than down towards the ground. The model could also be used to look at objects in the sky from a ground-based observer (the opposite of what was initially planned). Only stationary objects are considered in this report, since it seems as if moving objects are always detected in IR. The recognition and identification phases do, on the other hand, not gain much information from the fact that an object is moving, and some suggestions are made on how to model moving objects.

Although all necessary factors might not be included in the model, sometimes intentionally, sometimes not, hopefully enough factors are included to make the model valid in context. The basic idea is to keep it simple, but still with a higher fidelity than the existing probabilistic sensor models used today. The intention is not to calculate all factors separately, but rather to show the possibility of merging factors into other descriptive units.

The entire model is based on human acquisition models. This is of course a limitation and a degrading factor for its validity. However, as stated below, the problem of modeling the signal processing algorithms leads to the choice of human acquisition as the foundation. Modeling sensors must start somewhere, and this might be it. Other models should be compared to this one in order for it to really reach its full potential. In that way, the conceptual model could be tuned (by changing a few parameters) to model certain sensors better.

1.4 Outline

The initial chapters analyze sensor modeling; in the following chapters the different properties are modeled. The basic rules of resolution should always be implemented. The validity of the model can then be increased by adding the effects of another chapter. The structure of the report is:

Chapter 2 (Modeling) is an introduction to modeling. Efforts are made to show some of the difficulties in modeling, as well as to set up the rules of the game and define the concept of conceptual modeling.

Chapter 3 (System decomposition) analyzes the capabilities of detection, recognition and identification, and how they are affected by the background.

Chapter 4 (Development of a model for the effects of noise) discusses and attempts to develop a model for the effects of noise in human acquisition. The attempted model is coherent with the models developed in earlier chapters. Detection, recognition and identification of objects are investigated through experiments, to complete the overall model.

The following chapters try to analyze the physics involved in the sensor, and draw conclusions about what to model and how.

Chapter 5 (Sensors) starts calculating one of the descriptive features discussed in Chapter 3, namely the resolution. This is the basic part that should be implemented in any model.

Chapter 6 (Signal generation) continues with the descriptive features. The signal strength and some geometric properties of the object are modeled and calculated in the signal to clutter ratio. This chapter can be included to build a more sophisticated model.

Chapter 7 (Atmosphere) constructs a model for how the signal is affected by the atmosphere. Rain, fog and snow are some of the situations modeled. To complete the model, the effects of the atmosphere should be implemented.

Chapter 8 (Information and data fusion) suggests a model for data and information fusion that could be used for multiple sensors.

Chapter 9 (Summary and conclusions) takes the experiences and summarizes them in an attempt to build a conceptual sensor model. A discussion of how to model other sensors completes the chapter.


Chapter 2  Modeling

This chapter contains some basics of modeling and introduces an initial approach for the conceptual sensor model.

2.1 General concepts

To make a model of a physical object one has to divide it into subsystems or smaller parts, for example shape and function. Each of these smaller parts can in themselves be considered as objects to model, and it is therefore possible to divide them further into smaller parts. At some level it is no longer meaningful to divide the parts into smaller units. At this stage it is time to concentrate on the function of each part.

The models based directly on the laws of physics, the near-physics models, are constructed in this way. First a geometrical model is made, in an artificial 3D environment. To make it useful, some kind of discrimination is made to the geometrical model, and this is where the real modeling starts. Objects that are by some standard considered to be unimportant are left out; this might for instance be the inside of a car, or details too small to make a difference. What can be considered unimportant is of course dependent on the situation; naturally a more detailed description of the real world can be more exact than a crude model. At some level it is not meaningful to increase the complexity of the model, because it does not add any extra information. This is dependent on the governing algorithms of the simulation.

A simulation of the trajectory of free-falling spherical objects can for instance be described by the shape (spherical with radius r), the mass (m), the material (wood, rubber, plastic, ...), the color and so on. But as Galileo stated, objects of different mass fall with the same speed. So the mass can be ignored in our simulation, as well as the color. Does the shape of the object matter? Well, yes! Different shapes make the air stream behave differently around the object. But does this affect our example simulation? It depends on whether the air is a part of the model. Is the governing algorithm given by Eq. (2.1)? In that case the air is not a part of the system, and therefore the shape does not matter. Eq. (2.1) limits the necessary factors to the initial height, gravity and time (h_0, g, t).

    h(t) = h_0 - \frac{g t^2}{2}          (2.1)

Another model might however take the air stream into account, and give a better result; then the shape of the object is of interest. In designing a model of a physical object, the overall question is for what purpose it should be used.
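To make the example concrete, here is a minimal sketch of Eq. (2.1) in Python (the thesis itself contains no code; the function name is ours):

```python
def height(t, h0, g=9.81):
    """Drag-free fall, Eq. (2.1): h(t) = h0 - g*t^2/2.

    Only initial height, gravity and time enter the model; the mass,
    shape and color of the falling object are deliberately absent.
    """
    return h0 - 0.5 * g * t**2

# The same answer for a wooden and a rubber sphere of any radius:
print(height(1.5, h0=20.0))  # about 8.96 m
```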

2.2 Classic sensor modeling

Traditionally, sensor modeling has been made either very simple or very complex. Sensors have been modeled to determine, from real data, whether an object is present and what it is. One way is to make use of the real signal processing algorithms and try to create an artificial input signal that resembles the real world as closely as possible. This is what we call near-physics modeling. Creating the signal can be done by some kind of ray tracing, or an equivalent technique, for every frame. The frames can then be joined to form a sequence that can be used as an input signal, or the frames can be used individually. This is a very good technique for trying out new signal processing algorithms and for validating a system. The problem is that the details in the input signal that are important for testing the performance of the algorithms are expensive to produce, in terms of time and computational complexity.

For simulations where the signal processing algorithms are not to be tested, the demand for speed is crucial. In simulation of cooperative components, such as the sensors and flight mechanics of a UAV, the important thing is not whether a certain object, say a landing field, is determined exactly right at every given moment. What is interesting might be whether it is possible at all to see the landing strip, and approximately when it is detected. A simpler form of sensor modeling can then be used. Such simpler sensor models are used today, but they are too simple. In fact some known sensor models are so simple that they check only the distance to the target: when the distance to the target is smaller than the operating radius of the device, the target is detected. All objects within this limit will be found. This model is useful for some purposes, but it certainly has a lot of disadvantages.

2.2.1 Preliminary list of contributing factors

Deciding which factors should be included in the model is a large part of the work. A first glimpse of the problem of modeling forward looking sensors of TV type, and a brainstorming phase, will give some ideas of what factors could be used. Some of these factors are:

- range to target
- humidity
- attitude of sensor
- light conditions
- sun reflexes
- line of sight to target
- target temperature
- surrounding temperature (air, ground)
- weather
- area of target
- sensor specifics
- ...

Soon there are so many parameters in the equation that it starts to get out of hand. All of these are important, as are many more. But how should they be combined? Calculating rays and making a pseudo image requires a signal processing algorithm to draw conclusions from it. This problem will be addressed later.

2.3 Conceptual sensor modeling

Conceptual modeling is more about trying to fake a system that operates in approximately the same way as the real system, but is a lot cheaper in terms of time and complexity. Conceptual modeling is to find the essence of a system, and model it with as few parameters as possible. The output of such a sensor model is not the answer to a question with a direct answer; instead it tries to give a probability of whether the question could be answered. The decisions are left to a higher level.

2.4 Hypothesis

What is interesting in a signal? How can this interest be quantified? From information theory we know [14] that information is defined as the deviation from a known behavior. How can the information in a scene be quantified in a way such that conclusions about detection, recognition and identification can be drawn? It is time to start thinking in new patterns. How does the interesting part of a signal differ from the rest of the signal? Can that be measured and modeled?

2.4.1 Decomposing the system

An estimate of how well a system works depends not only on certain physical factors, but also on what kind of algorithm is used for the signal processing. This leads up to a scenario where not only physical factors but also algorithms must be evaluated. If it is possible to determine some kind of measurement of how well an algorithm works from some basic descriptive factors, for instance the SNR (signal to noise ratio) and some other kind of signature characteristics, the sensor model could be reorganized. A suggestion for a conceptual model could then be

    { range, humidity, ... }  ==>  { SNR, signature characteristics }  ==>  Probability

where the second arrow also uses the characteristics of the detection algorithm, and where the implication arrows would be the part to concentrate on. The first arrow represents a transformation of the physical properties into some descriptive factors. The second arrow is the calculation of the probability based on the descriptive factors and the properties of a detection algorithm. But what are signature characteristics, and how should they be measured and modeled?

2.5 Initial approach

To start with, the signature characteristics are ignored and all efforts are concentrated on deciding the SNR. But what is SNR? The signal to noise ratio is the quotient between the energy of the signal (S) and the noise (n_1), see Eq. (2.3). The signal is defined as in Eq. (2.2), where x and y represent the pixel coordinates in the image. Usually the SNR is measured in decibel, a logarithmic scale.

    S(x, y) = o(x, y) + n_1(x, y)          (2.2)

    SNR = 10 \log \frac{S^2}{n_1^2}          (2.3)
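As an illustration of Eqs. (2.2)-(2.3), the sketch below computes an SNR in decibels from an object image and a noise image. This is an assumed NumPy reading of the definitions (interpreting S^2 and n_1^2 as summed pixel energies), not code from the thesis:

```python
import numpy as np

def snr_db(o, n1):
    """Signal to noise ratio in decibels, following Eq. (2.3).

    o  : array holding the object part of the signal, o(x, y)
    n1 : array holding the noise part, n1(x, y)
    """
    s = o.astype(float) + n1.astype(float)        # Eq. (2.2): S = o + n1
    return 10.0 * np.log10(np.sum(s**2) / np.sum(n1.astype(float)**2))

rng = np.random.default_rng(0)
o = np.zeros((64, 64)); o[28:36, 28:36] = 5.0     # a small bright object
n1 = rng.normal(0.0, 1.0, o.shape)                # temporal Gaussian noise
print(f"SNR = {snr_db(o, n1):.1f} dB")
```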

2.6 Noise is not always noise

In Eq. (2.2) the signal is divided into two parts: one part containing the information we are looking for (the object) and the other containing the rest (the noise). This could be modeled as suggested in section 2.4.1, but a more thorough investigation of the noise property could be worthwhile. In [15] a distinction between clutter and noise is made. It is stated that temporal noise is stationary, ergodic and Gaussian, as well as time dependent. Clutter on the other hand is non-stationary, non-ergodic and non-Gaussian. The n_1(x, y) term is made from the static background and surroundings of the object of interest, as well as a distortion term of both the signal and clutter. In principle the object signal (o) has no values where the clutter is defined, and vice versa. Based on the statements above, let us define noise as temporal and clutter as spatial. Different measures can be used to eliminate their effects. This leads to a modified version of Eq. (2.2), where o(x, y) still is the signal of interest, c(x, y) is the clutter and n(x, y) is the noise.

    S(x, y) = o(x, y) + c(x, y) + n(x, y)          (2.4)

2.7 Background

In the specification, a model based on three independent probabilities of target acquisition was mentioned; the idea is that these can relatively easily be estimated one by one and joined together, as can be seen in Eq. (2.5).

    P_{effect} = P_{detection} \cdot P_{recognition} \cdot P_{identification}          (2.5)

2.7.1 Detection

The general principle of detection is to determine if there are any objects in the scene that can be separated from the noise and clutter. If there are, these will be investigated further in terms of recognition and identification.

2.7.2 Recognition

If the detection phase indicates that there is something in the scene, it is now up to the recognition (or classification) stage to determine what it is. Is it an aeroplane, a car, a house? Here the category of the object should be determined.

2.7.3 Identification

Identification is the process of determining the object to the limit of the system's knowledge, a refinement of the recognition. Then we also know whether the object detected and classified is a friend or an enemy. This stage is highly dependent on the recognition.
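Eq. (2.5) amounts to a one-line combination of the three stage probabilities, sketched below for illustration (the names are ours):

```python
def p_effect(p_detection, p_recognition, p_identification):
    # Eq. (2.5): the three acquisition stages are treated as
    # independent, so their probabilities simply multiply.
    return p_detection * p_recognition * p_identification

# A target that is easy to detect but hard to identify:
print(p_effect(0.95, 0.70, 0.40))  # 0.266
```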

2.8 Refining the approach

In section 2.4.1 a general layout was presented, and it was refined in Eq. (2.4). Now it is time to refine that model further by dividing it into three different parts, one for each probability. Some of the factors that contribute to one of the probabilities might not be interesting for the others.

2.8.1 Necessary factors for detection

What is relevant for detection of an object in a scene? Some might say colors, others are perfectly happy with a gray scale image, and yet others want clearly visible edges; but what is more important is the energy of the signal relative to the noise. The most important factor is whether the object is above the noise in the signal or not. What is the SNR of the signal, 40 dB or -1 dB? What is the minimum difference before an algorithm can decide whether there is something in a scene or not? The answer is that it depends, for instance on whether it is a single image or a sequence of images.

But this is not enough. The resolution is of utter importance. It might be obvious that you have to see the object before you can find it, but when you are modeling a sensor, that has to be taken into account. If the object produces a strong signal compared to the rest of the scene, it might gain a good SNR even if it is only one pixel large. From that pixel it is nearly impossible to decide anything; it might just be a deviation in the CCD array, or a neutrino decay changing a part of the memory. But if it is seen for a period of time and determined not to be a deviation in the CCD, it can be recognized as a detected object. A larger (or a closer) object will of course be easier to detect, even from one single image. The conclusion is that if the object is small, the energy from the object needs to be higher; if the object is bigger, a smaller SNR can be accepted and still produce a good result.

2.8.2 Signature characteristics

The possibility to classify an object is also dependent on the SNR as well as the resolution, or the size of the object measured in pixels. The requirements on the resolution (size) are stronger: a single pixel cannot lead to the classification of an object, however strong it might be. What is then the minimum size of the object, measured in pixels? That should depend on the object's physical size compared to its size in pixels, the resolution.

In image and signal processing, different characteristics of the signal are calculated in order to classify an object. Different distinctive features can be calculated in order to determine the properties of an object. Below, a few common algorithms are presented; some of them are dependent on the distance to the object they are evaluating and some are not. The idea is that perhaps some of these

features can be used to help model a sensor or a signal processing algorithm; for further information on these and other algorithms see for example [1], [2] and [7].

If a rectangle is put around the object, the smallest possible rectangle still containing the object, the long side of the rectangle is called d_max and the short side d_min:

    \frac{d_{min}}{d_{max}}          (2.6)

The standard deviation over the target silhouette is another measurement, where σ_I is the standard deviation of the intensity of the target silhouette and I_mean is the mean value of the target area (defined in Eq. (2.11)):

    \frac{\sigma_I}{I_{mean}}          (2.7)

To determine the shape of the object, the relationship between the area of the object and the perimeter can be translated to a specific measurement, where A is the area of the object, P the perimeter, and Z(A) a normalization factor dependent on the area:

    \frac{A}{P^2 Z(A)}          (2.8)

The mean edge strength can be measured as below, where E_k = 1 if pixel k is declared as an edge and 0 if not, and k is defined in the target region:

    \frac{\sum_k E_k}{A}          (2.9)

Yet another descriptive factor is the FORM factor [2]; this one is not dependent on the resolution and calculates the ratio between the area and the mean shortest distance to the perimeter, d̄:

    \frac{A}{\bar{d}^{\,2}}          (2.10)

Some of the feature measurements are dependent on the distance to the object. For instance the mean intensity, where A is measured in number of pixels, is very sensitive to absolute intensity:

    I_{mean} = \frac{\sum_k I_k}{A}          (2.11)

As mentioned earlier, the ratio between object signal and noise is interesting. The SNR has been defined earlier, but could also be defined more appropriately to the situation, as below. B_median is the median of the surrounding background and σ denotes the corresponding variances. This quantity relates the object's intensity, and its fluctuations in intensity, to the background, and will later on be defined as the clutter:

    \frac{I_{mean} - B_{median}}{\sigma_I^2 + \sigma_B^2 + \epsilon}          (2.12)

The two quantities directly dependent on the distance (not possible to use without exact knowledge of the distance) are d_max (as defined in Eq. (2.6)) and the area, measured in pixels.

2.8.3 Decision making

The quantities defined above are calculated and the results are used, one by one as well as together, and compared to existing information on targets in a database or information central. This database or information central can for instance be made up of trained back-propagation neural networks, or of comparisons with a database where the closest (by some standard) target entry is chosen as candidate, if it is close enough. Bayes' rule can be used as one of many possible algorithms in the determination. Here, in the conceptual model, we have prior information about the object. In fact we know exactly what it is. The model will use the probability of a correct detection, recognition and identification of an object. No incorrect classifications will be made by the system. There are examples in the literature [3] of false target detections, but these are not considered further in this report.
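As a concrete illustration of the feature measurements in section 2.8.2, the sketch below computes Eqs. (2.6), (2.7), (2.11) and (2.12) from an intensity image and a silhouette mask. It is our own simplified reading of the definitions, not the algorithms of [1], [2] or [7], and all names are assumptions:

```python
import numpy as np

def signature_features(image, mask, eps=1e-6):
    """A few of the measures from section 2.8.2 for a single target.

    image : gray-scale intensity image
    mask  : boolean array, True on the target silhouette
    """
    ys, xs = np.nonzero(mask)
    side_x = xs.max() - xs.min() + 1                 # bounding-box sides
    side_y = ys.max() - ys.min() + 1
    d_min, d_max = min(side_x, side_y), max(side_x, side_y)

    target = image[mask].astype(float)
    i_mean = target.mean()                           # Eq. (2.11)

    background = image[~mask].astype(float)
    b_median = np.median(background)
    # Eq. (2.12): target-to-background contrast, normalized by the
    # intensity fluctuations of target and background.
    contrast = (i_mean - b_median) / (target.var() + background.var() + eps)

    return {"d_min/d_max": d_min / d_max,            # Eq. (2.6)
            "sigma_I/I_mean": target.std() / i_mean, # Eq. (2.7)
            "area_px": int(mask.sum()),
            "snr_like": contrast}
```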

Chapter 3  System decomposition

In the first part of this chapter a model for describing the signal emitted by the objects in the FOV (field of view), S(x, y), is presented. In the latter part of the chapter it is extended to take the clutter into consideration.

3.1 Size does matter

A truck standing a few meters in front of you on a sunny day is not a very difficult object to detect. As the truck is moved further away it becomes increasingly difficult to detect, and at some point the truck will no longer be possible to see. If the truck were doubled in size, it would be possible to see it at an even greater distance. We can therefore conclude that size and distance are important factors when detecting an object.

A digital camera working with a two-dimensional array of detector elements will depict the truck reasonably correctly at short distance. As the distance from the camera to the truck grows, the truck will be depicted with fewer and fewer picture elements (pixels). At some point there will be only one pixel representing the entire truck. If it is moved even further away, only half the pixel will describe the truck; the other half will describe the surroundings of the truck, and the two halves will be weighted together to produce an output. It is now not possible to determine what it is the pixel is representing. But first let us take a few steps back.

3.2 Inverse square and cube laws

The resolution of a camera (measured in pixels) is constant, independently of what frequency range is used, but the size of the FOV projected onto the image plane in the camera will depend on the distance to the objects in the FOV. If the angular size of the FOV is held constant, the depicted scene will increase in size

with the square of the distance to the observed objects. This leads to the fact that the resolution of an object decreases with the square of the distance.

Consider an airborne sensor at height h over the ground and at distance r from a scene (as seen in Fig. 3.1). Let the object of interest be represented by a box of such a size that the object is circumscribed, with area ab = A_T. If the FOV, 2α × 2β, is small, then it is possible to write Eq. (3.1) by considering the similar triangles in Fig. 3.1.

Figure 3.1. Principles of angles in the FOV.

    \frac{h}{s} = \frac{c}{a}          (3.1)

    2\alpha = \arcsin\left(\frac{c}{s}\right) \approx \frac{c}{s}          (3.2)

and similarly for β:

    2\beta = \arcsin\left(\frac{b}{s}\right) \approx \frac{b}{s}          (3.3)

The solid angle covering the object is approximately given by

    \phi = \frac{cb}{4\pi s^2} = \frac{s2\alpha \cdot s2\beta}{4\pi s^2} = \frac{\alpha\beta}{\pi}          (3.4)

From Eqs. (3.1)-(3.3) it follows that

    \phi = \frac{1}{\pi}\frac{c}{s}\frac{b}{s} = \frac{hA_T}{\pi s^3} = \frac{hA_T}{\pi (h^2 + r^2)^{3/2}}          (3.5)

Furthermore, if the probability of detection is proportional to the solid angle, as suggested by [13], we get

    P_{det} = k_1 \frac{hA_T}{\pi (h^2 + r^2)^{3/2}}          (3.6)

Here k_1 represents all the other factors that have an influence on the probability of detection, besides resolution. The question is, how can it be determined?
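A small sketch of Eq. (3.6) follows; it shows the inverse cube behaviour for r much larger than h. Illustrative Python with k_1 left as a free constant, as in the text:

```python
import numpy as np

def p_det(h, r, a_t, k1=1.0):
    """Eq. (3.6): detection probability proportional to the solid angle
    subtended by a target of area a_t, seen from height h at ground
    distance r. k1 gathers all factors other than resolution."""
    return k1 * h * a_t / (np.pi * (h**2 + r**2) ** 1.5)

# Doubling the ground distance cuts the value by roughly 2^3 = 8:
for r in (1000.0, 2000.0, 4000.0):
    print(f"r = {r:6.0f} m : {p_det(h=100.0, r=r, a_t=10.0):.3e}")
```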

Figure 3.2. A scene with a box; the circumscribing rectangle has sides A_x and A_y.

For r ≫ h the probability is approximately proportional to the inverse cube of r, and for h ≫ r it is approximately proportional to the inverse square of h. It is easily understood that the resolution of the sensor is of great importance. The proportionality constant k_1 should therefore depend on the resolution of the sensor array, i.e. the n × m pixels. Let us define the resolution ρ as the number of pixels per meter in the scene, or the angular resolution as pixels per radian. The resolution can then be defined as in Eq. (3.7).

    \rho = \rho_x \rho_y = \frac{n}{c}\frac{m}{b}          (3.7)

    k_1 = k_2 \rho = k_2 \frac{nm}{cb}          (3.8)

Now let the FOV grow so that it covers more than the object of interest in the scene (as seen in Fig. 3.2), i.e. 2sα · 2sβ = cb > A_T = A_x A_y:

    P_{det} = k_1 \frac{hA_T}{\pi (h^2 + r^2)^{3/2}} = k_2 \frac{nm}{s2\alpha \cdot s2\beta} \frac{hA_T}{\pi (h^2 + r^2)^{3/2}}          (3.9)

            = k_2 \frac{nmA_T}{4\alpha\beta} \frac{h}{\pi (h^2 + r^2)^{5/2}}          (3.10)

All objects with the same area A_T will not be detected with the same probability. Some objects may be thin and long, so that they do not cover a whole pixel in height; compared to a square object of the same area they will probably be harder to detect. Hence the area of the object (as seen by the sensor) is not the most important factor; the shape of the object also influences the proportionality constant. There are a few different ways of measuring form, as mentioned in section 2.8.2 (P2A or FORM for instance), see [2]. Another, easier, way would be to regard only the smallest side of the object towards the sensor. Let a rectangle circumscribe the object, as shown in Fig. 3.2. The largest side of the rectangle corresponds to the most easily detected dimension, A_y, but this alone will not be a true representation of the object's shape. Let instead the smallest side of the rectangle, A_x, be the decision factor to use.

For most types of objects the smallest side of a circumscribing rectangle will be a good measure; however, for extremely concave objects this might not be the best solution. Ways of correcting this include letting parts of the object fall outside the rectangle, and letting the rectangle be placed within the object. With the smallest side of the rectangle represented by A_x, Eq. (3.9) then becomes

    P_{det} = k_2 \frac{nA_x}{2\alpha (h^2 + r^2)^{1/2}} \cdot \frac{mA_y}{2\beta (h^2 + r^2)^{1/2}} \cdot \frac{h}{\pi (h^2 + r^2)^{1/2}}          (3.11)

It is here assumed that the y-dimension is much more easily detected, and that the difficulty in fact lies in detecting the x-dimension. Note that it is the sensor's angular resolution that is actually regarded.

3.3 Sampling

According to the sampling theorem, a signal should be band limited in order for the sampled signal to be accurately reconstructed. The bandwidth has to be limited to at most half the sample frequency (f_s); that is, -f_s/2 ≤ f ≤ f_s/2 is required, otherwise a reduction of information will be the result. An image can be viewed as a two-dimensional signal. When taking a photo, a sampling of the world is done. For the sampled image, spatial frequencies can be calculated (compare with the frequencies of an audio signal, which can be said to be temporal), and they do not differ in any significant way from ordinary (temporal) frequencies. Since no band limiting filters can be used (band limiting filters can be applied, but require sets of gratings, lenses and apertures), a camera will never satisfy the conditions of the sampling theorem. Aliasing will be the result, which is the same thing as what happened when the truck and its surroundings were represented by only one pixel (in the example above). All details too small to be noticed in the sampled reality (the image) will still be represented by the pixel, averaged with the rest of the details not noticeable.

When the truck is moved further away, each pixel will cover a larger area of the truck, corresponding to a lower sample rate. This effectively corresponds to a low pass filtering of the image, since the sampling is performed with rectangular functions. When an image is low pass filtered, details tend to disappear, making it harder to recognize the depicted object. Even if it is possible to detect an object in the image, it might still be difficult to determine what it is, i.e. classify it. Recognition (or classification) can be seen as a special case of detection: the ability to detect whether an object is a member of group A or B. The same goes for identification.
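The rectangular-function sampling described above can be demonstrated with a simple block average, as in the following sketch (an illustration of the argument, not a camera model; all names are ours):

```python
import numpy as np

def box_downsample(image, factor):
    """Resample an image by averaging factor-by-factor pixel blocks.

    This mimics a detector array: every output pixel is the mean of a
    rectangular patch, i.e. a crude low pass filter followed by
    decimation, which is why fine detail disappears."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

stripes = np.zeros((64, 64)); stripes[:, ::2] = 1.0  # detail of period 2 px
coarse = box_downsample(stripes, 4)
print(coarse.min(), coarse.max())                    # both 0.5: detail gone
```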

3.4 History

During the Second World War a new discipline in mathematics was founded: Operations Research. To take advantage of Operations Research during and after the war, mathematical models for soldiers, tanks and army divisions were needed, and also developed. Questions like "When will a ship be spotted?" needed a mathematical answer. One of the first who tried to determine statistical detection models of targets was H.R. Blackwell. In an article from 1946 he presented a study of circular targets on uniform backgrounds. He tried to find factors like contrast, size and brightness to describe the possibility of target acquisition. In the research he let 19 women make more than 200,000 observations. Later on, Blackwell and Taylor determined contrast limits for acquisition.

3.5 The Johnson criteria

In 1958 John Johnson presented a method for determining whether an object would be detected when using image intensifiers [10]. Johnson transformed the objects to be represented by bar patterns. Each object, its distance, and its orientation was represented by the number of black and white bars (a pair of one black and one white bar is named a cycle) covering the object's minimum dimension towards the viewer. The size of the bars is a result of the resolution of the image enhancer, the distance to the object, and the size of the object.

Figure 3.3. Illustration of target to LP transformation. φ is the angular size of one pixel-row (0.5 LP); θ is the angular size of the target's (maximum) dimension. The object is 2.5 pixel-rows high, which transforms to 1.25 LP/TGT.

Every line of pixels can be said to have an angular spreading, and that spreading in its turn can be represented by a size at some distance from the observer, close to or over the object. Johnson defined the resolution of the device used as the width of two adjacent lines, called a cycle. He then compared the angular size of the object's minimum dimension (towards the viewer) with the angular size of one cycle. This way he got the equivalent of the object, as seen by the observer, in cycles. By letting several

observers view scenes containing a bar-target object, a table could be made with the number of cycles necessary for detection, orientation, recognition and identification, see Table 3.1.

Figure 3.4. Classic figure from [10] describing target equivalent bar patterns.

Table 3.1. The Johnson table of cycles, as presented in [10]. Resolution per minimum dimension (broadside view) for the targets truck, M48 tank, Stalin tank, Centurion tank, half-track, jeep, command car, soldier (standing) and howitzer; the average over all targets is 1.0 ± 0.25 cycles for detection, 1.4 ± 0.35 for orientation, 4.0 ± 0.8 for recognition and 6.4 ± 1.5 for identification.

The term cycle seems to have been replaced, more and more, by the equivalent term line pair (LP; one cycle or LP is two adjacent rows of pixels). The conclusion is that everything needed to determine whether an object is visible, and to what extent, is its angular size as seen from the observer. But what is the output of Table 3.1? In fact it is not clearly stated in the article, but it is considered to be the measure of the 50% probability of a correct decision, as mentioned in [17].
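The target-to-cycles transformation is straightforward to sketch: compare the angle subtended by the target's minimum dimension with the angle of one cycle (two pixel rows). Illustrative Python with assumed names; the thesis itself defines no such function:

```python
def lp_per_target(min_dimension, distance, pixel_ifov):
    """Cycles (line pairs) across the target's minimum dimension.

    min_dimension : smallest target side towards the viewer [m]
    distance      : range from sensor to target [m]
    pixel_ifov    : angular size of one pixel row [rad]

    One cycle is two adjacent pixel rows, i.e. 2 * pixel_ifov."""
    target_angle = min_dimension / distance   # small-angle approximation
    return target_angle / (2.0 * pixel_ifov)

# A 2.3 m high vehicle at 4 km range, seen with 0.1 mrad pixels:
print(lp_per_target(2.3, 4000.0, 0.1e-3))     # about 2.9 LP/TGT
```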

Table 3.2. Resolution per level of acquisition according to [10].

                              Detection   Orientation   Recognition   Identification
Average number of cycles      1.0         1.4           4.0           6.4
± 25%                         ± 0.25      ± 0.35        ± 1.00        ± 1.60
Ratio compared to detection   1           1.4           4             6.4

Table 3.3. C2NVEO's version of the Johnson criteria, from [17]: detection, recognition and identification probabilities as a function of the number of resolvable lines across the target.

As can be seen in Table 3.1, no target diverges substantially from the average in each category. All targets are within 25% of the average, and we can conclude that the average in each category is a good approximation of the decision boundaries. The result with respect to resolution states, rather obviously, that to classify an object we need higher resolution than to merely detect it. In fact the resolution has to be four times higher to classify, and six times higher to identify, an object than to detect it, see Table 3.2.

3.6 Extensions to the Johnson criteria

The Johnson criteria is, in its simple form, very good for transforming an object into a probability. However, it can only give the 50% limit; a better model that can give probabilities in a few more ranges would be even more useful. From C2NVEO (U.S. Army Command Center for Night Vision and Electro-Optics) a more advanced model can be found, see Table 3.3, with LP/TGT levels for different probabilities. It is stated in [5], according to [11], that the values from Johnson are pessimistic, and we will see later on that others have come to the same conclusion. According to [15], the initial (Johnson's) performance measurements were conducted in significant clutter. What that means is unclear. Noticeable is that today SNR and SCR (signal to clutter ratio) are considered to be important factors in the decisions.
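Using the average cycle counts of Table 3.2, a Johnson-style 50% lookup can be sketched as follows (the dictionary values come from Table 3.2; everything else is our own naming):

```python
# Average cycles needed for a 50% probability of a correct decision,
# per acquisition level (Table 3.2).
N50 = {"detection": 1.0, "orientation": 1.4,
       "recognition": 4.0, "identification": 6.4}

def levels_at_50(lp_per_tgt):
    """Acquisition levels whose 50% criterion is met at this resolution."""
    return [level for level, n in N50.items() if lp_per_tgt >= n]

print(levels_at_50(2.9))   # ['detection', 'orientation']
print(levels_at_50(7.0))   # all four levels
```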

Figure 3.5. Visualization of the C2NVEO table: probability of detection, recognition and identification as a function of line pairs over the minimum dimension.

3.7 Adding another factor to the equation

It can be stated that the lack of understanding of how noise and clutter affect the Johnson-like methods is considered a deficiency. Some research has been done to investigate what role clutter plays in detection performance. If clutter is to work as an input parameter, we have to define a mathematical measure for it. According to [15] there had been some attempts to define clutter, but nothing that really fitted the Johnson-like methods. So instead they defined their own clutter measure, see Eq. (3.12). The calculations are made over cells in the scene. The cells are square, with a side approximately twice the height of the targets. This size is probably chosen because it is usually the target's minimum dimension, and it will produce high clutter values for objects of approximately the same size as the target. According to the authors, it seemed to give higher values for subjectively more cluttered scenes, and it contains both spatial and intensity information. A center cell is placed so that the target is centered in it; the outer cells are placed around the center cell. Clutter is then defined as

    \text{clutter} = \left( \frac{1}{N} \sum_{i=1}^{N} \sigma_i^2 \right)^{1/2}          (3.12)

where N is the number of contiguous cells and σ_i is the radiance standard deviation over the i-th cell; it is probably an ordinary standard deviation calculated from the radiance property of the materials in every pixel of the cell, see Eq. (3.13).

    \sigma_i^2 = \sum_{x,y} (g_i(x, y) - \bar{g}_i)^2          (3.13)

In this case g_i(x, y) is the intensity in pixel (x, y) of the i-th cell and ḡ_i is the mean of the cell.

Figure 3.6. Example of cells in a scene with two black targets.

The signal to clutter ratio (SCR) is then calculated as in Eq. (3.14), and for negative contrasts (i.e. when the signal from the surroundings is stronger than that of the interesting object) as in Eq. (3.15). Here g_0 represents the object of interest.

    SCR = \frac{g_{0,max} - g_{mean}}{\text{clutter}}          (3.14)

    SCR = \frac{g_{0,min} - g_{mean}}{\text{clutter}}          (3.15)

Noticeable is that high clutter levels (low SCR) will result in detection probabilities lower than 0.9 (P_det < 0.9) for all resolutions. For example, 6 LP/TGT gives 0.8 probability of recognition from Table 3.3, and a pretty good identification according to Table 3.1, but at high clutter (SCR = 0.33) the detection probability drops far lower. So at high clutter levels, detection levels are closing in on recognition levels. On the other hand, for very low clutter (high SCR), detection probabilities stay close to 1 even for a resolution as low as 0.25 LP/TGT. It can be noticed that this is better than the usual 0.5 LP/TGT for hot-spot targets. Further, it does seem as if clutter plays a more important role for resolutions lower than 2.0 LP/TGT.
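The clutter measure of Eqs. (3.12)-(3.13) and the SCR of Eq. (3.14) can be sketched as below. This is our own simplified reading of the definitions (the cells are laid out on a regular grid rather than centered on the target, and g_mean is taken as the scene mean), so it should be seen as an illustration of the computation rather than the exact procedure of [15]:

```python
import numpy as np

def clutter(image, cell):
    """Eqs. (3.12)-(3.13): RMS of per-cell radiance variances.

    image : gray-scale scene
    cell  : cell side in pixels (about twice the target height)"""
    h, w = image.shape
    variances = []
    for y in range(0, h - cell + 1, cell):        # contiguous square cells
        for x in range(0, w - cell + 1, cell):
            g = image[y:y + cell, x:x + cell].astype(float)
            variances.append(np.sum((g - g.mean()) ** 2))  # Eq. (3.13)
    return np.sqrt(np.mean(variances))            # Eq. (3.12)

def scr(image, mask, cell):
    """Eq. (3.14): SCR for a positive-contrast target marked by mask."""
    g0_max = image[mask].astype(float).max()
    return (g0_max - image.mean()) / clutter(image, cell)
```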

The conclusion, made by the authors of [15], is that clutter can be divided into three distinct regions. In the first, for high clutter levels (SCR ≤ 1), a strong increase in resolution is needed even for a small decrease in SCR. In the second region, for medium clutter levels (1 < SCR ≤ 10), a doubling of the resolution is needed when the SCR drops from 10 to 1, in order to maintain the same level of probability. In the third region, low clutter levels (SCR > 10), it is assumed that the probability reaches one asymptotically as the SCR approaches infinity; resolution seems to play a minor role here.

Table 3.4. P_det from observations with different resolutions and SCR levels, from [15]: probability of detection as a function of the number of resolvable lines across the target, for different SCR levels.

Table 3.5. Detection probabilities for different clutter levels, P_det [15][11]: probability of detection as a function of the number of line pairs across the target, for low clutter (SCR > 10), medium clutter (1 ≤ SCR ≤ 10) and high clutter (SCR < 1).

Since there are three homogeneous regions within which the resolution does not vary much for the same level of probability, the SCR can be calculated without great precision; it is enough to determine which region it belongs to. This is good, since determining the clutter level is hard and tiresome work. It seems as if it is possible to avoid the problems of defining a correct and exact clutter measure.
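Since only the clutter region matters, the SCR-to-region mapping of Table 3.5 reduces to two thresholds, as in this small sketch (names assumed):

```python
def clutter_region(scr_value):
    """Table 3.5: only three coarse regions are needed, so the SCR
    does not have to be computed with great precision."""
    if scr_value < 1.0:
        return "high clutter"
    if scr_value <= 10.0:
        return "medium clutter"
    return "low clutter"

print(clutter_region(0.33))   # high clutter
print(clutter_region(25.0))   # low clutter
```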

3.8 Are the results only applicable for detection probabilities?

In [15] the influence of clutter is investigated for detection. A suggestion of how to calculate clutter levels is proposed, and the procedure is said to be possible to simplify. Maybe it is not necessary to further expand the model with recognition and identification in clutter? If an object has been detected in clutter, it has already been discriminated from false targets, and therefore recognition and identification are not dependent on clutter, but only on noise. But if detection only reports signal sources, and it is up to the recognition to determine whether it is an interesting object or not, a clutter extension is necessary for improvement. So the best (easiest) way to handle this is in the detection phase, by using Table 3.5.

Figure 3.7. Visualization of Table 3.5: detection probability versus line pairs over the minimum dimension, for the low, medium and high clutter regions (Schmieder-Weathersby).

3.9 Hot Spots

Hot spot is the name for a strong signal source on a uniform background, with a high target-to-background ratio; an aeroplane in the sky, for instance. Hot spots are generally easier to detect than other objects; usually 0.5 LP/TGT is used, but as seen above even lower resolutions are possible. They could therefore be modeled as objects without clutter.

3.10 Summary

The model presented in Eq. (2.4) has now been modeled to some extent. A description of how to model o and c has been presented. The model for the object signal concentrates on the resolution of the object, using the models presented by Johnson and C2NVEO. From the list of different characteristics in section 2.8.2 we can see that Eqs. (2.6), (2.7), (2.11) and (2.12) are all used. Eqs. (2.8), (2.9) and (2.10) have not been used in the model so far. The background and nearby objects have been taken into consideration, as well as intensities and contrasts.

Just because the object's position relative to the sensor is known, it does not automatically follow that it can be detected using the model presented above. The line of sight to the object must be considered. The object might be occluded by another object, for instance a car parked partly behind a bus. Is the object at all visible from the sensor's position, and to what extent? All of it or just a fraction? By letting the bounding box vary in size so that it fits the object's circumstances, this problem can be handled. This is only possible for small adjustments to the size of the bounding box.

The model has so far not taken into consideration how the signal is corrupted on its way from the objects in the scene to the sensor. The signal is affected by the atmosphere, depending on humidity, rain, fog and other factors. How to model this will be considered in the following chapters. A conversion table or function for noise, corresponding to the Schmieder-Weathersby one for clutter (as seen in Fig. 3.7), would be desirable. However, no such models have been found, and an investigation presented in the next chapter attempted to find such a transform.

37 24 System decomposition we can see that Eq. (2.6), (2.7), (2.11) and (2.12) are all used. Eq. (2.8), (2.9) and (2.10) has not been used in the model so far. The background and nearby objects has been taken into consideration, as well as intensities and contrasts. Just because the objects position relative to the sensor is known, it does not automatically mean that it can be detected using the model presented above. The line of sight to the object must be considered. The object might be occluded by another object, for instance a car parked partly behind a bus. Is the object at all visible from the sensors position and to what extent? All of it or just a fraction? By letting the bounding box vary in size so that it fits to the objects circumstances, this problem can be handled. This is only possible for small adjustments to the size of the bounding box. The model has so far not taken into consideration how the signal is corrupted on its way from the objects in the scene to the sensor. The signal should be effected by the atmosphere, dependent on humidity, rain, fog and other factors. How to model this will be considered in the following chapters. A conversion table or function correspoding to the Schmieder-Weathersby (as seen in Fig. 3.7) for clutter would be nice for noise. However no such models has been found and an investigation presented in the next chapter attempted to find such a transform.

38 Chapter 4 Development of a model for the effects of noise In this chapter the experiences from an investigation of human acquisition in noisy imagery are discussed and suggestions for further studies are made. 4.1 Introduction As stated earlier in this report, a transformation from resolution to probability is used in the this model. This basic transform is enhanced with the introduction of the SCR, which also adds additional functionality. The effects of noise in perception and target acquisition is however important. In order to find a transform resembling the Johnson criteria, when noise is present, a series of experiments was conducted. All the transforms provided in this report so far, are based on human performance. The attempts to produce a transform here will therefore also model human behavior. 4.2 Preparations To start with, four images were selected, all showing different scenes, see Fig The images contained vehicles and humans, considered as interesting objects, in a natural environment. From the images several, or all, of the interesting objects were measured in terms of the number of pixels in the short side of a bounding box. The scenes contain from one to nine different interesting measured objects. Since the dependence between resolution and probability of acquisition is wanted for different noise levels, a resolution pyramid 1 for every scene is produced. Different objects in the scene will have different resolution due to their size and therefore 1 A collection of the same image in multiple resolutions 25

39 26 Development of a model for the effects of noise many different resolutions can be obtained from one picture in the resolution pyramid. Resolutions from 0.2 to 56 LP/TGT were obtained. The clutter and the noise are considered to be independent of each other and can therefore be investigated independently. Note, that two objects in the same scene does not necessarily have the same clutter values. To each scene noise is added before the resolution is changed, in order to get disturbances of the images in the same resolution as the object. Four different sets of images were compiled, containing 36 images. Nine of each scene, with different resolutions and noise levels, ordered so that the higher resolutions were shown last. Two examples can be seen in Fig Conducting the experiments Each set was shown to a group of people with varying age, background, education and sex. The viewers had unlimited time, and were instructed to mark the part of the image they thought contained a vehicle or a person. If they thought they knew what type of object it was they were instructed to write that down, and when the object presented was identified it was marked in the same way. Nearly 40 persons were used, making over 6500 observations. 4.4 Results The hypothesis is that the probability of detection, recognition and identification of objects would decrease with increasing noise. The idea was to produce a transform, similar the one for clutter [15]. The analysis of the collected data revealed no significant changes for the probabilities investigated that could be directly liked to the noise. The initial hypothesis seems to be correct, but no numbers for a transform can be presented. It seems as if the wrong noise interval was investigated. There are some differences observed in the data but not enough to produce a transform based on resolution and SNR. Initially it seemed as if the transform for recognition moved closer to the one for identification, which would have been expected, but this could not be verified. It simply seems as if more test subjects (viewers) should be used and that lower SNR levels should be investigated. The fluctuations in the data are large, and can not be determined to result only from different SNR and resolutions. The conclusions from the test must be that it seems as if there are differences due to noise, and that it should be investigated further. 4.5 Suggestions for further studies When the noise was added before the change in resolution, the effect was that the noise was low pass filtered, resulting in a lower degree of noise. A different way

40 4.5 Suggestions for further studies 27 (a) Jeep traveling downhill. (b) Four-wheeled motorcycles. (c) Israeli checkpoint. (d) Tanks on a field. Figure 4.1. Scenes used in acquisition experiments. of doing this would be to add noise with a spatial size in the same order as the wanted resolution. The noise would then be clearer to the viewer. Now the viewer did not understand that many of the images shown were altered with additional noise. It could not be seen except when comparing noiseless and noisy images. For the images of high resolution it was still possible however. But when trying to add larger noise, it can not have the effect of altering the clutter levels. More viewers could also be used in order to get more stable data. The fluctuations in the data should then be less and the correlations to clutter and resolution should be more significant. The number of images in the resolution pyramid could probably have been fewer. It seems as if nine was too many, since there were objects of different size, and therefore resolution, in the scenes. Extra images could instead be used to introduce more scenes. The most important factor however is the noise levels used. It seems as if SNR

41 28 Development of a model for the effects of noise (a) Israeli checkpoint, with added noise. (b) Tanks on a field, with added noise. Figure 4.2. Examples of images used in the acquisition experiments. below 10 db should be in focus for further investigations.


This paper describes an analytical approach to the parametric analysis of target/decoy Parametric analysis of target/decoy performance1 John P. Kerekes Lincoln Laboratory, Massachusetts Institute of Technology 244 Wood Street Lexington, Massachusetts 02173 ABSTRACT As infrared sensing technology

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

TEAMS National Competition High School Version Photometry Solution Manual 25 Questions

TEAMS National Competition High School Version Photometry Solution Manual 25 Questions TEAMS National Competition High School Version Photometry Solution Manual 25 Questions Page 1 of 15 Photometry Questions 1. When an upright object is placed between the focal point of a lens and a converging

More information

Direct Variations DIRECT AND INVERSE VARIATIONS 19. Name

Direct Variations DIRECT AND INVERSE VARIATIONS 19. Name DIRECT AND INVERSE VARIATIONS 19 Direct Variations Name Of the many relationships that two variables can have, one category is called a direct variation. Use the description and example of direct variation

More information

Chapter 15. Light Waves

Chapter 15. Light Waves Chapter 15 Light Waves Chapter 15 is finished, but is not in camera-ready format. All diagrams are missing, but here are some excerpts from the text with omissions indicated by... After 15.1, read 15.2

More information

Institutionen för systemteknik

Institutionen för systemteknik Institutionen för systemteknik Department of Electrical Engineering Examensarbete 3D Position Estimation of a Person of Interest in Multiple Video Sequences: Person of Interest Recognition Examensarbete

More information

Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur

Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur Lecture - 20 Diffraction - I We have been discussing interference, the

More information

Cycle Criteria for Detection of Camouflaged Targets

Cycle Criteria for Detection of Camouflaged Targets Barbara L. O Kane, Ph.D. US Army RDECOM CERDEC NVESD Ft. Belvoir, VA 22060-5806 UNITED STATES OF AMERICA Email: okane@nvl.army.mil Gary L. Page Booz Allen Hamilton Arlington, VA 22203 David L. Wilson,

More information

Optics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles.

Optics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. Optics 1- Light Nature: a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. The particles were either emitted by the object being viewed or emanated from

More information

PHYSICS. Chapter 33 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT

PHYSICS. Chapter 33 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT PHYSICS FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E Chapter 33 Lecture RANDALL D. KNIGHT Chapter 33 Wave Optics IN THIS CHAPTER, you will learn about and apply the wave model of light. Slide

More information

OBJECT detection in general has many applications

OBJECT detection in general has many applications 1 Implementing Rectangle Detection using Windowed Hough Transform Akhil Singh, Music Engineering, University of Miami Abstract This paper implements Jung and Schramm s method to use Hough Transform for

More information

Engineered Diffusers Intensity vs Irradiance

Engineered Diffusers Intensity vs Irradiance Engineered Diffusers Intensity vs Irradiance Engineered Diffusers are specified by their divergence angle and intensity profile. The divergence angle usually is given as the width of the intensity distribution

More information

LIGHT: Two-slit Interference

LIGHT: Two-slit Interference LIGHT: Two-slit Interference Objective: To study interference of light waves and verify the wave nature of light. Apparatus: Two red lasers (wavelength, λ = 633 nm); two orange lasers (λ = 612 nm); two

More information

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD ECE-161C Cameras Nuno Vasconcelos ECE Department, UCSD Image formation all image understanding starts with understanding of image formation: projection of a scene from 3D world into image on 2D plane 2

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

On-line mission planning based on Model Predictive Control

On-line mission planning based on Model Predictive Control On-line mission planning based on Model Predictive Control Zoran Sjanic LiTH-ISY-EX-3221-2001 2001-12-05 On-line mission planning based on Model Predictive Control Master thesis Division of Automatic

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

Scattering/Wave Terminology A few terms show up throughout the discussion of electron microscopy:

Scattering/Wave Terminology A few terms show up throughout the discussion of electron microscopy: 1. Scattering and Diffraction Scattering/Wave Terology A few terms show up throughout the discussion of electron microscopy: First, what do we mean by the terms elastic and inelastic? These are both related

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

Available online at ScienceDirect. Energy Procedia 69 (2015 )

Available online at   ScienceDirect. Energy Procedia 69 (2015 ) Available online at www.sciencedirect.com ScienceDirect Energy Procedia 69 (2015 ) 1885 1894 International Conference on Concentrating Solar Power and Chemical Energy Systems, SolarPACES 2014 Heliostat

More information

Figure 1 - Refraction

Figure 1 - Refraction Geometrical optics Introduction Refraction When light crosses the interface between two media having different refractive indices (e.g. between water and air) a light ray will appear to change its direction

More information

Institutionen för systemteknik

Institutionen för systemteknik Institutionen för systemteknik Department of Electrical Engineering Examensarbete Machine Learning for detection of barcodes and OCR Examensarbete utfört i Datorseende vid Tekniska högskolan vid Linköpings

More information

Lecture 8 Object Descriptors

Lecture 8 Object Descriptors Lecture 8 Object Descriptors Azadeh Fakhrzadeh Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapter 11.1 11.4 in G-W Azadeh Fakhrzadeh

More information

Towards the completion of assignment 1

Towards the completion of assignment 1 Towards the completion of assignment 1 What to do for calibration What to do for point matching What to do for tracking What to do for GUI COMPSCI 773 Feature Point Detection Why study feature point detection?

More information

A New Protocol of CSI For The Royal Canadian Mounted Police

A New Protocol of CSI For The Royal Canadian Mounted Police A New Protocol of CSI For The Royal Canadian Mounted Police I. Introduction The Royal Canadian Mounted Police started using Unmanned Aerial Vehicles to help them with their work on collision and crime

More information

UNIT VI OPTICS ALL THE POSSIBLE FORMULAE

UNIT VI OPTICS ALL THE POSSIBLE FORMULAE 58 UNIT VI OPTICS ALL THE POSSIBLE FORMULAE Relation between focal length and radius of curvature of a mirror/lens, f = R/2 Mirror formula: Magnification produced by a mirror: m = - = - Snell s law: 1

More information

A Survey of Modelling and Rendering of the Earth s Atmosphere

A Survey of Modelling and Rendering of the Earth s Atmosphere Spring Conference on Computer Graphics 00 A Survey of Modelling and Rendering of the Earth s Atmosphere Jaroslav Sloup Department of Computer Science and Engineering Czech Technical University in Prague

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

Chapter 34. Images. In this chapter we define and classify images, and then classify several basic ways in which they can be produced.

Chapter 34. Images. In this chapter we define and classify images, and then classify several basic ways in which they can be produced. Chapter 34 Images One of the most important uses of the basic laws governing light is the production of images. Images are critical to a variety of fields and industries ranging from entertainment, security,

More information

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al.

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al. Atmos. Meas. Tech. Discuss., www.atmos-meas-tech-discuss.net/5/c741/2012/ Author(s) 2012. This work is distributed under the Creative Commons Attribute 3.0 License. Atmospheric Measurement Techniques Discussions

More information

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al.

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al. Atmos. Meas. Tech. Discuss., 5, C741 C750, 2012 www.atmos-meas-tech-discuss.net/5/c741/2012/ Author(s) 2012. This work is distributed under the Creative Commons Attribute 3.0 License. Atmospheric Measurement

More information

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic

More information

Optics INTRODUCTION DISCUSSION OF PRINCIPLES. Reflection by a Plane Mirror

Optics INTRODUCTION DISCUSSION OF PRINCIPLES. Reflection by a Plane Mirror Optics INTRODUCTION Geometric optics is one of the oldest branches of physics, dealing with the laws of reflection and refraction. Reflection takes place on the surface of an object, and refraction occurs

More information

Understanding Fraunhofer Diffraction

Understanding Fraunhofer Diffraction [ Assignment View ] [ Eðlisfræði 2, vor 2007 36. Diffraction Assignment is due at 2:00am on Wednesday, January 17, 2007 Credit for problems submitted late will decrease to 0% after the deadline has passed.

More information

Chapter 23. Geometrical Optics (lecture 1: mirrors) Dr. Armen Kocharian

Chapter 23. Geometrical Optics (lecture 1: mirrors) Dr. Armen Kocharian Chapter 23 Geometrical Optics (lecture 1: mirrors) Dr. Armen Kocharian Reflection and Refraction at a Plane Surface The light radiate from a point object in all directions The light reflected from a plane

More information

SPECIAL TECHNIQUES-II

SPECIAL TECHNIQUES-II SPECIAL TECHNIQUES-II Lecture 19: Electromagnetic Theory Professor D. K. Ghosh, Physics Department, I.I.T., Bombay Method of Images for a spherical conductor Example :A dipole near aconducting sphere The

More information

2/26/2016. Chapter 23 Ray Optics. Chapter 23 Preview. Chapter 23 Preview

2/26/2016. Chapter 23 Ray Optics. Chapter 23 Preview. Chapter 23 Preview Chapter 23 Ray Optics Chapter Goal: To understand and apply the ray model of light. Slide 23-2 Chapter 23 Preview Slide 23-3 Chapter 23 Preview Slide 23-4 1 Chapter 23 Preview Slide 23-5 Chapter 23 Preview

More information

Today. Lecture 4: Last time. The EM algorithm. We examine clustering in a little more detail; we went over it a somewhat quickly last time

Today. Lecture 4: Last time. The EM algorithm. We examine clustering in a little more detail; we went over it a somewhat quickly last time Today Lecture 4: We examine clustering in a little more detail; we went over it a somewhat quickly last time The CAD data will return and give us an opportunity to work with curves (!) We then examine

More information

Missile Simulation in Support of Research, Development, Test Evaluation and Acquisition

Missile Simulation in Support of Research, Development, Test Evaluation and Acquisition NDIA 2012 Missile Simulation in Support of Research, Development, Test Evaluation and Acquisition 15 May 2012 Briefed by: Stephanie Brown Reitmeier United States Army Aviation and Missile Research, Development,

More information

f. (5.3.1) So, the higher frequency means the lower wavelength. Visible part of light spectrum covers the range of wavelengths from

f. (5.3.1) So, the higher frequency means the lower wavelength. Visible part of light spectrum covers the range of wavelengths from Lecture 5-3 Interference and Diffraction of EM Waves During our previous lectures we have been talking about electromagnetic (EM) waves. As we know, harmonic waves of any type represent periodic process

More information

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation

More information

FRESNEL EQUATION RECIPROCAL POLARIZATION METHOD

FRESNEL EQUATION RECIPROCAL POLARIZATION METHOD FRESNEL EQUATION RECIPROCAL POLARIZATION METHOD BY DAVID MAKER, PH.D. PHOTON RESEARCH ASSOCIATES, INC. SEPTEMBER 006 Abstract The Hyperspectral H V Polarization Inverse Correlation technique incorporates

More information

Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light.

Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light. Chapter 7: Geometrical Optics The branch of physics which studies the properties of light using the ray model of light. Overview Geometrical Optics Spherical Mirror Refraction Thin Lens f u v r and f 2

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Matthew Schwartz Lecture 19: Diffraction and resolution

Matthew Schwartz Lecture 19: Diffraction and resolution Matthew Schwartz Lecture 19: Diffraction and resolution 1 Huygens principle Diffraction refers to what happens to a wave when it hits an obstacle. The key to understanding diffraction is a very simple

More information

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS This chapter presents a computational model for perceptual organization. A figure-ground segregation network is proposed based on a novel boundary

More information

Physically-Based Laser Simulation

Physically-Based Laser Simulation Physically-Based Laser Simulation Greg Reshko Carnegie Mellon University reshko@cs.cmu.edu Dave Mowatt Carnegie Mellon University dmowatt@andrew.cmu.edu Abstract In this paper, we describe our work on

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Third Edition Rafael C. Gonzalez University of Tennessee Richard E. Woods MedData Interactive PEARSON Prentice Hall Pearson Education International Contents Preface xv Acknowledgments

More information

Cursor Design Considerations For the Pointer-based Television

Cursor Design Considerations For the Pointer-based Television Hillcrest Labs Design Note Cursor Design Considerations For the Pointer-based Television Designing the cursor for a pointing-based television must consider factors that differ from the implementation of

More information

Dynamic Reconstruction for Coded Aperture Imaging Draft Unpublished work please do not cite or distribute.

Dynamic Reconstruction for Coded Aperture Imaging Draft Unpublished work please do not cite or distribute. Dynamic Reconstruction for Coded Aperture Imaging Draft 1.0.1 Berthold K.P. Horn 2007 September 30. Unpublished work please do not cite or distribute. The dynamic reconstruction technique makes it possible

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Advanced Processing Techniques and Classification of Full-waveform Airborne Laser...

Advanced Processing Techniques and Classification of Full-waveform Airborne Laser... f j y = f( x) = f ( x) n j= 1 j Advanced Processing Techniques and Classification of Full-waveform Airborne Laser... 89 A summary of the proposed methods is presented below: Stilla et al. propose a method

More information

Wallace Hall Academy

Wallace Hall Academy Wallace Hall Academy CfE Higher Physics Unit 2 - Waves Notes Name 1 Waves Revision You will remember the following equations related to Waves from National 5. d = vt f = n/t v = f T=1/f They form an integral

More information

Reconstructing Images of Bar Codes for Construction Site Object Recognition 1

Reconstructing Images of Bar Codes for Construction Site Object Recognition 1 Reconstructing Images of Bar Codes for Construction Site Object Recognition 1 by David E. Gilsinn 2, Geraldine S. Cheok 3, Dianne P. O Leary 4 ABSTRACT: This paper discusses a general approach to reconstructing

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

Section A1: Gradients of straight lines

Section A1: Gradients of straight lines Time To study this unit will take you about 10 hours. Trying out and evaluating the activities with your pupils in the class will be spread over the weeks you have planned to cover the topic. 31 Section

More information

Big Mathematical Ideas and Understandings

Big Mathematical Ideas and Understandings Big Mathematical Ideas and Understandings A Big Idea is a statement of an idea that is central to the learning of mathematics, one that links numerous mathematical understandings into a coherent whole.

More information

Physics 1CL WAVE OPTICS: INTERFERENCE AND DIFFRACTION Fall 2009

Physics 1CL WAVE OPTICS: INTERFERENCE AND DIFFRACTION Fall 2009 Introduction An important property of waves is interference. You are familiar with some simple examples of interference of sound waves. This interference effect produces positions having large amplitude

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

( ) = First Bessel function, x = π Dθ

( ) = First Bessel function, x = π Dθ Observational Astronomy Image formation Complex Pupil Function (CPF): (3.3.1) CPF = P( r,ϕ )e ( ) ikw r,ϕ P( r,ϕ ) = Transmittance of the aperture (unobscured P = 1, obscured P = 0 ) k = π λ = Wave number

More information

mywbut.com Diffraction

mywbut.com Diffraction Diffraction If an opaque obstacle (or aperture) is placed between a source of light and screen, a sufficiently distinct shadow of opaque (or an illuminated aperture) is obtained on the screen.this shows

More information

Color Characterization and Calibration of an External Display

Color Characterization and Calibration of an External Display Color Characterization and Calibration of an External Display Andrew Crocker, Austin Martin, Jon Sandness Department of Math, Statistics, and Computer Science St. Olaf College 1500 St. Olaf Avenue, Northfield,

More information

SIMULATED LIDAR WAVEFORMS FOR THE ANALYSIS OF LIGHT PROPAGATION THROUGH A TREE CANOPY

SIMULATED LIDAR WAVEFORMS FOR THE ANALYSIS OF LIGHT PROPAGATION THROUGH A TREE CANOPY SIMULATED LIDAR WAVEFORMS FOR THE ANALYSIS OF LIGHT PROPAGATION THROUGH A TREE CANOPY Angela M. Kim and Richard C. Olsen Remote Sensing Center Naval Postgraduate School 1 University Circle Monterey, CA

More information

4. A bulb has a luminous flux of 2400 lm. What is the luminous intensity of the bulb?

4. A bulb has a luminous flux of 2400 lm. What is the luminous intensity of the bulb? 1. Match the physical quantities (first column) with the units (second column). 4. A bulb has a luminous flux of 2400 lm. What is the luminous intensity of the bulb? (π=3.) Luminous flux A. candela Radiant

More information

Light: Geometric Optics

Light: Geometric Optics Light: Geometric Optics Regular and Diffuse Reflection Sections 23-1 to 23-2. How We See Weseebecauselightreachesoureyes. There are two ways, therefore, in which we see: (1) light from a luminous object

More information

index of refraction-light speed

index of refraction-light speed AP Physics Study Guide Chapters 22, 23, 24 Reflection, Refraction and Interference Name Write each of the equations specified below, include units for all quantities. Law of Reflection Lens-Mirror Equation

More information

Mathematics Curriculum

Mathematics Curriculum 6 G R A D E Mathematics Curriculum GRADE 6 5 Table of Contents 1... 1 Topic A: Area of Triangles, Quadrilaterals, and Polygons (6.G.A.1)... 11 Lesson 1: The Area of Parallelograms Through Rectangle Facts...

More information

Homework Set 3 Due Thursday, 07/14

Homework Set 3 Due Thursday, 07/14 Homework Set 3 Due Thursday, 07/14 Problem 1 A room contains two parallel wall mirrors, on opposite walls 5 meters apart. The mirrors are 8 meters long. Suppose that one person stands in a doorway, in

More information

COHERENCE AND INTERFERENCE

COHERENCE AND INTERFERENCE COHERENCE AND INTERFERENCE - An interference experiment makes use of coherent waves. The phase shift (Δφ tot ) between the two coherent waves that interfere at any point of screen (where one observes the

More information

3.2 Level 1 Processing

3.2 Level 1 Processing SENSOR AND DATA FUSION ARCHITECTURES AND ALGORITHMS 57 3.2 Level 1 Processing Level 1 processing is the low-level processing that results in target state estimation and target discrimination. 9 The term

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling

Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 24 Solid Modelling Welcome to the lectures on computer graphics. We have

More information

Light and the Properties of Reflection & Refraction

Light and the Properties of Reflection & Refraction Light and the Properties of Reflection & Refraction OBJECTIVE To study the imaging properties of a plane mirror. To prove the law of reflection from the previous imaging study. To study the refraction

More information

New Opportunities for 3D SPI

New Opportunities for 3D SPI New Opportunities for 3D SPI Jean-Marc PEALLAT Vi Technology St Egrève, France jmpeallat@vitechnology.com Abstract For some years many process engineers and quality managers have been questioning the benefits

More information

Exterior Orientation Parameters

Exterior Orientation Parameters Exterior Orientation Parameters PERS 12/2001 pp 1321-1332 Karsten Jacobsen, Institute for Photogrammetry and GeoInformation, University of Hannover, Germany The georeference of any photogrammetric product

More information

CHAPTER 5 MOTION DETECTION AND ANALYSIS

CHAPTER 5 MOTION DETECTION AND ANALYSIS CHAPTER 5 MOTION DETECTION AND ANALYSIS 5.1. Introduction: Motion processing is gaining an intense attention from the researchers with the progress in motion studies and processing competence. A series

More information

Discover how to solve this problem in this chapter.

Discover how to solve this problem in this chapter. A 2 cm tall object is 12 cm in front of a spherical mirror. A 1.2 cm tall erect image is then obtained. What kind of mirror is used (concave, plane or convex) and what is its focal length? www.totalsafes.co.uk/interior-convex-mirror-900mm.html

More information

25-1 Interference from Two Sources

25-1 Interference from Two Sources 25-1 Interference from Two Sources In this chapter, our focus will be on the wave behavior of light, and on how two or more light waves interfere. However, the same concepts apply to sound waves, and other

More information

UNIT-2 IMAGE REPRESENTATION IMAGE REPRESENTATION IMAGE SENSORS IMAGE SENSORS- FLEX CIRCUIT ASSEMBLY

UNIT-2 IMAGE REPRESENTATION IMAGE REPRESENTATION IMAGE SENSORS IMAGE SENSORS- FLEX CIRCUIT ASSEMBLY 18-08-2016 UNIT-2 In the following slides we will consider what is involved in capturing a digital image of a real-world scene Image sensing and representation Image Acquisition Sampling and quantisation

More information

An Intuitive Explanation of Fourier Theory

An Intuitive Explanation of Fourier Theory An Intuitive Explanation of Fourier Theory Steven Lehar slehar@cns.bu.edu Fourier theory is pretty complicated mathematically. But there are some beautifully simple holistic concepts behind Fourier theory

More information

8.B. The result of Regiomontanus on tetrahedra

8.B. The result of Regiomontanus on tetrahedra 8.B. The result of Regiomontanus on tetrahedra We have already mentioned that Plato s theory that the five regular polyhedra represent the fundamental elements of nature, and in supplement (3.D) to the

More information