
Source Localization of MEG Generation Using Spatio-temporal Kalman Filter
by Neil U Desai
B.S. Computer Science, Massachusetts Institute of Technology, 2004
SUBMITTED TO THE DEPARTMENT OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ENGINEERING IN ELECTRICAL ENGINEERING AND COMPUTER SCIENCE AT THE MASSACHUSETTS INSTITUTE OF TECHNOLOGY, JUNE 2005
Copyright Neil U Desai. All rights reserved. The author hereby grants to MIT permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole or in part.
Signature of Author: Department of Electrical Engineering and Computer Science
Certified by: Dr. Emery Brown, Director, Neuroscience Statistics Research Laboratory, MGH, Thesis Supervisor
Certified by: Dr. Stephen Burns, Harvard-MIT Division of Health Sciences and Technology, Thesis Advisor
Accepted by: Professor Arthur C. Smith, Chairman, Department Committee on Graduate Theses

Source Localization for Magnetoencephalography By Spatio-Temporal Kalman Filtering and Fixed-Interval Smoothing, by Neil U Desai. Submitted to the Department of Electrical Engineering and Computer Science on May 19, 2005, in partial fulfillment of the requirements for the Degree of Master of Engineering in Electrical Engineering and Computer Science.
ABSTRACT
The inverse problem for magnetoencephalography (MEG) involves estimating the magnitude and location of sources inside the brain that give rise to the magnetic field recorded on the scalp as subjects execute cognitive, motor and/or sensory tasks. Given a forward model which describes how the signals emanate from the brain sources, a standard approach for estimating the MEG sources from scalp measurements is to use regularized least squares approaches such as LORETA, MNE, and VARETA. These regularization methods impose a spatial constraint on the MEG inverse solution, yet they do not consider the temporal dynamics inherent to the biophysics of the problem. To address these issues, we present a state-space formulation of the MEG inverse problem by specifying a state equation that describes the temporal dynamics of the MEG sources. Using a standard forward model as the observation equation, we derive spatio-temporal Kalman filter and fixed-interval smoothing algorithms for MEG source localization. To compare the methods analytically, we present a Bayesian derivation of the regularized least squares and Kalman filtering methods. This analysis reveals that the estimates computed from the static methods bias the location of the sources toward zero. We compare the static, Kalman filter and fixed-interval smoothing methods in a simulated study of MEG data designed to emulate somatosensory MEG sources with different signal-to-noise ratios (SNR) and mean offsets. The data were mixtures of sinusoids with SNR ranging from 1 to 10 and mean offset ranging from 0 to 20. With both decrease in SNR and increase in mean offset, the Kalman filter and the fixed-interval smoothing methods gave uniformly more accurate estimates of source locations in terms of mean square error. Because the fixed-interval smoothing estimates were based on all recorded measurements, they had uniformly lower mean-squared errors than the Kalman estimates. These results suggest that state-space models can offer a more accurate approach to localizing brain sources from MEG recordings and that this approach may enhance appreciably the use of MEG as a non-invasive tool for studying brain function.
Thesis Supervisor: Emery N. Brown. Title: Director, Neuroscience Statistics Research Laboratory, MGH
Thesis Advisor: Stephen Burns. Title: Senior Lecturer, Harvard-MIT Division of Health Sciences and Technology

Acknowledgements
This research project was a collaborative effort. I would like to thank Dr. Emery Brown for giving me the opportunity to work in his laboratory and for providing guidance throughout the project. Dr. Brown also gave me the opportunity to meet with other senior research scientists and attend many meetings with fellow colleagues. I would also like to thank Dr. Chris Long, who helped me with the many signal processing and mathematical challenges I found along the way. Finally, I would like to thank Dr. Matti Hämäläinen, whose patience allowed me to understand the resources used to create realistic brain signals and forward solutions.

Table of Contents
1. Introduction
2. Inverse Methods
3. Simulations
4. Results
5. Conclusions
6. References

1. Introduction:
1.1 MEG
Measurements of the electrical currents flowing within the neurons of the brain can be used to aid in the diagnosis of many disorders, as well as to study basic brain function. One way to observe these tiny electrical currents is to measure the magnetic fields that they produce. Because magnetic fields pass through the skull and scalp as if they were transparent, they can be non-invasively measured outside the head using a technique called magnetoencephalography, or MEG. MEG has important advantages over other functional imaging techniques. First, the temporal resolution is excellent, allowing researchers to monitor brain activity on the millisecond timescale. Furthermore, MEG is completely non-invasive, requiring neither strong external magnetic fields (e.g., fMRI) nor injections of radioactive substances (e.g., PET). The magnetic fields that are evoked from neurons in the brain are very weak; they have strengths of 0.2 to 1 pT, less than a hundred millionth of the earth's magnetic field (Hamalainen, 1993). However, with the use of a sensitive magnetic-field sensor, the superconducting quantum interference device (SQUID), these small magnetic fields can be measured. The physics of how the SQUID works is not relevant for this research. Because the magnetic fields are so small, all MEG recordings must be conducted in a special magnetically shielded room to reduce the interference from larger magnetic fields. The patient sits in a chair and the MEG helmet is positioned directly above the patient's head, see figure 1.

Figure 1: Patient situated under the MEG
Electroencephalography (EEG) is a similar non-invasive way to measure the brain's electrical activity. EEG requires a cap in which electrodes are placed. The cap must be fitted securely on the patient's head to ensure accurate measurements. There are two critical differences between EEG and MEG. First, EEG measures the electric field, while MEG measures the magnetic field. Second, MEG is insensitive to the conductivities of the scalp, skull, and brain, which can affect EEG measurements. Neurons communicate through the generation of electrical potentials, called action potentials. These travel from neuron to neuron by traversing down the length of the neuronal axon. The progress of these potentials depends on the flow of a small quantity of ions down the center of the axon. Since ions have electrical charge, their

flow in one direction qualifies as an electrical current. Though this current is small, it will generate a very small magnetic field looping around the neuronal axon. When a number of neurons are simultaneously active, the magnetic field generated by these neurons is strong enough to be recorded outside the surface of the head using the SQUID. Magnetic fields appear whenever an electrical current is generated. If the electrical current has a net flow along a line (like a DC electrical current down a wire), then circular magnetic fields will appear in loops around that line, following the 'right-hand rule' of electromagnetic theory. The figure below shows a current flow, Jv, and the associated magnetic field, B.
Figure 2: The current Jv produces a magnetic field, B, in loops around the current.
The magnetic field is sensed on the surface after traveling through the skull and scalp without any distortion. Previous research (Hari and Salmelin, 1997) determined that the conductivity of the skull and scalp is uniform, thus allowing the magnetic field to propagate without needing to consider other parameters. By the nature of the electric field, radial currents (sources that travel perpendicular to the cortex) do not produce

detectable magnetic fields, while tangential sources (parallel to the cortex) will produce magnetic fields, see figure 3.
Figure 3: Only currents that run parallel to the cortex produce a measurable magnetic field (B ≠ 0), where B is the induced magnetic field and V is the electric field associated with a current source. The arrows parallel to the cortex will emanate a detectable magnetic field, while those that are perpendicular to the cortex will not.
The MEG detects the magnetic field using the coils in the SQUID. The coils are configured into 306 MEG channels. This configuration contains 204 gradiometers and 102 magnetometers. The gradiometers have more accurate sensitivity than the magnetometers, and when analyzing MEG data the magnetometers are generally ignored. The coil configurations of the channels are shown below:

Figure 4: Configuration of the coils on the MEG
Depending on how one samples the surface of the brain, there are thousands of sources within the brain that elicit these magnetic fields. The fields from these sources propagate to the surface according to Maxwell's equations and the Biot-Savart law; how they propagate is described by a suitable forward model. The forward problem in MEG source localization is to determine the field at the scalp measurement points (the coils) from the active sources inside the brain. The purpose of the inverse problem, then, is to estimate the location and intensity of these sources given only the measurements produced at the scalp. The problem is ill-posed because the number of measurements is significantly smaller than the number of sources.
1.2 Forward Problem:
Before discussing methods of inverse localization, it is important to understand the forward model. This model depicts the relationship between the current sources

and the data coming from the MEG. It uses Maxwell's equations to predict how the current sources inside the brain radiate to the scalp. As described above, the current sources in the brain generate a current flow, J(r'), where r' is the location of the source that produces the current. The total current flow, J(r'), can be divided into primary, Jp(r'), and volume, Jv(r'), currents. The primary currents directly reflect the activity of the corresponding neurons. These current flows exist in a conductive medium, i.e. the brain tissue. The volume currents are produced in other media, such as the skull and scalp. Jv(r') results from the effect of the electric field on outside charge carriers and represents currents of the macroscopic field (Williamson and Kaufman, 1987): Jv(r') = σ(r')E(r'), where E is the electric field and σ(r') is the conductivity, which we will here assume to be isotropic. Under the standard quasi-static approximation to Maxwell's equations, E = -∇V, where V is the electric potential. The primary current flow Jp(r') is the driving force of the total current flow and can be defined at the macroscopic level as the current that does not include the volume current: J(r') = Jp(r') + Jv(r') = Jp(r') - σ(r')∇V(r'). The total current density may not have any volume currents (for example, when a current forms a closed loop), but all total currents must include a component that is directly associated with the primary current. For MEG, it is the primary currents we are interested in, since they represent areas of the brain that relate to sensory and motor processes (Mosher et al., 1999). We

will modify our current models to stress the importance of the primary current density. If we assume a simple head model consisting of contiguous regions, each of constant conductivity (in the simplest case, a single homogeneous sphere), then the relationship between a source inside the brain and a measurement point outside the brain is given by the Biot-Savart law:
B(r) = (μ0/4π) ∫_G J(r') × (r - r') / |r - r'|³ dr'
where B is the magnetic field, r is the location of the measurement point, r' is the location of the current source, and G is the closed volume containing the source. After distinguishing between primary and secondary (volume) current sources, this equation can be rewritten as:
B(r) = B0(r) + (μ0/4π) Σ_ij (σ_i - σ_j) ∫_{S_ij} V(r') (r - r') / |r - r'|³ × dS'
where the summation is over all conductivity boundaries and B0(r) is the magnetic field resulting from the primary currents. The equations above are used in a forward model assuming a spherical homogeneous head model. However, the forward problem must in general be solved numerically for arbitrary head shapes. In this research, a boundary element method (BEM) was used for solving the forward problem. The theory and mathematics involved are very similar to those for a homogeneous sphere model. In the BEM, the boundary surfaces are partitioned into thousands of small triangles (see figure below).

Figure 5: Under the BEM the brain surface is divided into thousands of triangles.
The mathematics describing how a magnetic field is detected at the measurement point is applied as a general theory; the equations and relationships can be formed to fit any particular model, such as the BEM. There are several different head models, such as the three-shell sphere or the finite-element method (FEM), which take alternate parameters into consideration. For the purpose of this report, we need not go into the specific differences between head models. The BEM, the method used for this research, is computed with current state-of-the-art software.
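As a rough illustration of the primary-current term B0(r) in the Biot-Savart relationship above, the following Python sketch evaluates the free-space field of a single current dipole at a set of sensor positions. This is only an illustrative sketch, not the BEM forward solver used in this thesis: volume currents, the conductor boundaries, and the actual gradiometer coil geometry are all ignored, and the function and variable names (biot_savart_dipole, the sensor ring) are hypothetical.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart_dipole(r_sensors, r_source, q):
    """Free-space magnetic field of a current dipole q (A*m) located at r_source,
    evaluated at each sensor position in r_sensors (shape Nc x 3).
    Only the primary-current term B0(r) is computed; volume currents and the
    conductor geometry handled by the BEM are ignored."""
    d = r_sensors - r_source                                # vectors r - r'
    dist3 = np.linalg.norm(d, axis=1, keepdims=True) ** 3   # |r - r'|^3
    return MU0 / (4 * np.pi) * np.cross(q, d) / dist3       # (Nc, 3) field vectors

# Toy usage: one tangential dipole under a ring of 12 sensors 10 cm from the origin
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
sensors = 0.10 * np.c_[np.cos(angles), np.sin(angles), np.zeros_like(angles)]
B = biot_savart_dipole(sensors,
                       r_source=np.array([0.0, 0.0, 0.02]),
                       q=np.array([1e-8, 0.0, 0.0]))        # 10 nA*m dipole
print(B.shape)  # (12, 3)
```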

2. Inverse Methods:
2.1 Problem Formulation:
Determining the strength and location of a neuronal current based on the MEG scalp measurement data is an ill-conditioned inverse problem. After defining a model for source activation, the measurement data are used to specify the parameters of the model, revealing information about the strength of current in different parts of the brain. The mathematical methods of estimation were first developed in the 18th and early 19th centuries. Bayesian statistics says that given the a priori probability distribution and a measurement value, one can obtain the a posteriori distribution describing the knowledge about possible values of the parameters. In 1777, Bernoulli described the method of maximum likelihood, stating that a parameter value should be chosen that makes the observed values most probable. Finally, Gauss invented the method of least squares, and current estimation methods solve inverse problems using a regularized least-squares approach or by maximizing the a posteriori probability distribution. In the 1960s, Kalman published an effective method of updating parameter estimates given incoming measurement data, which is known as the Kalman filter. The MEG inverse problem, as with many other inverse problems, is ill-posed, i.e., there may exist many combinations of parameters that produce the same results, thus having the problem of non-uniqueness. We will examine the evolution of methods used in source localization. Before we continue, it is important to set up the problem in terms of variables.

Since most biomedical inverse solutions use linear algebraic formulations, a matrix setup is a good framework to follow. Call the measurement (observation) data vector Y and the current source vector J. The vector Y encompasses all the measurements from the 204 channels:
Y(t) = (y(1, t), y(2, t), ..., y(Nc, t))', t = 1, ..., T
where ' denotes matrix transposition. Collected over all time points, the measurements have dimension Nc x T, where Nc is the number of channels and T is the number of time points. The vector J contains the sources, which in this research are gray matter voxels. At each voxel, there is a local three-dimensional current vector:
j(v, t) = (jx(v, t), jy(v, t), jz(v, t))'
where v is the voxel label. The column vector of all current vectors (for all voxels) is denoted by:
J(t) = (j(1, t)', j(2, t)', ..., j(Nv, t)')'
where Nv is the number of voxels. Thus, over all time points J has dimension 3Nv x T. The problem is ill-posed because Nc << Nv. Depending on how the brain is divided into discrete segments, Nv can vary from 400 to 4,000, each voxel having an x-, y-, and z-current density. It is difficult to estimate the strength and location of all Nv voxels (3Nv current components), given only the 204 channel measurements on the scalp. This leads to the non-uniqueness problem described above, since Nc << 3Nv. The forward model, explained in section 1.2, condenses all of its information about the source propagation and current density into a matrix known as the lead-field matrix K, which maps the intracranial neuronal currents to the extracranial MEG signals. Each row of K represents a channel, and the values in that row are the weights

that each voxel provides to the value of that channel. K also acts as an order-transformation matrix, effectively transferring the order of the problem from the source space (3Nv x T) to the measurement space (Nc x T), which is significantly smaller. Thus, K has dimensions of Nc x Nv. The linearity of the lead-field matrix allows the following relationship:
Y_t = K J_t
The above model can be easily extended to multiple time samples, thus creating a spatio-temporal model. Additionally, there exists an observation noise, e, which is an additive random element. Thus, we evolve the equation above into:
Y_t = K J_t + e_t
Having stated the problem in terms of variables, we can now examine the inverse methods. The overdetermined and underdetermined dipole models are both instantaneous solutions, i.e., they do not use temporal information and estimate the current densities at each instant of time independently. The dynamic solutions use previous time states and a new measurement to provide an a posteriori estimate of the state. This enables the estimate to have spatio-temporal characteristics.
2.2 Static Models:
Overdetermined dipole model (least-squares):
The most basic model for MEG is the overdetermined dipole model. Here, the basic strategy is to fix the number of candidate sources and use a nonlinear estimation

algorithm to minimize the squared error between the actual data and the estimated value calculated from the forward model. The assumption is that the activated source area is small compared to the distance to the measurement channels, so the measured magnetic field resembles that generated by a current dipole. The setup for this situation is as follows:
Y = K(r, q) j + e
where K(r, q) represents the signals generated by unit dipoles with given locations r = (x1, y1, z1, ..., xNv, yNv, zNv) and orientations q = (q1x, q1y, q1z, ..., qNvx, qNvy, qNvz). The observation noise, e, has zero mean and covariance matrix Σe. The maximum likelihood estimate of the dipole parameters (r, q, j) can be calculated by minimizing the least-squares cost function:
E(r, q, j) = ||Y - K(r, q) j||²_F
where the square of the Frobenius norm, ||X||²_F, is the sum of the squares of the elements of the matrix. Taking into account the noise covariance Σe, the cost function becomes the weighted form
f(J) = (Y - KJ)' Σe⁻¹ (Y - KJ)
The MEG sensors are extremely sensitive, intercepting noise from all active fields in the room. Thus the noise distribution of the measurement may include some non-Gaussian contributions, but on average the noise settles upon a normal distribution.
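To make the overdetermined fitting strategy concrete, the sketch below minimizes the squared-error cost over a single dipole location, solving for the dipole moment linearly at each candidate location. It is a sketch under stated assumptions rather than the thesis's implementation: forward_field is a hypothetical stand-in for a forward model that returns the lead field of a unit dipole at a given location (for example, the free-space field above), a single dipole and an identity noise covariance are assumed, and the optimizer choice is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_single_dipole(Y, forward_field, r0):
    """Illustrative single-dipole least-squares fit.
    Y             : (Nc,) measured field at one time instant
    forward_field : callable r -> (Nc, 3) lead field of a unit dipole at location r
    r0            : (3,) initial guess for the dipole location
    For a fixed location the dipole moment enters linearly, so it is found by
    ordinary least squares; only the location is optimized nonlinearly."""
    def cost(r):
        K = forward_field(r)                        # (Nc, 3)
        j, *_ = np.linalg.lstsq(K, Y, rcond=None)   # best moment for this location
        resid = Y - K @ j
        return float(resid @ resid)                 # squared-error cost E(r, j)

    res = minimize(cost, r0, method="Nelder-Mead")  # derivative-free local search
    K = forward_field(res.x)
    j, *_ = np.linalg.lstsq(K, Y, rcond=None)
    return res.x, j                                 # fitted location and moment
```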

The problem with this model is that true source areas are not single, point-like entities. Previous research has shown that an area of 1 cm² or larger can produce detectable signals (Hamalainen, 1993). Thus, the estimated source locations will not coincide exactly with the true locations, but will be close to the general area of activation. In implementing the least-squares method, the most significant choice is the number of voxels to use, Nv. If one uses too many voxels, then the sources can be made to fit any data set, regardless of quality. However, if too few voxels are used, the intensity of the current dipole might not be strong enough to be detected at the sensor locations. Furthermore, as the number of voxels increases, the minimization of the cost function may get trapped in local minima, which is undesirable. The solution to the problem is tractable because the number of dipole sources is much less than the number of channels. This is different when using the underdetermined dipole model in the next section.
Underdetermined dipole model (least squares with regularization):
The previous least-squares solution is the simplest case and serves as a foundation for deriving more sophisticated inverse algorithms. In the underdetermined dipole model, the dipole locations are predefined over the volume at the voxel locations. Here, the number of voxels, Nv, is greater than the number of measurements, Nc. If we observe the system at a single time point, we can set up a Gaussian linear model as follows:

Y_t = K J_t + e_t   (1)
J_t = Hμ + R_t   (2)
where J contains the voxel strengths and e is the measurement noise, distributed as a Gaussian random vector with mean zero and covariance matrix Σe. Similarly, J has an additive noise element R, which is zero mean and has a covariance matrix ΣR. The underdetermined problem results in minimizing the following cost function:
f(J) = ||Y - KJ||²_{Σe⁻¹} + λ ||J - Hμ||²_{ΣR⁻¹}   (3)
where λ is a regularization parameter that trades off the measurement error against the prior information. If λ is small, the data-fit term ||Y - KJ|| dominates and the estimate follows the measurements closely; if λ is large, the solution is pulled toward the prior mean Hμ. Thus, λ penalizes departures from the prior. The corresponding least-squares error function is:
f(J) = (Y - KJ)' Σe⁻¹ (Y - KJ) + λ (J - Hμ)' ΣR⁻¹ (J - Hμ)
The first term in this equation, (Y - KJ)' Σe⁻¹ (Y - KJ), is the same as the least-squares cost used in the overdetermined method. In order to identify a solution, the second term, λ (J - Hμ)' ΣR⁻¹ (J - Hμ), carries information about the mean of the prior and the covariance structure of the source noise. The solution to the error function is:
Ĵ = Hμ + ΣR K' (K ΣR K' + λ Σe)⁻¹ [Y - K Hμ]   (4)
where Ĵ is the estimate of J.

This form brings about some interesting insights. The solutions are instantaneous (static) because J is computed at a single snapshot in time. This poses an issue because there is no continuity constraint on the temporal characteristics of the solution, i.e., the estimate of the source distribution at time t = t' depends only on the measurement at time t = t'. Another insight is that both static least-squares models are the special case where the prior distribution on the sources is a Gaussian distribution with Hμ = 0. This reduces Equation 4 to
Ĵ = ΣR K' (K ΣR K' + λ Σe)⁻¹ Y   (5)
which is the standard form of the least-squares solution with regularization. Currently, this solution is widely used and adapted to include other physiological constraints, but for the purpose of this paper we will compare all future estimates to this static or 'naive' estimate. A major drawback with this model is that, since the prior guess of the mean at each time point is 0, each solution resets whatever previous knowledge it had. The lack of continuity in time may prevent accurate estimation of the current state.
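As a minimal sketch of the static 'naive' estimate in Equation 5, the Python snippet below applies the same linear inverse operator to every time point. The helper name, the toy dimensions, and the identity choices for the prior and noise covariances are assumptions for illustration; they are not the covariances used in the thesis.

```python
import numpy as np

def static_regularized_estimate(Y, K, Sigma_R, Sigma_e, lam):
    """Regularized least-squares (static) estimate, Equation 5:
        J_hat = Sigma_R K' (K Sigma_R K' + lam * Sigma_e)^{-1} Y,
    computed independently at every time point (zero prior mean).
    Y: (Nc, T) measurements, K: (Nc, Nv) lead field,
    Sigma_R: (Nv, Nv) source covariance, Sigma_e: (Nc, Nc) noise covariance."""
    G = K @ Sigma_R @ K.T + lam * Sigma_e        # (Nc, Nc)
    W = Sigma_R @ K.T @ np.linalg.inv(G)         # (Nv, Nc) linear inverse operator
    return W @ Y                                 # (Nv, T) source estimates

# Toy usage with small random dimensions (the thesis uses Nc = 204, Nv = 837)
rng = np.random.default_rng(0)
Nc, Nv, T = 20, 80, 50
K = rng.standard_normal((Nc, Nv))
Y = rng.standard_normal((Nc, T))
J_hat = static_regularized_estimate(Y, K, np.eye(Nv), np.eye(Nc), lam=0.5)
print(J_hat.shape)  # (80, 50)
```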

2.3 Dynamic Models:
State-Space Model:
In order to develop the equations for the linear filter, a suitable state-space model for the system is presented as follows:
J_t = F J_{t-1} + η_t   (6)
Y_t = K J_t + e_t   (7)
where η_t is a source white noise with mean zero and covariance matrix Q = E(η_t η_t'). Consider the first-order autoregressive process AR(1) in Equation 6. This equation states that the current vector at time t is equal to some constant matrix multiplied by the current vector at time t-1, plus noise. This adds the idea of temporal continuity to the system. If we let F = I_{3Nv}, where I_x is an identity matrix of size x, Equation 6 becomes:
J_t = I_{3Nv} J_{t-1} + η_t   (8)
so that the current estimate of a voxel v at time t is equal to its value at t-1 plus some random noise. An additional constraint can be placed so that the voxel at time t is equal to its value at t-1 and proportional to its neighbors at time t-1. To account for neighborhood interactions, we can rewrite Equation 8 as:
J(v, t) = A J(v, t-1) + B Σ_{v' ∈ Z(v)} J(v', t-1) + η(v, t)   (9)
where A and B are constants and Z(v) contains the neighbors of voxel v. The summation is over all the neighbors of v. The neighbors can be easily found by defining a spatial radius and determining which voxels fall within that radius. The resulting values for this voxel are scaled by a factor B, which averages the values by dividing the total sum by the number of neighbors. A different method is used to select the parameter A. Since we want the value for voxel v at time t-1 to be emphasized more than the values for the neighbors of voxel v at t-1, we start with A equal to 1 and then scale the row so that the summation over the entire row for voxel v is 1. Consider the following 2-D spatial representation of a set of voxels.

Figure 6: Spatial map of a 2-D cross-section of the brain. There are 40 voxels. The following example assumes voxel 18 is being observed, with neighbors 11, 17, 19 and 26.
Assume that this is how the brain looks on a cross-section perpendicular to the z-axis. There are a total of 40 voxels, each numbered in the spatial configuration above. Let's assume we are looking at voxel 18 for the moment. The neighborhood of voxels, Z(18), is [11, 17, 19, 26]: two in the x-direction and two in the y-direction. According to Equation 9, row 18 of the system will evolve as
J(18, t) = A J(18, t-1) + B (J(11, t-1) + J(17, t-1) + J(19, t-1) + J(26, t-1)) + η(18, t)
The parameter B will initially be 0.25 since there are 4 neighbors, and the parameter A will be set to 1. However, the entire row will be normalized so the sum adds to 1. Thus, the parameter B will now be 1/8 and the parameter A will be 0.5. This way, the sum is

0.5 times voxel 18's previous value, plus 1/8 times voxel 11's previous value, plus 1/8 times voxel 17's previous value, and so on. Since we are adding up similar dimensions, it is possible to combine all the information about voxel v as well as its neighbors, Z(v), into a single matrix, F. In the example provided above, the 18th row of F, corresponding to voxel 18, is as follows:
Figure 7: Portion of the F matrix corresponding to voxel 18 (entries of 1/2 on the diagonal and 1/8 at the neighbor columns). This is an example of how F is configured.
For all other columns in row 18, the values are 0 because there are no neighborhood interactions to consider. Notice that the diagonal of F will always have a value, because it refers to the voxel's own previous value at time t-1.

Figure 8: Image of F, with dimensions Nv x Nv, showing the neighborhood interactions between all voxels.
On a full scale of Nv = 2697 voxels, the F matrix looks like the image above, captured from Matlab. The diagonal is apparent, and the banded lines show the neighbors. There are smaller bands around the diagonal that cannot be seen in the figure, since voxel v generally has v + 1 and v - 1 as neighbors. There are many ways to implement the state-space model. One could use an AR(2) model, in which the state at time t depends not only on t - 1 but also on t - 2. Additionally, we can expand the neighborhood space by increasing the radius of possible neighboring voxels. There are other physiological variables we might be able to model inside the generalized state-space model, but this model serves as a good starting point for the dynamic model derivation; a sketch of the F-matrix construction follows.
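A minimal sketch of how such an F matrix could be assembled as a sparse matrix is shown below, assuming one current component per voxel and a hypothetical array of voxel coordinates; the thesis's own Matlab construction may differ in detail.

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_F(coords, radius):
    """State-transition matrix F of Equation 9: each row has weight A on the voxel
    itself and weight B on each spatial neighbor, normalized so the row sums to 1
    (A = 1/2 and B = 1/8 for the four-neighbor example of voxel 18 above).
    coords : (Nv, 3) voxel coordinates; radius : neighborhood radius defining Z(v)."""
    Nv = coords.shape[0]
    F = lil_matrix((Nv, Nv))
    for v in range(Nv):
        dists = np.linalg.norm(coords - coords[v], axis=1)
        nbrs = np.flatnonzero((dists > 0) & (dists <= radius))    # Z(v)
        # start with A = 1 and B = 1/|Z(v)|, then normalize the row to sum to 1
        row_sum = 1.0 + (1.0 if len(nbrs) else 0.0)
        F[v, v] = 1.0 / row_sum
        if len(nbrs):
            F[v, nbrs] = (1.0 / len(nbrs)) / row_sum
    return F.tocsr()
```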

Kalman Filtering:
In 1960, R.E. Kalman published his famous paper describing a set of mathematical equations that provides an efficient computational (recursive) method of estimating the state of a process. The Kalman filter is a powerful tool because it supports estimation of past, present, and even future states and can model these states without exact knowledge of the system. Since that paper was published, the Kalman filter has been applied to a wide range of filtering and estimation problems, from navigation control to forecasting with atmospheric models. In this paper, we develop a set of Kalman filter equations for the purpose of estimating the location and strength of current sources in the brain. The Kalman filter can be derived directly from Bayes' rule, and a probabilistic approach to deriving the filter equations is provided in the appendix. The Kalman filter is formulated as follows. Given the state-space model described in Equations 6 and 7:
J_t = F J_{t-1} + η_t
Y_t = K J_t + e_t
in which the state noise, η_t, is a white Gaussian noise with covariance matrix Q, and the measurement noise, e_t, is also a white Gaussian noise with covariance matrix R, we would like the estimation algorithm to satisfy the following statistical conditions:

1. The expected value of our estimate is equal to the expected value of the state; i.e., on average, our estimate of the state will equal the true state.
2. Compared with all other estimation algorithms, this algorithm minimizes the expected value of the mean squared error; i.e., on average, the algorithm gives the smallest possible estimation error.
The Kalman filter satisfies these two criteria. The filter estimates the state using feedback: first the filter estimates the value of the state at time t, and then it receives feedback from the measurements (including the noise). The Kalman filter equations fall into two categories: time update (prediction) equations and measurement update (filter) equations. The time update equations use knowledge of the temporal continuity to project the current state and error covariance forward in time, giving the a priori estimates for the next time step. The measurement update equations provide feedback, so that the new measurement is incorporated into the estimate, allowing the a priori estimate to improve to an a posteriori estimate. The sequence of events in Kalman filtering is as follows: start with the initial conditions J_{0|0} and V_{0|0}, where V is the error covariance of the state estimate J. The first step is to predict the next state, J_1. Predicting the next state J_1 means predicting the state at time 1 given the state at time 0, so the actual computation estimates J_{1|0}. Then a new measurement, Y_1, is received and used in a filtering stage to compute the value of the state at time t = 1; the result is represented by J_{1|1}. The process is recursive, yielding J_{2|1} in the next prediction step followed by J_{2|2} at the filtering stage. The iterations continue until J_{T|T} is computed. The Kalman filter equations are presented below:

Prediction (Time Update)
J_{t|t-1} = F J_{t-1|t-1}   (10)
V_{t|t-1} = F V_{t-1|t-1} F' + Q   (11)
Notice how in the time-update Equation 10 a new a priori estimate for the state, J_{t|t-1}, is predicted. The prediction happens through the F matrix, which propagates the previous estimate forward. The state error covariance, V, is also predicted forward from the previous time point to give an a priori estimate V_{t|t-1}.
Filter (Measurement Update)
G_t = V_{t|t-1} K' (K V_{t|t-1} K' + R)⁻¹   (12)
J_{t|t} = J_{t|t-1} + G_t (Y_t - K J_{t|t-1})   (13)
V_{t|t} = (I - G_t K) V_{t|t-1}   (14)
The measurement equations first compute the Kalman gain, G_t. Equation 13 uses the calculated gain to obtain an a posteriori estimate of the state. The term Y_t - K J_{t|t-1} is the residual, which measures the difference between the new incoming measurement and the a priori estimate. The gain is chosen to minimize the a posteriori error covariance, and it determines how much weight to give the residual. Finally, Equation 14 determines the a posteriori error covariance. The Kalman filter can be viewed in the same framework used when working with the least-squares problem. Recall the regularized least-squares solution from Equation 4:
Ĵ = Hμ + ΣR K' (K ΣR K' + λ Σe)⁻¹ [Y - K Hμ]

Recall that Hμ was the mean of the state; in the regularized least-squares solution we typically set this value to 0, implying that at every new time point the a priori mean for the state is 0. Because the Kalman filter is a dynamic process, the a priori mean is instead given by Equation 10. So if we assume that Hμ = F J_{t-1|t-1} = J_{t|t-1}, then we can set up a different cost function to minimize,
f(J) = ||Y_t - KJ||²_{Σe⁻¹} + ||J - F J_{t-1|t-1}||²_{V_{t|t-1}⁻¹}
and arrive at a new estimate:
Ĵ_t = J_{t|t-1} + V_{t|t-1} K' (K V_{t|t-1} K' + Σe)⁻¹ [Y_t - K J_{t|t-1}]
We can identify this solution as the Kalman filter measurement update equation that calculates the a posteriori estimate: J_{t|t-1} is the a priori estimate, Y_t - K J_{t|t-1} is the residual, and V_{t|t-1} K' (K V_{t|t-1} K' + Σe)⁻¹ is the Kalman gain. Thus, using the same framework for the static and dynamic solutions, it is easy to see how each solution differs.
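A compact Python sketch of the prediction and measurement-update recursions in Equations 10-14 is given below, using dense matrices for a small problem; it is an illustration rather than the thesis's Matlab implementation. The predicted quantities are stored because the fixed-interval smoother of the next subsection needs them.

```python
import numpy as np

def kalman_filter(Y, K, F, Q, R, J0, V0):
    """Spatio-temporal Kalman filter, Equations 10-14.
    Y: (Nc, T) measurements, K: (Nc, Nv) lead field, F: (Nv, Nv) state transition,
    Q: (Nv, Nv) state noise covariance, R: (Nc, Nc) measurement noise covariance,
    J0, V0: initial state J_{0|0} and covariance V_{0|0}.
    Returns filtered states/covariances and the one-step predictions."""
    Nv, T = F.shape[0], Y.shape[1]
    J_filt = np.zeros((Nv, T)); V_filt = np.zeros((T, Nv, Nv))
    J_pred = np.zeros((Nv, T)); V_pred = np.zeros((T, Nv, Nv))
    J, V = J0, V0
    for t in range(T):
        # Prediction (time update), Equations 10-11
        Jp = F @ J
        Vp = F @ V @ F.T + Q
        # Measurement update (filter), Equations 12-14
        G = Vp @ K.T @ np.linalg.inv(K @ Vp @ K.T + R)    # Kalman gain
        J = Jp + G @ (Y[:, t] - K @ Jp)                   # a posteriori state
        V = (np.eye(Nv) - G @ K) @ Vp                     # a posteriori covariance
        J_pred[:, t], V_pred[t] = Jp, Vp
        J_filt[:, t], V_filt[t] = J, V
    return J_filt, V_filt, J_pred, V_pred
```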

Fixed Interval (backwards) smoother:
The Kalman filter framework provides an accurate way to estimate a given state J(v, t) at time t. Since this is a recursive algorithm, the estimate at time t has taken all previous time points into consideration; thus, an alternate way to write the estimate is J(v, t | t = 0 ... t). In an MEG experiment, all of the single-trial data are acquired and averaged over the entire epoch, so the complete measurement record is available when the analysis is run. An estimate of a state can therefore be computed using not only the previous time points but future time points as well. This filter is known as the fixed-interval or backwards smoother. It provides an estimate of J(v, t | t = 0 ... T), where T is the number of time points. The backwards smoother can be derived using a statistical framework, which is given in the Appendix. The equations for the backwards smoother are as follows:
Gain (correction term): A_t = V_{t|t} F' V_{t+1|t}⁻¹   (15)
Smoothed estimate: J_{t|T} = J_{t|t} + A_t (J_{t+1|T} - J_{t+1|t})   (16)
V_{t|T} = V_{t|t} + A_t (V_{t+1|T} - V_{t+1|t}) A_t'   (17)
The process is similar to the Kalman filter. The gain, A_t, is calculated and determines the weight placed on the residual J_{t+1|T} - J_{t+1|t}. The smoothed estimate J_{t|T} adjusts the Kalman filter estimate J_{t|t} by adding or subtracting the weighted residual. This filter requires that the error covariances be saved for all time points, which can become a computational issue if Nv grows very large.
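Continuing the sketch above, the backwards pass of Equations 15-17 can be written as follows, reusing the filtered and predicted quantities returned by the kalman_filter sketch (all names come from that sketch, not from the thesis code).

```python
import numpy as np

def fixed_interval_smoother(J_filt, V_filt, J_pred, V_pred, F):
    """Fixed-interval (backwards) smoother, Equations 15-17."""
    Nv, T = J_filt.shape
    J_smooth = J_filt.copy()
    V_smooth = V_filt.copy()
    for t in range(T - 2, -1, -1):                                   # backwards pass
        A = V_filt[t] @ F.T @ np.linalg.inv(V_pred[t + 1])           # Equation 15
        J_smooth[:, t] = J_filt[:, t] + A @ (J_smooth[:, t + 1] - J_pred[:, t + 1])  # Eq. 16
        V_smooth[t] = V_filt[t] + A @ (V_smooth[t + 1] - V_pred[t + 1]) @ A.T        # Eq. 17
    return J_smooth, V_smooth
```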

In this research, we will test three inverse solutions:
Static solution (least squares): J(v, t | t = t)
Dynamic solution (Kalman filter): J(v, t | t = 0 ... t)
Backwards smoother: J(v, t | t = 0 ... T)
Simulated brain signals will be placed into an area of voxels in the brain and, through the forward model, we generate the MEG measurements. Then each of the three inverse solutions will recover the signal, and a mean-squared error will be calculated.

3. Simulations:
3.1 Region of Interest:
In order to test the effectiveness of each inverse solution, we decided to run the models on simulated MEG data. Using proprietary software, we digitized the brain into voxels using a 7.5 mm spacing. Then, we chose a region of interest. The region of interest for this simulation was a section of the left hemisphere of the brain:
Figure 9: Region of interest on the inflated surface
This view of the brain is known as an inflated surface. The brain's cortex is highly convoluted, with folds known as gyral (shallow) and sulcal (deep) regions. The inflated surface places all convolutions on the same plane

and inflates them to the surface. Thus, what is circled in green is a selection of voxels on the gyri and sulci of the brain (see figure below).
Figure 10: Region of interest on the normal surface
The brain imaging software enables us to select a region of interest on the brain and compute a forward model on only those voxels which are in that region of interest. The entire cortex consists of more than 8,000 gray matter voxels that can emanate a magnetic field. By reducing the order of the problem to a region of interest, we can reduce the computational burden on the inverse solutions. There is no loss of information. The region of interest here selects 837 voxels. The forward model calculation automatically creates a lead-field matrix, K, for the region of interest. For this simulation, the number of sources is Nv = 837,

and the number of measurement channels is Nc = 204. Thus, K has dimensions of 204 x 837.
3.2 Brain Signals:
In each of the 837 voxels, we place a simulated brain signal. Two simulated signals were developed. The first signal is a sine wave with a frequency of 10 Hz. We denote this signal as S1 = sin(2π·10t), where t is an array of time points. The signal is depicted below:
Figure 11: Signal S1 to be placed in all voxels to produce JTrue
For this simulation, we have a 'true' array of voxels, which is called JTrue. S1 is placed into each voxel.

The other simulated signal is more physiologically accurate. The signal is a modulated sine wave with a mixture of 10 Hz and 20 Hz frequencies. Additionally, there are bursts of activation followed by silence. Here S2 = sin(2π·10t) + sin(2π·20t). A graph of the signal S2 is provided below.
Figure 12: Signal S2 to be placed in all voxels to produce JTrue.
The JTrue signals for S1 and S2 have a mean of zero. Because of this fact, the least-squares solution will do a great job of signal recovery, since it assumes that at each time point the mean of the estimate is 0. For this reason, we will add a DC offset to the JTrue signals, since the actual signals produced in the brain do not have zero means. Thus JTrue will be JTrue + offset. We will vary the offset from 0 to 20.
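The two simulated signals and the offset can be generated in a few lines; the sampling rate and epoch length below are assumptions (they are not specified here), and the burst/silence envelope described for S2 is not reproduced.

```python
import numpy as np

fs, dur = 1000, 1.0                         # assumed sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)               # array of time points

S1 = np.sin(2 * np.pi * 10 * t)                                  # 10 Hz sine (signal S1)
S2 = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 20 * t)     # 10 + 20 Hz mixture (S2)

offset = 10                                 # DC offset, varied from 0 to 20
Nv = 837
J_true = np.tile(S1 + offset, (Nv, 1))      # (Nv, T) 'true' source matrix JTrue
```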

3.3 Selecting Regularization Parameter λ
If we recall from the inverse methods, the static least-squares solution in an underdetermined dipole model has a regularization parameter λ. This parameter controls the trade-off between the data fit and the prior. Typically, values for lambda are between 0.01 and 1; most values fall within 0.1 to 1. In order to find an approximate value for λ, we varied λ from 0.1 to 1 and computed the mean-squared error between the true signal S1 and the least-squares solution given by Equation 5. The resulting mean-squared error was smallest when λ was equal to 0.5. For the remainder of the results, this will be the constant for λ. The format of the simulation is as follows (a sketch of this pipeline is given after the list):
1. Place signal S1 into all voxels to produce an Nv x T matrix JTrue.
2. Produce an Nc x T measurement matrix, Y, using the forward model.
3. Calculate the inverse solutions:
a. JStatic for the least squares with regularization solution.
b. JKalman for the Kalman filter solution.
c. JSmooth for the fixed-interval smoother.
4. Calculate the mean-squared error (MSE) between each of the three inverse solutions and JTrue.
5. Vary the signal-to-noise ratio (SNR) and offset values.
6. Perform the same analysis for signal S2.
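A hedged sketch of this simulation pipeline, built on the helper functions sketched in earlier sections (static_regularized_estimate, kalman_filter, fixed_interval_smoother), is given below. Those helpers, the identity prior covariances, and the SNR definition (signal power over noise power) are all assumptions of the sketch, not the thesis's actual code.

```python
import numpy as np

def run_simulation(K, J_true, snr, lam, F, Q):
    """Generate noisy measurements from JTrue at a given SNR, run the three
    inverse solutions, and return the MSE of each against JTrue."""
    rng = np.random.default_rng(0)
    Nc, Nv = K.shape
    Y_clean = K @ J_true                                    # noiseless measurements
    noise_var = Y_clean.var() / snr                         # assumed SNR definition
    Y = Y_clean + rng.normal(0.0, np.sqrt(noise_var), Y_clean.shape)

    J_static = static_regularized_estimate(Y, K, np.eye(Nv), np.eye(Nc), lam)
    J_filt, V_filt, J_pred, V_pred = kalman_filter(
        Y, K, F, Q, noise_var * np.eye(Nc), np.zeros(Nv), np.eye(Nv))
    J_smooth, _ = fixed_interval_smoother(J_filt, V_filt, J_pred, V_pred, F)

    mse = lambda J_hat: float(np.mean((J_hat - J_true) ** 2))
    return {"static": mse(J_static), "kalman": mse(J_filt), "smoother": mse(J_smooth)}
```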

4. Results
The first set of experiments focused on changing the offset value in order to determine what effect a non-zero mean has on the recovered signals. For signal S1, the recovery for all solutions is shown in the following figure:
Figure 13: Inverse solution recovery for signal S1 at one voxel. Each plot shows a recovered signal for a given DC offset value.
It is clear that as the offset increases, the dynamic solutions (Kalman and smoother) can track and estimate the true signal better. The static solution is very noisy and fluctuates rapidly because of the prior assumption that the mean is 0.

The static solution does perform very well when there is no offset. As the offset gradually increases, the static solution fluctuates rapidly. The dynamic solutions do a much better job of tracking the signal, despite the offset. This occurs because the dynamic mean allows temporal continuity. For signal S2, a similar result occurs:
Figure 14: Inverse solution recovery for signal S2 at one voxel. The results are similar to S1; tracking improves using a dynamic inverse solution.

The SNR for the above simulations was 5, an experimentally realistic SNR for MEG data. The MSE for each signal under the different offset values was calculated at SNR 5. The results are shown below.
Figure 15: MSE versus offset for signal S2 (Static, Kalman, and Smoother solutions). The MSE for the static solution increases significantly as the offset is increased.
The results above show that the non-zero mean has a significant effect on recovery performance. This is an important point because true brain signals do not always have zero mean. Recovery for the static solution gets increasingly difficult as the mean departs from zero. The Kalman filter is significantly better than the static solution, and the smoother slightly improves on the Kalman estimates. The SNR was then varied from 1 to 10, assuming a constant offset of 0. A graph of the effect of SNR on the recovered signal is shown below:

Figure 16: MSE versus SNR with 0 offset, for signal S2 (Static, Kalman, and Smoother solutions). The inverse solutions follow a downward trend as SNR is increased.
At a high SNR (10), the static solution recovery is very similar to that of the dynamic solutions. MEG data are very noisy; a typical SNR for measurement data is around 5. At this SNR, the recovery still favors the dynamic solutions. If we assume an offset of 10, then the difference between the static and dynamic solutions is more significant. See the figure below.

Figure 17: MSE versus SNR with an offset of 10, for signal S2 (Static, Kalman, and Smoother solutions), demonstrating the importance of the dynamic solutions.

5. Conclusions
The purpose of this report is to serve as a proof of principle for future research. The preliminary research here provides two important findings. First, a dynamic solution to the inverse problem using spatio-temporal Kalman filtering results in a better overall recovery of the states. In neurophysiology, brain signals can be difficult to model, and the underlying processes are spontaneous and fluctuating. In order to simulate this environment, an offset was added to our simulated signals. The offset is an important variable in the inverse solutions because it causes the static solution to fluctuate constantly around zero in its attempt to find a solution. Second, the fixed-interval smoother allows for more detailed signal recovery. It is not certain whether the small improvement in performance can be clinically justified, but for the purpose of this report it should be investigated further. In order to fully assess the benefits of dynamic inverse solutions, an analysis must be conducted on real data. Real data will provide the Y vector of measurements, and we can then run the inverse solutions on the measurement data. This is the aim of our future research.

6. References
Galka, A., Yamashita, O., Ozaki, T., Biscay, R., Valdés-Sosa, P.A. (2004). A solution to the dynamical inverse problem of EEG generation using spatiotemporal Kalman filtering. NeuroImage 23.
Mosher, J.C., Leahy, R.M., Lewis, P.S. (1999). EEG and MEG: forward solutions for inverse methods. IEEE Transactions on Biomedical Engineering 46(3).
Hamalainen, M., et al. (1993). Magnetoencephalography - theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics 65.
Hari, R., and Salmelin, R. (1997). Human cortical oscillations: a neuromagnetic view through the skull. Trends in Neurosciences 20: 44-49.
Williamson, S.J., and Kaufman, L. (1987). Analysis of neuromagnetic signals. In A.S. Gevins and A. Remond (Eds.), Handbook of Electroencephalography and Clinical Neurophysiology, Volume 1: Method of Analysis of Brain Electrical and Magnetic Signals. New York: Elsevier.
Kitagawa, G., and Gersch, W. (1996). Smoothness Priors Analysis of Time Series. Lecture Notes in Statistics 116. New York: Springer-Verlag.


More information

Spatial-temporal Source Reconstruction for Magnetoencephalography

Spatial-temporal Source Reconstruction for Magnetoencephalography Spatial-temporal Source Reconstruction for Magnetoencephalography Jing Kan Submitted for the degree of Doctor of Philosophy University of York Department of Computer Science March. 2011 Abstract Magnetoencephalography

More information

Exercise 16: Magnetostatics

Exercise 16: Magnetostatics Exercise 16: Magnetostatics Magnetostatics is part of the huge field of electrodynamics, founding on the well-known Maxwell-equations. Time-dependent terms are completely neglected in the computation of

More information

Information Processing for Remote Sensing Ed. Chen CH, World Scientific, New Jersey (1999)

Information Processing for Remote Sensing Ed. Chen CH, World Scientific, New Jersey (1999) Information Processing for Remote Sensing Ed. Chen CH, World Scientific, New Jersey (1999) DISCRIMINATION OF BURIED PLASTIC AND METAL OBJECTS IN SUBSURFACE SOIL D. C. CHIN, DR. R. SRINIVASAN, AND ROBERT

More information

Accuracy Analysis of Charged Particle Trajectory CAE Software

Accuracy Analysis of Charged Particle Trajectory CAE Software www.integratedsoft.com Accuracy Analysis of Charged Particle Trajectory CAE Software Content Executive Summary... 3 Overview of Charged Particle Beam Analysis... 3 Types of Field Distribution... 4 Simulating

More information

MINI-PAPER A Gentle Introduction to the Analysis of Sequential Data

MINI-PAPER A Gentle Introduction to the Analysis of Sequential Data MINI-PAPER by Rong Pan, Ph.D., Assistant Professor of Industrial Engineering, Arizona State University We, applied statisticians and manufacturing engineers, often need to deal with sequential data, which

More information

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu FMA901F: Machine Learning Lecture 3: Linear Models for Regression Cristian Sminchisescu Machine Learning: Frequentist vs. Bayesian In the frequentist setting, we seek a fixed parameter (vector), with value(s)

More information

Knowledge-Based Segmentation of Brain MRI Scans Using the Insight Toolkit

Knowledge-Based Segmentation of Brain MRI Scans Using the Insight Toolkit Knowledge-Based Segmentation of Brain MRI Scans Using the Insight Toolkit John Melonakos 1, Ramsey Al-Hakim 1, James Fallon 2 and Allen Tannenbaum 1 1 Georgia Institute of Technology, Atlanta GA 30332,

More information

Artificial Intelligence for Robotics: A Brief Summary

Artificial Intelligence for Robotics: A Brief Summary Artificial Intelligence for Robotics: A Brief Summary This document provides a summary of the course, Artificial Intelligence for Robotics, and highlights main concepts. Lesson 1: Localization (using Histogram

More information

Source Reconstruction in MEG & EEG

Source Reconstruction in MEG & EEG Source Reconstruction in MEG & EEG ~ From Brain-Waves to Neural Sources ~ Workshop Karolinska Institutet June 16 th 2017 Program for today Intro Overview of a source reconstruction pipeline Overview of

More information

Data Acquisition. Chapter 2

Data Acquisition. Chapter 2 Data Acquisition Chapter 2 1 st step: get data Data Acquisition Usually data gathered by some geophysical device Most surveys are comprised of linear traverses or transects Typically constant data spacing

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

PHYS 202 Notes, Week 8

PHYS 202 Notes, Week 8 PHYS 202 Notes, Week 8 Greg Christian March 8 & 10, 2016 Last updated: 03/10/2016 at 12:30:44 This week we learn about electromagnetic waves and optics. Electromagnetic Waves So far, we ve learned about

More information

[Programming Assignment] (1)

[Programming Assignment] (1) http://crcv.ucf.edu/people/faculty/bagci/ [Programming Assignment] (1) Computer Vision Dr. Ulas Bagci (Fall) 2015 University of Central Florida (UCF) Coding Standard and General Requirements Code for all

More information

Meeting 1 Introduction to Functions. Part 1 Graphing Points on a Plane (REVIEW) Part 2 What is a function?

Meeting 1 Introduction to Functions. Part 1 Graphing Points on a Plane (REVIEW) Part 2 What is a function? Meeting 1 Introduction to Functions Part 1 Graphing Points on a Plane (REVIEW) A plane is a flat, two-dimensional surface. We describe particular locations, or points, on a plane relative to two number

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington A^ ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Section 9. Human Anatomy and Physiology

Section 9. Human Anatomy and Physiology Section 9. Human Anatomy and Physiology 9.1 MR Neuroimaging 9.2 Electroencephalography Overview As stated throughout, electrophysiology is the key tool in current systems neuroscience. However, single-

More information

First-level fmri modeling

First-level fmri modeling First-level fmri modeling Monday, Lecture 3 Jeanette Mumford University of Wisconsin - Madison What do we need to remember from the last lecture? What is the general structure of a t- statistic? How about

More information

Application of Finite Volume Method for Structural Analysis

Application of Finite Volume Method for Structural Analysis Application of Finite Volume Method for Structural Analysis Saeed-Reza Sabbagh-Yazdi and Milad Bayatlou Associate Professor, Civil Engineering Department of KNToosi University of Technology, PostGraduate

More information

Philip E. Plantz. Application Note. SL-AN-08 Revision C. Provided By: Microtrac, Inc. Particle Size Measuring Instrumentation

Philip E. Plantz. Application Note. SL-AN-08 Revision C. Provided By: Microtrac, Inc. Particle Size Measuring Instrumentation A Conceptual, Non-Mathematical Explanation on the Use of Refractive Index in Laser Particle Size Measurement (Understanding the concept of refractive index and Mie Scattering in Microtrac Instruments and

More information

MEG Laboratory Reference Manual for research use

MEG Laboratory Reference Manual for research use MEG Laboratory Reference Manual for research use for Ver. 1R007B Manual version 20040224 Index 1. File... 11 1.1 New... 11 1.2 Open... 11 1.3 Transfer...... 11 1.4 Suspended File List... 12 1.5 Save...

More information

Reconstructing visual experiences from brain activity evoked by natural movies

Reconstructing visual experiences from brain activity evoked by natural movies Reconstructing visual experiences from brain activity evoked by natural movies Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant, Current Biology, 2011 -Yi Gao,

More information

Effects of multi-scale velocity heterogeneities on wave-equation migration Yong Ma and Paul Sava, Center for Wave Phenomena, Colorado School of Mines

Effects of multi-scale velocity heterogeneities on wave-equation migration Yong Ma and Paul Sava, Center for Wave Phenomena, Colorado School of Mines Effects of multi-scale velocity heterogeneities on wave-equation migration Yong Ma and Paul Sava, Center for Wave Phenomena, Colorado School of Mines SUMMARY Velocity models used for wavefield-based seismic

More information

EKF Localization and EKF SLAM incorporating prior information

EKF Localization and EKF SLAM incorporating prior information EKF Localization and EKF SLAM incorporating prior information Final Report ME- Samuel Castaneda ID: 113155 1. Abstract In the context of mobile robotics, before any motion planning or navigation algorithm

More information

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models Prof. Daniel Cremers 4. Probabilistic Graphical Models Directed Models The Bayes Filter (Rep.) (Bayes) (Markov) (Tot. prob.) (Markov) (Markov) 2 Graphical Representation (Rep.) We can describe the overall

More information

1.2 Numerical Solutions of Flow Problems

1.2 Numerical Solutions of Flow Problems 1.2 Numerical Solutions of Flow Problems DIFFERENTIAL EQUATIONS OF MOTION FOR A SIMPLIFIED FLOW PROBLEM Continuity equation for incompressible flow: 0 Momentum (Navier-Stokes) equations for a Newtonian

More information

A Computational Approach To Understanding The Response Properties Of Cells In The Visual System

A Computational Approach To Understanding The Response Properties Of Cells In The Visual System A Computational Approach To Understanding The Response Properties Of Cells In The Visual System David C. Noelle Assistant Professor of Computer Science and Psychology Vanderbilt University March 3, 2004

More information

Development of the Compliant Mooring Line Model for FLOW-3D

Development of the Compliant Mooring Line Model for FLOW-3D Flow Science Report 08-15 Development of the Compliant Mooring Line Model for FLOW-3D Gengsheng Wei Flow Science, Inc. October 2015 1. Introduction Mooring systems are common in offshore structures, ship

More information

Spatial Enhancement Definition

Spatial Enhancement Definition Spatial Enhancement Nickolas Faust The Electro- Optics, Environment, and Materials Laboratory Georgia Tech Research Institute Georgia Institute of Technology Definition Spectral enhancement relies on changing

More information

RECONSTRUCTION AND ENHANCEMENT OF CURRENT DISTRIBUTION ON CURVED SURFACES FROM BIOMAGNETIC FIELDS USING POCS

RECONSTRUCTION AND ENHANCEMENT OF CURRENT DISTRIBUTION ON CURVED SURFACES FROM BIOMAGNETIC FIELDS USING POCS DRAFT: October, 4: File: ramon-et-al pp.4 Page 4 Sheet of 8 CANADIAN APPLIED MATHEMATICS QUARTERLY Volume, Number, Summer RECONSTRUCTION AND ENHANCEMENT OF CURRENT DISTRIBUTION ON CURVED SURFACES FROM

More information

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Algorithms for Inference Fall 2014

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Algorithms for Inference Fall 2014 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms for Inference Fall 2014 1 Course Overview This course is about performing inference in complex

More information

A TEMPORAL FREQUENCY DESCRIPTION OF THE SPATIAL CORRELATION BETWEEN VOXELS IN FMRI DUE TO SPATIAL PROCESSING. Mary C. Kociuba

A TEMPORAL FREQUENCY DESCRIPTION OF THE SPATIAL CORRELATION BETWEEN VOXELS IN FMRI DUE TO SPATIAL PROCESSING. Mary C. Kociuba A TEMPORAL FREQUENCY DESCRIPTION OF THE SPATIAL CORRELATION BETWEEN VOXELS IN FMRI DUE TO SPATIAL PROCESSING by Mary C. Kociuba A Thesis Submitted to the Faculty of the Graduate School, Marquette University,

More information

Extremal Graph Theory: Turán s Theorem

Extremal Graph Theory: Turán s Theorem Bridgewater State University Virtual Commons - Bridgewater State University Honors Program Theses and Projects Undergraduate Honors Program 5-9-07 Extremal Graph Theory: Turán s Theorem Vincent Vascimini

More information

Truncation Errors. Applied Numerical Methods with MATLAB for Engineers and Scientists, 2nd ed., Steven C. Chapra, McGraw Hill, 2008, Ch. 4.

Truncation Errors. Applied Numerical Methods with MATLAB for Engineers and Scientists, 2nd ed., Steven C. Chapra, McGraw Hill, 2008, Ch. 4. Chapter 4: Roundoff and Truncation Errors Applied Numerical Methods with MATLAB for Engineers and Scientists, 2nd ed., Steven C. Chapra, McGraw Hill, 2008, Ch. 4. 1 Outline Errors Accuracy and Precision

More information

This exercise uses one anatomical data set (ANAT1) and two functional data sets (FUNC1 and FUNC2).

This exercise uses one anatomical data set (ANAT1) and two functional data sets (FUNC1 and FUNC2). Exploring Brain Anatomy This week s exercises will let you explore the anatomical organization of the brain to learn some of its basic properties, as well as the location of different structures. The human

More information

Learning internal representations

Learning internal representations CHAPTER 4 Learning internal representations Introduction In the previous chapter, you trained a single-layered perceptron on the problems AND and OR using the delta rule. This architecture was incapable

More information

(x, y, z) m 2. (x, y, z) ...] T. m 2. m = [m 1. m 3. Φ = r T V 1 r + λ 1. m T Wm. m T L T Lm + λ 2. m T Hm + λ 3. t(x, y, z) = m 1

(x, y, z) m 2. (x, y, z) ...] T. m 2. m = [m 1. m 3. Φ = r T V 1 r + λ 1. m T Wm. m T L T Lm + λ 2. m T Hm + λ 3. t(x, y, z) = m 1 Class 1: Joint Geophysical Inversions Wed, December 1, 29 Invert multiple types of data residuals simultaneously Apply soft mutual constraints: empirical, physical, statistical Deal with data in the same

More information

Introduction to Neuroimaging Janaina Mourao-Miranda

Introduction to Neuroimaging Janaina Mourao-Miranda Introduction to Neuroimaging Janaina Mourao-Miranda Neuroimaging techniques have changed the way neuroscientists address questions about functional anatomy, especially in relation to behavior and clinical

More information

MODELING MIXED BOUNDARY PROBLEMS WITH THE COMPLEX VARIABLE BOUNDARY ELEMENT METHOD (CVBEM) USING MATLAB AND MATHEMATICA

MODELING MIXED BOUNDARY PROBLEMS WITH THE COMPLEX VARIABLE BOUNDARY ELEMENT METHOD (CVBEM) USING MATLAB AND MATHEMATICA A. N. Johnson et al., Int. J. Comp. Meth. and Exp. Meas., Vol. 3, No. 3 (2015) 269 278 MODELING MIXED BOUNDARY PROBLEMS WITH THE COMPLEX VARIABLE BOUNDARY ELEMENT METHOD (CVBEM) USING MATLAB AND MATHEMATICA

More information

Calibration of an Orthogonal Cluster of Magnetic Sensors

Calibration of an Orthogonal Cluster of Magnetic Sensors Calibration of an Orthogonal Cluster of Magnetic Sensors by Andrew A. Thompson ARL-TR-4868 July 2009 Approved for public release; distribution is unlimited. NOTICES Disclaimers The findings in this report

More information

Contrast Optimization: A faster and better technique for optimizing on MTF ABSTRACT Keywords: INTRODUCTION THEORY

Contrast Optimization: A faster and better technique for optimizing on MTF ABSTRACT Keywords: INTRODUCTION THEORY Contrast Optimization: A faster and better technique for optimizing on MTF Ken Moore, Erin Elliott, Mark Nicholson, Chris Normanshire, Shawn Gay, Jade Aiona Zemax, LLC ABSTRACT Our new Contrast Optimization

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

A Hybrid Magnetic Field Solver Using a Combined Finite Element/Boundary Element Field Solver

A Hybrid Magnetic Field Solver Using a Combined Finite Element/Boundary Element Field Solver A Hybrid Magnetic Field Solver Using a Combined Finite Element/Boundary Element Field Solver Abstract - The dominant method to solve magnetic field problems is the finite element method. It has been used

More information

Bayesian estimation of optical properties of the human head via 3D structural MRI p.1

Bayesian estimation of optical properties of the human head via 3D structural MRI p.1 Bayesian estimation of optical properties of the human head via 3D structural MRI June 23, 2003 at ECBO 2003 Alex Barnett Courant Institute, New York University Collaborators (NMR Center, Mass. Gen. Hosp.,

More information

Learning in Medical Image Databases. Cristian Sminchisescu. Department of Computer Science. Rutgers University, NJ

Learning in Medical Image Databases. Cristian Sminchisescu. Department of Computer Science. Rutgers University, NJ Learning in Medical Image Databases Cristian Sminchisescu Department of Computer Science Rutgers University, NJ 08854 email: crismin@paul.rutgers.edu December, 998 Abstract In this paper we present several

More information

SPECIAL TECHNIQUES-II

SPECIAL TECHNIQUES-II SPECIAL TECHNIQUES-II Lecture 19: Electromagnetic Theory Professor D. K. Ghosh, Physics Department, I.I.T., Bombay Method of Images for a spherical conductor Example :A dipole near aconducting sphere The

More information

Two-dimensional Totalistic Code 52

Two-dimensional Totalistic Code 52 Two-dimensional Totalistic Code 52 Todd Rowland Senior Research Associate, Wolfram Research, Inc. 100 Trade Center Drive, Champaign, IL The totalistic two-dimensional cellular automaton code 52 is capable

More information

Title. Author(s)Smolka, Bogdan. Issue Date Doc URL. Type. Note. File Information. Ranked-Based Vector Median Filter

Title. Author(s)Smolka, Bogdan. Issue Date Doc URL. Type. Note. File Information. Ranked-Based Vector Median Filter Title Ranked-Based Vector Median Filter Author(s)Smolka, Bogdan Proceedings : APSIPA ASC 2009 : Asia-Pacific Signal Citationand Conference: 254-257 Issue Date 2009-10-04 Doc URL http://hdl.handle.net/2115/39685

More information

Reconstruction of the Vibro-Acoustic Field on the Surface of the Refrigerator Compressor by Using the BEM-Based Acoustic Holography

Reconstruction of the Vibro-Acoustic Field on the Surface of the Refrigerator Compressor by Using the BEM-Based Acoustic Holography Purdue University Purdue e-pubs International Compressor Engineering Conference School of Mechanical Engineering 1998 Reconstruction of the Vibro-Acoustic Field on the Surface of the Refrigerator Compressor

More information

Neuron Selectivity as a Biologically Plausible Alternative to Backpropagation

Neuron Selectivity as a Biologically Plausible Alternative to Backpropagation Neuron Selectivity as a Biologically Plausible Alternative to Backpropagation C.J. Norsigian Department of Bioengineering cnorsigi@eng.ucsd.edu Vishwajith Ramesh Department of Bioengineering vramesh@eng.ucsd.edu

More information

May Brain Products Teaching Material. Introduction to Source Analysis Distributed Source Imaging using LORETA. BrainVision Analyzer 2 Webinar

May Brain Products Teaching Material. Introduction to Source Analysis Distributed Source Imaging using LORETA. BrainVision Analyzer 2 Webinar Introduction to Source Analysis Distributed Source Imaging using LORETA BrainVision Analyzer 2 Webinar Scientific Support Team Brain Products GmbH Webinar Team Presenter BP Moderator BP Tech Support Kidist

More information

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS 130 CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS A mass is defined as a space-occupying lesion seen in more than one projection and it is described by its shapes and margin

More information

1 2 (3 + x 3) x 2 = 1 3 (3 + x 1 2x 3 ) 1. 3 ( 1 x 2) (3 + x(0) 3 ) = 1 2 (3 + 0) = 3. 2 (3 + x(0) 1 2x (0) ( ) = 1 ( 1 x(0) 2 ) = 1 3 ) = 1 3

1 2 (3 + x 3) x 2 = 1 3 (3 + x 1 2x 3 ) 1. 3 ( 1 x 2) (3 + x(0) 3 ) = 1 2 (3 + 0) = 3. 2 (3 + x(0) 1 2x (0) ( ) = 1 ( 1 x(0) 2 ) = 1 3 ) = 1 3 6 Iterative Solvers Lab Objective: Many real-world problems of the form Ax = b have tens of thousands of parameters Solving such systems with Gaussian elimination or matrix factorizations could require

More information

Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension

Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension Claudia Sannelli, Mikio Braun, Michael Tangermann, Klaus-Robert Müller, Machine Learning Laboratory, Dept. Computer

More information

Computation of Three-Dimensional Electromagnetic Fields for an Augmented Reality Environment

Computation of Three-Dimensional Electromagnetic Fields for an Augmented Reality Environment Excerpt from the Proceedings of the COMSOL Conference 2008 Hannover Computation of Three-Dimensional Electromagnetic Fields for an Augmented Reality Environment André Buchau 1 * and Wolfgang M. Rucker

More information

Ohio Tutorials are designed specifically for the Ohio Learning Standards to prepare students for the Ohio State Tests and end-ofcourse

Ohio Tutorials are designed specifically for the Ohio Learning Standards to prepare students for the Ohio State Tests and end-ofcourse Tutorial Outline Ohio Tutorials are designed specifically for the Ohio Learning Standards to prepare students for the Ohio State Tests and end-ofcourse exams. Math Tutorials offer targeted instruction,

More information

RECONSTRUCTION AND ENHANCEMENT OF CURRENT DISTRIBUTION ON CURVED SURFACES FROM BIOMAGNETIC FIELDS USING POCS

RECONSTRUCTION AND ENHANCEMENT OF CURRENT DISTRIBUTION ON CURVED SURFACES FROM BIOMAGNETIC FIELDS USING POCS CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 10, Number 2, Summer 2002 RECONSTRUCTION AND ENHANCEMENT OF CURRENT DISTRIBUTION ON CURVED SURFACES FROM BIOMAGNETIC FIELDS USING POCS Based on a presentation

More information

CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12

CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12 Tool 1: Standards for Mathematical ent: Interpreting Functions CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12 Name of Reviewer School/District Date Name of Curriculum Materials:

More information