A New Algorithm for Autoregression Moving Average Model Parameter Estimation Using Group Method of Data Handling
Annals of Biomedical Engineering, Vol. 29, pp. 92-98, 2001. Printed in the USA. All rights reserved. 0090-6964/2001/29(1)/92/7/$15.00. Copyright 2001 Biomedical Engineering Society.

A New Algorithm for Autoregression Moving Average Model Parameter Estimation Using Group Method of Data Handling

KI H. CHON and SHENG LU

Department of Electrical Engineering and Center for Biomedical Engineering, City College of the City University of New York, New York, NY

(Received 24 February 2000; accepted 2 November 2000)

Abstract: A new algorithm for autoregressive moving average (ARMA) parameter estimation is introduced. The algorithm is based on the group method of data handling (GMDH), first introduced by the Russian cyberneticist A. G. Ivakhnenko for solving high-order regression polynomials. The GMDH is heuristic in nature and self-organizes into a model of optimal complexity without any a priori knowledge about the system's inner workings. We modified the GMDH algorithm to solve for ARMA model parameters. Computer simulations were performed to examine the efficacy of the GMDH, and the GMDH is compared with one of the most accurate and one of the most widely used algorithms: the fast orthogonal search (FOS) and the least-squares method, respectively. The results show that in some cases with noise contamination and incorrect model order assumptions, the GMDH performs better than either the FOS or the least-squares method in providing only the parameters that are associated with the true model terms. Copyright 2001 Biomedical Engineering Society.

Keywords: ARMA model, Group method of data handling, Parameter estimation, Dynamic noise, Additive noise, MA model, AR model, Model order selection.

Address correspondence to Ki H. Chon, City College of NY, Dept. of Electrical Eng., Steinman Hall, Rm. 677, Convent Ave. at 138th Street, NY, NY. Electronic mail: kichon@e .engr.ccny.cuny.edu

INTRODUCTION

The ubiquity of autoregressive moving average (ARMA) models in physiological system identification is owed to their successful demonstration in recent years.3-5,10,11 Their use has been greatly aided by advances in overcoming the holy-grail problem of model order determination during ARMA model parameter estimation.4,10,11 One of the most notable and accurate algorithms introduced to determine model order efficiently is the work of Korenberg.10,11 Korenberg's fast orthogonal search (FOS) algorithm has been shown to be robust in most cases in obtaining correct ARMA model parameters despite incorrect model order selection. This observation remains valid even with significant noise added to the system response.4,10,11 The FOS algorithm, which relies on a sequential search procedure to extract only the significant ARMA model terms, is nevertheless suboptimal: the FOS does not test for the minimum error across all possible subsets of ARMA model candidate functions within the candidate function space. Thus, there are cases when the FOS is not able to identify the true ARMA model parameters correctly even with no noise present. To alleviate some of these shortcomings of the FOS, we introduce a new algorithm that searches over all possible pairings of ARMA model candidate functions (although not a global search of the candidate function space) and thus, in the simulation examples considered, provides accurate estimation of the system parameters. The algorithm we present is a modification of an algorithm introduced by the Russian mathematician and cyberneticist A. G. Ivakhnenko in the mid 1960s. That algorithm was used for estimating higher-order regression polynomials and is called the group method of data handling (GMDH).
The GMDH was designed to construct successively higher-order regression equations at each iteration from the equations of the previous iteration, retaining only those that best approximate the given data set. In this way a high-order regression-type model is evolved, guided only by survival of the fittest. The main idea of the GMDH is to have the algorithm construct a model of optimal complexity based only on the data: a self-organizing model that can be used to solve prediction, identification, control synthesis, and other system problems.6-9 The GMDH algorithm is similar in this respect to the genetic algorithm approach to parameter estimation, in which the true candidate terms are created based on the principle of survival of the fittest. Note that in some genetic algorithms, only the fittest offspring survive based on the rank selection scheme.2,13 The aim of the present study is to show that the GMDH, originally designed for estimating higher-order regression polynomials, can be modified to be one of the better algorithms for ARMA model parameter estimation. To showcase the efficacy of the GMDH, comparisons between the GMDH, the FOS, and the least-squares methods are made using simulation examples. The FOS is chosen since it is one of the most accurate algorithms available for ARMA parameter estimation. Least squares using the Akaike information criterion for model order selection is chosen for comparison since it is still one of the most widely utilized methods for ARMA model parameter estimation, especially in the biomedical engineering community.

METHODS

GMDH Algorithm

In this section we describe a modification of the GMDH algorithm to estimate the parameters of the ARMA model. The GMDH was originally formulated to solve for higher-order regression polynomials, not the difference equations upon which ARMA models are based. Consider a time-invariant autoregressive moving average process of the form

    y(n) = Σ_{i=1}^{P} a(i) y(n-i) + Σ_{j=0}^{Q} b(j) x(n-j),   (1)

where P and Q represent the maximum autoregressive (AR) and moving average (MA) model orders, respectively. Note that Eq. (1) is in a control format rather than in the classical format known to statisticians. The parameters a(i) and b(j) represent the to-be-estimated coefficients of the AR and MA terms, respectively. The basic steps of the newly developed algorithm to calculate ARMA model parameters are as follows.

Step (1). Partition the data, with the first nt rows designated as the training set and the remaining rows as the testing set. From the training set form a matrix of nt observations of ARMA model terms like the one shown below. Note that the first nt rows and the last N-nt rows denote training and testing data, respectively.
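The difference equation of Eq. (1) can be sketched directly as a recursion. The following is a minimal illustration, not the authors' code; the function name and the zero-initial-condition convention are our assumptions:

```python
import numpy as np

def simulate_arma(a, b, x):
    """Simulate y(n) = sum_i a(i) y(n-i) + sum_j b(j) x(n-j), Eq. (1).

    a: AR coefficients a(1..P); b: MA coefficients b(0..Q); x: input signal.
    Samples before n = 0 are treated as zero.
    """
    P, Q, N = len(a), len(b) - 1, len(x)
    y = np.zeros(N)
    for n in range(N):
        for i in range(1, P + 1):        # AR part: feedback of past outputs
            if n - i >= 0:
                y[n] += a[i - 1] * y[n - i]
        for j in range(Q + 1):           # MA part: weighted past inputs
            if n - j >= 0:
                y[n] += b[j] * x[n - j]
    return y
```

For instance, with a = [0.5] and b = [1.0], the impulse response is 1, 0.5, 0.25, ..., as expected for a single AR pole at 0.5.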
The testing data set is only used in step (3) below, to measure the mean-square error of the model identified with the training data set:

    [ y(1)  ]   [ y(0)     y(-1)    ... y(1-P)    x(1)   x(0)     ... x(1-Q)  ]
    [ y(2)  ]   [ y(1)     y(0)     ... y(2-P)    x(2)   x(1)     ... x(2-Q)  ]
    [  ...  ]   [  ...      ...          ...       ...    ...          ...    ]
    [ y(nt) ]   [ y(nt-1)  y(nt-2)  ... y(nt-P)   x(nt)  x(nt-1)  ... x(nt-Q) ]
    [  ...  ]   [  ...      ...          ...       ...    ...          ...    ]
    [ y(N)  ]   [ y(N-1)   y(N-2)   ... y(N-P)    x(N)   x(N-1)   ... x(N-Q)  ]

       Y                         Matrix R = [ R_1  R_2  ...  R_m ]

The first nt rows constitute the training data and the remaining N-nt rows the testing data.

Step (2). Take the m variables in the columns of the matrix R, namely y(n-1), ..., y(n-P), x(n), x(n-1), ..., x(n-Q), two columns at a time, and for each of these m(m-1)/2 combinations find the least-squares regression that best fits the observation vector y. For example, for the first combination of vectors (R_1, R_2) in matrix R, evaluate the least-squares polynomial of the form

    y = A + B R_1 + C R_2 + D R_1^2 + E R_2^2 + F R_1 R_2

at the data points (y(0), y(-1)), (y(1), y(0)), ..., (y(N-1), y(N-2)), and store these N values in the first column of a new array Z. The remaining m(m-1)/2 - 1 columns are constructed in a similar manner. The array Z contains the new variables, some of which replace the original variables. The objective is to retain those z's that best estimate the output vector y and to discard the insignificant variables.

Step (3). To determine which columns of Z (the new variables) replace the old variables in the matrix R, compute the least-squares error d_j,

    d_j^2 = [ (1/(N-nt)) Σ_{i=nt+1}^{N} (y(i) - z_ij)^2 ] / [ (1/(N-nt)) Σ_{i=nt+1}^{N} (y(i) - ȳ)^2 ],   j = 1, 2, ..., m(m-1)/2,   (2)

and order the columns of Z according to increasing least-squares error. Note that z_ij in Eq. (2) represents an element of array Z, and ȳ denotes the mean of the output signal y. For example, in Eq. (3) below, the element z_11 represents the output of a polynomial function evaluated in step (2) at the data points (y(0), y(-2)). One can place a restriction admitting only some prescribed number of new variables to replace the old variables in the matrix R, i.e.,

    d_j <= M,

where M is some prescribed threshold. Thus, if a priori we choose P = 10 and Q = 10 terms and find that only the combinations of the vectors (y(n-1), y(n-3)), (y(n-1), y(n-5)), and (y(n-2), x(n)) meet the least-squares error criterion, then the new variables Z will have the following form:

        [ z(y(0), y(-2))      z(y(0), y(-4))      z(y(-1), x(1))  ]
    Z = [       ...                 ...                 ...       ]  = [ Z_1  Z_2  Z_3 ].   (3)
        [ z(y(N-1), y(N-3))   z(y(N-1), y(N-5))   z(y(N-2), x(N)) ]

Step (4). From step (3) take the smallest of the d_j. If this value of d_j is smaller than that of the previous iteration (in the first iteration we assume this to be true), go back and repeat steps (2) and (3); if it is greater, stop the process. It is our observation, however, that the value of d_j keeps becoming smaller as the number of iterations is increased, and it is our experience that performing more iterations results in a smaller mean-square error at the expense of an increase in the number of model terms. Based on our simulations, we have determined that three iterations are sufficient to obtain accurate parameter estimates. Figure 1 illustrates the iterative process involved in the GMDH algorithm. The layers in Fig. 1 correspond to the number of iterations; for three iterations, we have three layers. Note that the first and second nodes in the first layer of Fig. 1 may represent the term pairs (y(n-1), y(n-2)) and (y(n-1), y(n-3)), respectively, if these terms meet the criteria described in step (3). With increasing iterations, the number of layers increases and the number of terms in the nodes at each layer may increase drastically, provided that the terms pertaining to those nodes meet the criteria in step (3). Note that in the third layer the nodes are ordered according to increasing least-squares error, i.e., node one is more significant than node two, and so on.

Step (5). The GMDH algorithm calls for the use of least squares to find the coefficients of the retained variables.
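Steps (2) and (3) above can be sketched as a single GMDH layer: fit the six-coefficient quadratic polynomial for every pair of columns of R on the training set, then rank the resulting new variables by the normalized testing-set error of Eq. (2). This is an illustrative sketch under our own data layout and naming, not the authors' implementation:

```python
import numpy as np
from itertools import combinations

def gmdh_layer(R_train, y_train, R_test, y_test):
    """One GMDH layer: for each pair of columns of R, fit
    y = A + B*r1 + C*r2 + D*r1^2 + E*r2^2 + F*r1*r2 on the training set
    and rank the pairs by the normalized testing-set error of Eq. (2)."""
    m = R_train.shape[1]
    denom = np.mean((y_test - y_test.mean()) ** 2)   # testing-set output variance
    results = []
    for i, j in combinations(range(m), 2):
        def design(R):
            r1, r2 = R[:, i], R[:, j]
            return np.column_stack(
                [np.ones_like(r1), r1, r2, r1 ** 2, r2 ** 2, r1 * r2])
        coef, *_ = np.linalg.lstsq(design(R_train), y_train, rcond=None)
        z_test = design(R_test) @ coef               # new variable z on the testing set
        d2 = np.mean((y_test - z_test) ** 2) / denom # Eq. (2)
        results.append((d2, (i, j), coef))
    results.sort(key=lambda t: t[0])                 # increasing least-squares error
    return results
```

The sorted list corresponds to the ordered columns of Z; the next layer would be built from the top-ranked z's.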
However, we have made a modification to the original GMDH algorithm at this step. Instead of computing the least squares of all the retained variables (all of the nodes in layer three), we compute the contribution of each node to reducing the mean-square error. If adding a node yields either a negligible decrease or an increase in the mean-square error, that node is dropped from the model. Only the nodes that reduce the mean-square error are retained, and the coefficients pertaining to the retained nodes are estimated using the least-squares algorithm. Thus, the distinct terms from all the retained nodes are included in the ARMA model. The Simulation Results section discusses this step further (see Fig. 2). The distinct advantage of the GMDH is more apparent when the number of variables and the degree of the polynomials are large. For example, with least squares, to find a regression polynomial with 4 variables of degree 4, 70 linear equations with 70 unknowns must be solved. This is not a daunting task with present computing power; however, the equations are often ill-conditioned and consequently cannot be solved. The GMDH polynomials, by contrast, can be obtained by solving far fewer linear systems of order six. The reason the GMDH requires fewer computations is that it retains only the information that is highly correlated with the testing data set. Consequently, the computation is more efficient and the systems of normal equations remain well conditioned. It should be noted that although the GMDH searches over a full combination of candidate terms, there are cases when it will not necessarily provide the correct structural model. This might occur because an output can be accurately described by as few as four ARMA terms while none of these four terms, when paired, produces a small error, as computed according to Eq. (2).
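The modified step (5), retaining a node only if it reduces the mean-square error, can be sketched as a greedy forward pass over the ranked node outputs. This is an illustrative sketch under our own naming, not the authors' code:

```python
import numpy as np

def select_nodes(Z, y):
    """Step (5) modification: walk the columns of Z (node outputs, already
    ordered by increasing error) and keep a node only if including it in the
    least-squares fit reduces the mean-square error; otherwise drop it."""
    kept = []
    prev_mse = np.mean((y - y.mean()) ** 2)          # MSE of the constant model
    for j in range(Z.shape[1]):
        cols = kept + [j]
        X = np.column_stack([np.ones(len(y))] + [Z[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        mse = np.mean((y - X @ coef) ** 2)
        if mse < prev_mse - 1e-12:                   # retain only MSE-reducing nodes
            kept, prev_mse = cols, mse
    return kept, prev_mse
```

The distinct ARMA terms underlying the retained nodes would then be pooled and their coefficients re-estimated by least squares.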
Thus, in this case, these four terms would not be selected as the true terms and consequently one would miss the true model. A simulation example portraying this scenario is presented in Table 3 of the Simulation Results section. Note that the method outlined is also applicable to subclasses of linear ARMA models such as AR and MA models.

FIGURE 1. Three-layer iterative process of the GMDH algorithm. Each layer corresponds to the number of iterations.

SIMULATION RESULTS

In this section we demonstrate the effectiveness of the developed algorithm for estimating the parameters of ARMA models. We compare the GMDH results with parameters obtained via the FOS and least-squares methods. We chose the FOS since, as far as we are aware, it is one of the most accurate algorithms available. Least squares is chosen since it is still one of the most widely used methods in practice. Note that the FOS employs its own robust automatic model order selection,12 whereas the least-squares method relies on model order selection schemes such as the Akaike information criterion (AIC).1 For all simulation examples, we iterate the GMDH algorithm three times, as discussed in the Methods section. In addition, for all simulation examples to follow, the data were sectioned into training and testing data sets, with the testing data used only in step (3) above to measure the mean-square error (MSE) of the model obtained with the training data set.

Effects of Incorrect Model Order Selection

For the first simulation example, the following linear ARMA model was simulated with Gaussian white noise (GWN) as the input x(n), so that the output y(n) contained 1000 data points:

    y(n) = 0.54 y(n-1) + ... y(n-2) + ... y(n-3) + x(n) + 0.43 x(n-1) + ... x(n-2) + ... x(n-3).   (4)

The objective is, based only on the measured input signal x(n) and the output signal y(n), to estimate the parameters of the above equation as accurately as possible. Although the true ARMA model order for this process is three output lags, y(n-1), ..., y(n-3), and three input lags, x(n), x(n-1), ..., x(n-3), we purposely selected an incorrect model order of ten output lags, y(n-1), ..., y(n-10), and ten input lags, x(n), x(n-1), ..., x(n-10), for all three methods.
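The least-squares method used for comparison can be sketched as an ordinary regression of y(n) on the lagged output and input terms. A minimal sketch, with illustrative coefficients in the check below (not the paper's exact simulation values):

```python
import numpy as np

def arma_lstsq(y, x, P, Q):
    """Ordinary least-squares ARMA fit (the comparison method): regress y(n)
    on y(n-1), ..., y(n-P) and x(n), ..., x(n-Q) over n = max(P, Q), ..., N-1."""
    n0 = max(P, Q)
    R = np.array([np.concatenate([y[n - P:n][::-1],      # y(n-1), ..., y(n-P)
                                  x[n - Q:n + 1][::-1]])  # x(n), ..., x(n-Q)
                  for n in range(n0, len(y))])
    theta, *_ = np.linalg.lstsq(R, y[n0:], rcond=None)
    return theta[:P], theta[P:]    # AR coefficients a(1..P), MA coefficients b(0..Q)
```

With noiseless data and the correct orders, the regression recovers the generating coefficients essentially exactly; the difficulties discussed in the text arise only once the order is overestimated or noise is added.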
In real-life settings, the true model order is unknown a priori; thus, the real test of an algorithm is to examine its efficacy with an incorrect model order. The model orders for the least-squares method were determined according to the AIC and were correctly determined to be three output lags and three input lags. Figure 2 shows the plot of the difference between mean-square error values for adjacent nodes versus the lower node number, as a venue for determining model order for the GMDH algorithm. As shown in Fig. 2, we decided to terminate the computation of the GMDH algorithm after six nodes, since this corresponded to the minimum error value concomitant with a sufficient number of nodes to reflect inclusion of most of the candidate terms in the model. Adjacent node 2, although it has a small error value, was not chosen since it resulted in only a few model candidate terms, and the normalized mean-square error would have been greater than with the choice of adjacent node 6. It should be pointed out that it is not crucial to select the exact minimum error value, as the slightly higher error values associated with four and five nodes all result in the same correct ARMA model parameters. The reason we obtain the same model coefficients whether we choose four, five, or six nodes is that, for example, nodes 5 and 6 may contain model terms that are already contained in node 4. Thus, no new model terms beyond those of node 4 are added in nodes 5 and 6, resulting in the same coefficients whether node 4 or node 6 is chosen. Comparison of the results based on the three methods is shown in Table 1. Only the new algorithm (GMDH) and the least squares are able to identify correctly the true parameters in the system model without concomitant estimation of incorrect model terms.

FIGURE 2. Bar graph of the difference between MSE values for adjacent nodes as a function of the lower node numbers.
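AIC-based order selection, as used here for the least-squares method, can be sketched as a grid search minimizing N ln(MSE) + 2k. The exact AIC variant and the parameter count k = P + Q + 1 are our assumptions for illustration:

```python
import numpy as np

def aic_order_select(y, x, max_P, max_Q, fit):
    """Grid-search ARMA orders (P, Q) by the Akaike information criterion,
    AIC = N * ln(residual MSE) + 2 * k, with k = P + Q + 1 parameters.
    `fit(y, x, P, Q)` must return (residual MSE, number of fitted points N)."""
    best = None
    for P in range(1, max_P + 1):
        for Q in range(max_Q + 1):
            mse, N = fit(y, x, P, Q)
            aic = N * np.log(mse) + 2 * (P + Q + 1)
            if best is None or aic < best[0]:
                best = (aic, P, Q)
    return best[1], best[2]
```

The 2k penalty is what stops the criterion from always preferring the largest orders: once extra lags no longer reduce the residual MSE, they only add penalty.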
The least-squares and GMDH models correctly found coefficients of zero for all the spurious model orders, so these are not shown in Table 1. The FOS, however, performs the worst for this simulation, as exemplified by its incorrect parameter estimates and spurious model terms. It should be pointed out that only in certain cases, such as this example, does the FOS fail to identify the parameters correctly; in many instances, the FOS does provide accurate parameter estimates despite inaccurate model order selections.4,10,11
TABLE 1. Comparison of GMDH, FOS, and the least squares with an incorrect model order selection. Model terms: y(n-1), y(n-2), y(n-3), y(n-5), x(n), x(n-1), x(n-2), x(n-3), x(n-5); rows: True values, GMDH, FOS, Least squares.

To further examine the efficacy of the GMDH, consider an ARMA process described by the equation

    y(n) = 0.546 y(n-1) + ... y(n-2) + ... y(n-3) + ... y(n-4) + x(n) + 0.454 x(n-1) + ... x(n-2) + ... x(n-3) + ... x(n-4),   (5)

where the input x(n) is the same input as used in Eq. (4). For all three methods compared, we chose an incorrect model order of AR 10 and MA 10. Table 2 shows the results. We observe that the least squares based on the AIC model order selection was not able to identify the correct parameters, since the AIC resulted in a model order of AR 3, MA 7. The GMDH provided the correct model terms except for the term y(n-4), which in turn resulted in incorrect parameter estimates. The FOS did not fare as well as the GMDH method: it not only missed some of the model terms but also incorrectly identified terms that are not in the true model. The normalized mean-square error (NMSE) values for the GMDH and the FOS are ... and ..., respectively. As shown in Table 2, the GMDH performed the best of the three methods. This example is provided to show that the GMDH does not always provide accurate parameters, even for a simple example such as this. The reason the GMDH does not necessarily provide correct parameter estimation in this example is quite simple: the term y(n-4), when paired with the other candidate terms, produced only a small reduction in error, and thus was rejected, so the true model was missed. Thus, the GMDH is not to be considered a globally optimal search method; an optimal search method would require testing for the minimum error across all possible subsets of candidate functions within the candidate function space.

TABLE 2. Comparison of GMDH, FOS, and the least squares with an incorrect model order selection. Model terms: y(n-1), y(n-2), y(n-3), y(n-4), y(n-6), y(n-7), x(n), x(n-1), x(n-2), x(n-3), x(n-4), x(n-6), x(n-7); rows: True values, GMDH, FOS, Least squares.

ARMA Model with Additive Noise and Incorrect Model Order Selection

To examine the effectiveness of the GMDH in the case of noise contamination, the output signal of Eq. (4) was corrupted with two levels of additive GWN, resulting in signal-to-noise ratios (SNR) of 10 and 0 dB. The SNR was determined by the ratio of the variance of the signal to the variance of the noise. The model order was incorrectly selected to be ten output and ten input lags for all methods compared. This is a challenging and realistic scenario: not only is the model order purposely incorrectly chosen, but the output signal is heavily corrupted by noise. The results of additive noise for the GMDH, the FOS, and the least squares are shown in Table 3. With the SNR of 10 dB, the GMDH correctly provides parameters that are associated with only the true model terms, but this is not the case for the FOS: the FOS missed a model term, y(n-3), and incorrectly picked an additional term, x(n-4). The least-squares approach based on the AIC resulted in a model order selection of AR 7 and MA 7. Since the least-squares method does not select individual model terms, it resulted in estimates of terms and coefficients not included in the actual system of Eq. (4), owing to an overdetermined model order assumption. The NMSE values for all methods are comparable; however, it should be noted that for the least squares the comparable NMSE was obtained with 15 parameters, whereas for both the GMDH and the FOS the NMSE values were obtained with only 7 parameters. More qualitative differences between the methods emerge when the SNR is 0 dB. The GMDH again provided only the correct model terms, as was the case when the SNR was 10 dB. The FOS missed three model terms, y(n-1), y(n-2), and y(n-3), and in addition incorrectly picked three extra model terms, y(n-5), x(n-4), and x(n-5). The AIC model order selection resulted in AR 0 and MA 7, and the resulting coefficients via the least squares are provided in Table 3. With significant noise added, the least squares based on the AIC completely missed the AR terms and then compensated by including more MA terms than necessary. Because the FOS was designed to reduce the NMSE, despite incorrect model terms, the NMSE value obtained for the FOS is comparable to those of the GMDH and the least squares.

TABLE 3. Comparison of GMDH and FOS with additive noise and an incorrect model order selection. Model terms: y(n-1), y(n-2), y(n-3), y(n-5), x(n), x(n-1), x(n-2), x(n-3), x(n-4), x(n-5); rows: True values, GMDH (10 dB), FOS (10 dB), Least squares (10 dB), GMDH (0 dB), FOS (0 dB), Least squares (0 dB).

ARMA Model with Missing Terms

The next example considers an ARMA process with missing terms, with 1000 data points generated for the output y(n):

    y(n) = 0.25 y(n-1) + 0.1 y(n-2) + 0.1 y(n-6) + 0.8 x(n) + 0.3 x(n-...) + ... x(n-6).   (6)

Although the true ARMA model order for the process of Eq. (6) is AR 6 and MA 6, we again purposely selected an incorrect model order of ARMA (10, 10) for all three methods. The Akaike information criterion correctly identified a model order of ARMA (6, 6). The FOS, owing to its automatic model order search criterion, also identified only the correct parameters. The GMDH method provided correct parameter estimates for four to nine nodes. Thus, similar to the previous example, it is not imperative that the exact minimum error value be used as the criterion for deciding when to terminate the GMDH algorithm. Despite an incorrect model order assumption, all three methods provided only the correct parameters. This is clearly an impressive result for all of the methods compared, considering that a grossly incorrect model order was chosen a priori.

ARMA Model with Dynamic Noise and Incorrect Model Order Selection

The second simulation discussed above included additive noise: the noise source was added after the output sequence y(n) had been generated.
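Additive noise at a prescribed SNR, defined as above by the ratio of the signal variance to the noise variance, can be generated as follows. A minimal sketch; the scaling convention is inferred from that SNR definition:

```python
import numpy as np

def add_noise_at_snr(y, snr_db, rng=None):
    """Corrupt y with additive GWN scaled so that
    10 * log10(var(y) / var(noise)) equals snr_db."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(len(y))
    target_var = np.var(y) / (10 ** (snr_db / 10))   # required noise variance
    noise = noise * np.sqrt(target_var / np.var(noise))
    return y + noise
```

At 0 dB the noise variance equals the signal variance, which is the heavily corrupted case considered in the text.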
However, noise sources can also be of a dynamic nature. Dynamic noise refers to a feedback process in which the noise source is recursively added to the output, so that the current and future output values depend on the past states of the input and noise signals. Thus, to examine the effect of dynamic noise, consider an ARMA process described by the difference equation

    y(n) = 0.54 y(n-1) + ... y(n-2) + ... y(n-3) + x(n) + 0.43 x(n-1) + ... x(n-2) + ... x(n-3) + e(n) + 0.3 e(n-1).   (7)

Both the input x(n) and the noise component e(n) were generated using 1000 Gaussian white noise data points, independent of each other. The noise component e(n) was scaled to give an SNR of 10.4 dB. We chose an incorrect model order of AR 10 and MA 10 for all three methods compared. Table 4 shows the coefficients estimated via the three methods. It is clear that only the GMDH provided all of the model terms correctly. The least-squares method completely missed the AR terms and included a few extra MA model terms that are not in the true model, owing to the AIC model order selection of AR 0, MA 7. The FOS missed the y(n-3) term and provided two extra terms, x(n-4) and x(n-5). The NMSE values are about 9% for all three methods.

TABLE 4. Comparison of GMDH and FOS with dynamic noise and an incorrect model order selection. Model terms: y(n-1), y(n-2), y(n-3), x(n), x(n-1), x(n-2), x(n-3), x(n-4), x(n-5), x(n-6), x(n-7); rows: True values, GMDH, FOS, Least squares.
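A dynamic-noise process of the form of Eq. (7) can be simulated by letting the noise enter the recursion itself, so that past noise states feed back into future outputs. A sketch with placeholder coefficients (not the paper's exact values):

```python
import numpy as np

def simulate_dynamic_noise(a, b, x, e, c1=0.3):
    """ARMA process with dynamic noise, cf. Eq. (7): e(n) + c1*e(n-1) is added
    inside the recursion, so the noise propagates through the AR feedback.
    a: AR coefficients a(1..P); b: MA coefficients b(0..Q); e: GWN noise."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i, ai in enumerate(a, start=1):          # AR feedback
            if n - i >= 0:
                y[n] += ai * y[n - i]
        for j, bj in enumerate(b):                   # MA input terms
            if n - j >= 0:
                y[n] += bj * x[n - j]
        y[n] += e[n] + (c1 * e[n - 1] if n >= 1 else 0.0)   # dynamic noise
    return y
```

With e set to zero this reduces to the plain ARMA recursion of Eq. (1); with a nonzero e, the AR feedback makes every later output depend on earlier noise samples, which is what distinguishes dynamic from additive noise.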
CONCLUSIONS

We presented a new algorithm, the group method of data handling, for estimating the parameters of ARMA models. The algorithm can be extended to estimate subclasses of linear ARMA models, such as AR and MA models, as well as nonlinear ARMA models. In addition, the algorithm can be extended to model the prediction error terms, which would significantly improve the model prediction. Various simulation examples have shown the efficacy of the GMDH method. In certain cases of noiseless systems, the least squares based on the AIC model order selection is superior to the FOS. However, when the system output is confronted with noise, either additive or dynamic, the GMDH method performed best, followed by the FOS and the least squares, in providing only the true model terms. If one is interested in obtaining the smallest mean-square error with the fastest computation time, then the simulation examples suggest that the FOS would be the ideal method to use. However, if obtaining the most accurate representation of the true model is the necessity, then the GMDH method should be the algorithm of choice, at least for the simulation examples considered above. As has been reported in the literature, the least-squares method based on the AIC model selection can work well if the system is noise free. However, the accuracy of the least-squares approach to ARMA model parameter estimation degrades considerably when the system output is corrupted with any type of noise source. For the simulation examples considered, the GMDH, unlike the other two methods compared, provides accurate model terms even in cases with significant noise perturbation as well as incorrect model order selection.

ACKNOWLEDGMENT

This work was supported by the Whitaker Foundation.

REFERENCES

1. Akaike, H. Power spectrum estimation through autoregressive model fitting. Ann. Inst. Stat. Math. 21.
2. Baker, J. E. Adaptive selection methods for genetic algorithms. In: Proc. First International Conf. on Genetic Algorithms and Their Applications, edited by J. J. Grefenstette. Hillsdale, NJ: Erlbaum.
3. Chon, K. H., and R. J. Cohen. Linear and nonlinear ARMA model parameter estimation using artificial neural networks. IEEE Trans. Biomed. Eng. 44.
4. Chon, K. H., M. J. Korenberg, and N. H. Holstein-Rathlou. Application of fast orthogonal search to linear and nonlinear stochastic systems. Ann. Biomed. Eng. 25.
5. Chon, K. H., R. J. Cohen, and N. H. Holstein-Rathlou. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using Laguerre functions. Ann. Biomed. Eng. 25.
6. Farlow, S. J. The GMDH algorithm of Ivakhnenko. Am. Stat. 35.
7. Ivakhnenko, A. G. Polynomial theory of complex systems. IEEE Trans. Syst. Man Cybern. 1.
8. Ivakhnenko, A. G. Group method of data handling: a rival of the method of stochastic approximation. Sov. Automatic Control 13:43-71.
9. Ivakhnenko, A. G., G. A. Ivakhnenko, and J. A. Muller. Self-organization of neuronets with active neurons. Pattern Recognition and Image Analysis 4.
10. Korenberg, M. J. Fast orthogonal identification of nonlinear difference equation and functional expansion models. Proc. Midwest Symp. Circuits Syst. 1.
11. Korenberg, M. J. A robust orthogonal algorithm for system identification and time series analysis. Biol. Cybern. 60.
12. Korenberg, M. J. Fast orthogonal algorithms for nonlinear system identification and time-series analysis. In: Advanced Methods of Physiological System Modeling, edited by V. Z. Marmarelis. Los Angeles: Biomedical Simulation Resource, 1989, Vol. 2.
13. Mitchell, M. An Introduction to Genetic Algorithms. Cambridge, MA: MIT Press, 1998.
More informationAn ICA based Approach for Complex Color Scene Text Binarization
An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in
More informationMotion Estimation for Video Coding Standards
Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression
More informationTRACKING PERFORMANCE OF THE MMAX CONJUGATE GRADIENT ALGORITHM. Bei Xie and Tamal Bose
Proceedings of the SDR 11 Technical Conference and Product Exposition, Copyright 211 Wireless Innovation Forum All Rights Reserved TRACKING PERFORMANCE OF THE MMAX CONJUGATE GRADIENT ALGORITHM Bei Xie
More informationInvariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction
Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction Stefan Müller, Gerhard Rigoll, Andreas Kosmala and Denis Mazurenok Department of Computer Science, Faculty of
More informationAutomatic basis selection for RBF networks using Stein s unbiased risk estimator
Automatic basis selection for RBF networks using Stein s unbiased risk estimator Ali Ghodsi School of omputer Science University of Waterloo University Avenue West NL G anada Email: aghodsib@cs.uwaterloo.ca
More informationTime Complexity Analysis of the Genetic Algorithm Clustering Method
Time Complexity Analysis of the Genetic Algorithm Clustering Method Z. M. NOPIAH, M. I. KHAIRIR, S. ABDULLAH, M. N. BAHARIN, and A. ARIFIN Department of Mechanical and Materials Engineering Universiti
More informationAdvance Convergence Characteristic Based on Recycling Buffer Structure in Adaptive Transversal Filter
Advance Convergence Characteristic ased on Recycling uffer Structure in Adaptive Transversal Filter Gwang Jun Kim, Chang Soo Jang, Chan o Yoon, Seung Jin Jang and Jin Woo Lee Department of Computer Engineering,
More informationOptimized energy aware scheduling to minimize makespan in distributed systems.
Biomedical Research 2017; 28 (7): 2877-2883 ISSN 0970-938X www.biomedres.info Optimized aware scheduling to minimize makespan in distributed systems. Rajkumar K 1*, Swaminathan P 2 1 Department of Computer
More informationLearning Adaptive Parameters with Restricted Genetic Optimization Method
Learning Adaptive Parameters with Restricted Genetic Optimization Method Santiago Garrido and Luis Moreno Universidad Carlos III de Madrid, Leganés 28911, Madrid (Spain) Abstract. Mechanisms for adapting
More informationA Non-Iterative Approach to Frequency Estimation of a Complex Exponential in Noise by Interpolation of Fourier Coefficients
A on-iterative Approach to Frequency Estimation of a Complex Exponential in oise by Interpolation of Fourier Coefficients Shahab Faiz Minhas* School of Electrical and Electronics Engineering University
More information[spa-temp.inf] Spatial-temporal information
[spa-temp.inf] Spatial-temporal information VI Table of Contents for Spatial-temporal information I. Spatial-temporal information........................................... VI - 1 A. Cohort-survival method.........................................
More informationNoise-based Feature Perturbation as a Selection Method for Microarray Data
Noise-based Feature Perturbation as a Selection Method for Microarray Data Li Chen 1, Dmitry B. Goldgof 1, Lawrence O. Hall 1, and Steven A. Eschrich 2 1 Department of Computer Science and Engineering
More informationInformation Criteria Methods in SAS for Multiple Linear Regression Models
Paper SA5 Information Criteria Methods in SAS for Multiple Linear Regression Models Dennis J. Beal, Science Applications International Corporation, Oak Ridge, TN ABSTRACT SAS 9.1 calculates Akaike s Information
More informationA Parallel Hardware Architecture for Information-Theoretic Adaptive Filtering
A Parallel Hardware Architecture for Information-Theoretic Adaptive Filtering HPRCTA 2010 Stefan Craciun Dr. Alan D. George Dr. Herman Lam Dr. Jose C. Principe November 14, 2010 NSF CHREC Center ECE Department,
More informationFractional Discrimination for Texture Image Segmentation
Fractional Discrimination for Texture Image Segmentation Author You, Jia, Sattar, Abdul Published 1997 Conference Title IEEE 1997 International Conference on Image Processing, Proceedings of the DOI https://doi.org/10.1109/icip.1997.647743
More informationExpectation and Maximization Algorithm for Estimating Parameters of a Simple Partial Erasure Model
608 IEEE TRANSACTIONS ON MAGNETICS, VOL. 39, NO. 1, JANUARY 2003 Expectation and Maximization Algorithm for Estimating Parameters of a Simple Partial Erasure Model Tsai-Sheng Kao and Mu-Huo Cheng Abstract
More informationAI Technology for Quickly Solving Urban Security Positioning Problems
AI Technology for Quickly Solving Urban Security Positioning Problems Hiroaki Iwashita Kotaro Ohori Hirokazu Anai Security games are used for mathematically optimizing security measures aimed at minimizing
More informationAdaptive Wavelet Image Denoising Based on the Entropy of Homogenus Regions
International Journal of Electrical and Electronic Science 206; 3(4): 9-25 http://www.aascit.org/journal/ijees ISSN: 2375-2998 Adaptive Wavelet Image Denoising Based on the Entropy of Homogenus Regions
More informationISSN (Online), Volume 1, Special Issue 2(ICITET 15), March 2015 International Journal of Innovative Trends and Emerging Technologies
VLSI IMPLEMENTATION OF HIGH PERFORMANCE DISTRIBUTED ARITHMETIC (DA) BASED ADAPTIVE FILTER WITH FAST CONVERGENCE FACTOR G. PARTHIBAN 1, P.SATHIYA 2 PG Student, VLSI Design, Department of ECE, Surya Group
More informationLinear Methods for Regression and Shrinkage Methods
Linear Methods for Regression and Shrinkage Methods Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, J. Friedman, Springer 1 Linear Regression Models Least Squares Input vectors
More informationEnsembles of Neural Networks for Forecasting of Time Series of Spacecraft Telemetry
ISSN 1060-992X, Optical Memory and Neural Networks, 2017, Vol. 26, No. 1, pp. 47 54. Allerton Press, Inc., 2017. Ensembles of Neural Networks for Forecasting of Time Series of Spacecraft Telemetry E. E.
More informationRedundancy Resolution by Minimization of Joint Disturbance Torque for Independent Joint Controlled Kinematically Redundant Manipulators
56 ICASE :The Institute ofcontrol,automation and Systems Engineering,KOREA Vol.,No.1,March,000 Redundancy Resolution by Minimization of Joint Disturbance Torque for Independent Joint Controlled Kinematically
More informationRegression on SAT Scores of 374 High Schools and K-means on Clustering Schools
Regression on SAT Scores of 374 High Schools and K-means on Clustering Schools Abstract In this project, we study 374 public high schools in New York City. The project seeks to use regression techniques
More informationSimultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Network and Fuzzy Simulation
.--- Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Networ and Fuzzy Simulation Abstract - - - - Keywords: Many optimization problems contain fuzzy information. Possibility
More informationEfficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 13, NO. 5, SEPTEMBER 2002 1225 Efficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms S. Sathiya Keerthi Abstract This paper
More informationImage denoising in the wavelet domain using Improved Neigh-shrink
Image denoising in the wavelet domain using Improved Neigh-shrink Rahim Kamran 1, Mehdi Nasri, Hossein Nezamabadi-pour 3, Saeid Saryazdi 4 1 Rahimkamran008@gmail.com nasri_me@yahoo.com 3 nezam@uk.ac.ir
More informationSpeech Signal Filters based on Soft Computing Techniques: A Comparison
Speech Signal Filters based on Soft Computing Techniques: A Comparison Sachin Lakra Dept. of Information Technology Manav Rachna College of Engg Faridabad, Haryana, India and R. S., K L University sachinlakra@yahoo.co.in
More informationHigh Speed Pipelined Architecture for Adaptive Median Filter
Abstract High Speed Pipelined Architecture for Adaptive Median Filter D.Dhanasekaran, and **Dr.K.Boopathy Bagan *Assistant Professor, SVCE, Pennalur,Sriperumbudur-602105. **Professor, Madras Institute
More informationIN RECENT years, neural network techniques have been recognized
IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES, VOL. 56, NO. 4, APRIL 2008 867 Neural Network Inverse Modeling and Applications to Microwave Filter Design Humayun Kabir, Student Member, IEEE, Ying
More informationImage Denoising Based on Hybrid Fourier and Neighborhood Wavelet Coefficients Jun Cheng, Songli Lei
Image Denoising Based on Hybrid Fourier and Neighborhood Wavelet Coefficients Jun Cheng, Songli Lei College of Physical and Information Science, Hunan Normal University, Changsha, China Hunan Art Professional
More informationFeature Selection. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani
Feature Selection CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Dimensionality reduction Feature selection vs. feature extraction Filter univariate
More informationAn Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising
An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising Dr. B. R.VIKRAM M.E.,Ph.D.,MIEEE.,LMISTE, Principal of Vijay Rural Engineering College, NIZAMABAD ( Dt.) G. Chaitanya M.Tech,
More informationAn Improved Measurement Placement Algorithm for Network Observability
IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 16, NO. 4, NOVEMBER 2001 819 An Improved Measurement Placement Algorithm for Network Observability Bei Gou and Ali Abur, Senior Member, IEEE Abstract This paper
More informationPERFORMANCE OF THE DISTRIBUTED KLT AND ITS APPROXIMATE IMPLEMENTATION
20th European Signal Processing Conference EUSIPCO 2012) Bucharest, Romania, August 27-31, 2012 PERFORMANCE OF THE DISTRIBUTED KLT AND ITS APPROXIMATE IMPLEMENTATION Mauricio Lara 1 and Bernard Mulgrew
More informationDIGITAL COLOR RESTORATION OF OLD PAINTINGS. Michalis Pappas and Ioannis Pitas
DIGITAL COLOR RESTORATION OF OLD PAINTINGS Michalis Pappas and Ioannis Pitas Department of Informatics Aristotle University of Thessaloniki, Box 451, GR-54006 Thessaloniki, GREECE phone/fax: +30-31-996304
More informationTHE CLASSICAL method for training a multilayer feedforward
930 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 4, JULY 1999 A Fast U-D Factorization-Based Learning Algorithm with Applications to Nonlinear System Modeling and Identification Youmin Zhang and
More informationFlexibility and Robustness of Hierarchical Fuzzy Signature Structures with Perturbed Input Data
Flexibility and Robustness of Hierarchical Fuzzy Signature Structures with Perturbed Input Data B. Sumudu U. Mendis Department of Computer Science The Australian National University Canberra, ACT 0200,
More informationABASIC principle in nonlinear system modelling is the
78 IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL. 16, NO. 1, JANUARY 2008 NARX-Based Nonlinear System Identification Using Orthogonal Least Squares Basis Hunting S. Chen, X. X. Wang, and C. J. Harris
More informationClassification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University
Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Outline An overview on classification Basics of classification How to choose appropriate
More informationA Neural Network Model Of Insurance Customer Ratings
A Neural Network Model Of Insurance Customer Ratings Jan Jantzen 1 Abstract Given a set of data on customers the engineering problem in this study is to model the data and classify customers
More informationCHAPTER-III WAVELENGTH ROUTING ALGORITHMS
CHAPTER-III WAVELENGTH ROUTING ALGORITHMS Introduction A wavelength routing (WR) algorithm selects a good route and a wavelength to satisfy a connection request so as to improve the network performance.
More informationCHAPTER 4 WAVELET TRANSFORM-GENETIC ALGORITHM DENOISING TECHNIQUE
102 CHAPTER 4 WAVELET TRANSFORM-GENETIC ALGORITHM DENOISING TECHNIQUE 4.1 INTRODUCTION This chapter introduces an effective combination of genetic algorithm and wavelet transform scheme for the denoising
More informationLocally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling
Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Moritz Baecher May 15, 29 1 Introduction Edge-preserving smoothing and super-resolution are classic and important
More informationHigh Information Rate and Efficient Color Barcode Decoding
High Information Rate and Efficient Color Barcode Decoding Homayoun Bagherinia and Roberto Manduchi University of California, Santa Cruz, Santa Cruz, CA 95064, USA {hbagheri,manduchi}@soe.ucsc.edu http://www.ucsc.edu
More informationEmpirical Mode Decomposition Based Denoising by Customized Thresholding
Vol:11, No:5, 17 Empirical Mode Decomposition Based Denoising by Customized Thresholding Wahiba Mohguen, Raïs El hadi Bekka International Science Index, Electronics and Communication Engineering Vol:11,
More informationCOMPARATIVE STUDY OF 5 RECURSIVE SYSTEM IDENTIFICATION METHODS IN EARTHQUAKE ENGINEERING
COMPARATIVE STUDY OF 5 RECURSIVE SYSTEM IDENTIFICATION METHODS IN EARTHQUAKE ENGINEERING FanLang Kong Dept. of Civil Engineering, State University of New York at Buffalo Amherst, NY 14260, U.S.A ABSTRACT
More informationARMA MODEL SELECTION USING PARTICLE SWARM OPTIMIZATION AND AIC CRITERIA. Mark S. Voss a b. and Xin Feng.
Copyright 2002 IFAC 5th Triennial World Congress, Barcelona, Spain ARMA MODEL SELECTION USING PARTICLE SWARM OPTIMIZATION AND AIC CRITERIA Mark S. Voss a b and Xin Feng a Department of Civil and Environmental
More informationIterative Removing Salt and Pepper Noise based on Neighbourhood Information
Iterative Removing Salt and Pepper Noise based on Neighbourhood Information Liu Chun College of Computer Science and Information Technology Daqing Normal University Daqing, China Sun Bishen Twenty-seventh
More informationMINI-PAPER A Gentle Introduction to the Analysis of Sequential Data
MINI-PAPER by Rong Pan, Ph.D., Assistant Professor of Industrial Engineering, Arizona State University We, applied statisticians and manufacturing engineers, often need to deal with sequential data, which
More informationAdaptive Doppler centroid estimation algorithm of airborne SAR
Adaptive Doppler centroid estimation algorithm of airborne SAR Jian Yang 1,2a), Chang Liu 1, and Yanfei Wang 1 1 Institute of Electronics, Chinese Academy of Sciences 19 North Sihuan Road, Haidian, Beijing
More informationNumerical Robustness. The implementation of adaptive filtering algorithms on a digital computer, which inevitably operates using finite word-lengths,
1. Introduction Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. These
More informationEstimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension
Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension Claudia Sannelli, Mikio Braun, Michael Tangermann, Klaus-Robert Müller, Machine Learning Laboratory, Dept. Computer
More informationEvaluation of texture features for image segmentation
RIT Scholar Works Articles 9-14-2001 Evaluation of texture features for image segmentation Navid Serrano Jiebo Luo Andreas Savakis Follow this and additional works at: http://scholarworks.rit.edu/article
More informationCOPY RIGHT. To Secure Your Paper As Per UGC Guidelines We Are Providing A Electronic Bar Code
COPY RIGHT 2018IJIEMR.Personal use of this material is permitted. Permission from IJIEMR must be obtained for all other uses, in any current or future media, including reprinting/republishing this material
More informationLOW-DENSITY PARITY-CHECK (LDPC) codes [1] can
208 IEEE TRANSACTIONS ON MAGNETICS, VOL 42, NO 2, FEBRUARY 2006 Structured LDPC Codes for High-Density Recording: Large Girth and Low Error Floor J Lu and J M F Moura Department of Electrical and Computer
More informationBlur Space Iterative De-blurring
Blur Space Iterative De-blurring RADU CIPRIAN BILCU 1, MEJDI TRIMECHE 2, SAKARI ALENIUS 3, MARKKU VEHVILAINEN 4 1,2,3,4 Multimedia Technologies Laboratory, Nokia Research Center Visiokatu 1, FIN-33720,
More informationImage Quality Assessment Techniques: An Overview
Image Quality Assessment Techniques: An Overview Shruti Sonawane A. M. Deshpande Department of E&TC Department of E&TC TSSM s BSCOER, Pune, TSSM s BSCOER, Pune, Pune University, Maharashtra, India Pune
More informationA Connection between Network Coding and. Convolutional Codes
A Connection between Network Coding and 1 Convolutional Codes Christina Fragouli, Emina Soljanin christina.fragouli@epfl.ch, emina@lucent.com Abstract The min-cut, max-flow theorem states that a source
More informationPrediction Method for Time Series of Imagery Data in Eigen Space
Prediction Method for Time Series of Imagery Data in Eigen Space Validity of the Proposed Prediction Metyhod for Remote Sensing Satellite Imagery Data Kohei Arai Graduate School of Science and Engineering
More informationA robust optimization based approach to the general solution of mp-milp problems
21 st European Symposium on Computer Aided Process Engineering ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) 2011 Elsevier B.V. All rights reserved. A robust optimization based
More informationPredictive Interpolation for Registration
Predictive Interpolation for Registration D.G. Bailey Institute of Information Sciences and Technology, Massey University, Private bag 11222, Palmerston North D.G.Bailey@massey.ac.nz Abstract Predictive
More informationISSN: [Keswani* et al., 7(1): January, 2018] Impact Factor: 4.116
IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY AUTOMATIC TEST CASE GENERATION FOR PERFORMANCE ENHANCEMENT OF SOFTWARE THROUGH GENETIC ALGORITHM AND RANDOM TESTING Bright Keswani,
More informationGENERAL AUTOMATED FLAW DETECTION SCHEME FOR NDE X-RAY IMAGES
GENERAL AUTOMATED FLAW DETECTION SCHEME FOR NDE X-RAY IMAGES Karl W. Ulmer and John P. Basart Center for Nondestructive Evaluation Department of Electrical and Computer Engineering Iowa State University
More informationNonlinear Operations for Colour Images Based on Pairwise Vector Ordering
Nonlinear Operations for Colour Images Based on Pairwise Vector Ordering Adrian N. Evans Department of Electronic and Electrical Engineering University of Bath Bath, BA2 7AY United Kingdom A.N.Evans@bath.ac.uk
More informationGlobally Stabilized 3L Curve Fitting
Globally Stabilized 3L Curve Fitting Turker Sahin and Mustafa Unel Department of Computer Engineering, Gebze Institute of Technology Cayirova Campus 44 Gebze/Kocaeli Turkey {htsahin,munel}@bilmuh.gyte.edu.tr
More informationVIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD. Ertem Tuncel and Levent Onural
VIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD Ertem Tuncel and Levent Onural Electrical and Electronics Engineering Department, Bilkent University, TR-06533, Ankara, Turkey
More informationA NEW METHODOLOGY FOR EMERGENT SYSTEM IDENTIFICATION USING PARTICLE SWARM OPTIMIZATION (PSO) AND THE GROUP METHOD OF DATA HANDLING (GMDH)
A NEW METHODOLOGY FOR EMERGENT SYSTEM IDENTIFICATION USING PARTICLE SWARM OPTIMIZATION (PSO) AND THE GROUP METHOD OF DATA HANDLING (GMDH) Mark S. Voss Dept. of Civil and Environmental Engineering Marquette
More informationSIGNAL COMPRESSION. 9. Lossy image compression: SPIHT and S+P
SIGNAL COMPRESSION 9. Lossy image compression: SPIHT and S+P 9.1 SPIHT embedded coder 9.2 The reversible multiresolution transform S+P 9.3 Error resilience in embedded coding 178 9.1 Embedded Tree-Based
More informationELEC Dr Reji Mathew Electrical Engineering UNSW
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion
More informationArtifacts and Textured Region Detection
Artifacts and Textured Region Detection 1 Vishal Bangard ECE 738 - Spring 2003 I. INTRODUCTION A lot of transformations, when applied to images, lead to the development of various artifacts in them. In
More informationAnalysis of Functional MRI Timeseries Data Using Signal Processing Techniques
Analysis of Functional MRI Timeseries Data Using Signal Processing Techniques Sea Chen Department of Biomedical Engineering Advisors: Dr. Charles A. Bouman and Dr. Mark J. Lowe S. Chen Final Exam October
More informationMulticomponent f-x seismic random noise attenuation via vector autoregressive operators
Multicomponent f-x seismic random noise attenuation via vector autoregressive operators Mostafa Naghizadeh and Mauricio Sacchi ABSTRACT We propose an extension of the traditional frequency-space (f-x)
More informationRate-distortion Optimized Streaming of Compressed Light Fields with Multiple Representations
Rate-distortion Optimized Streaming of Compressed Light Fields with Multiple Representations Prashant Ramanathan and Bernd Girod Department of Electrical Engineering Stanford University Stanford CA 945
More informationMULTI-SCALE STRUCTURAL SIMILARITY FOR IMAGE QUALITY ASSESSMENT. (Invited Paper)
MULTI-SCALE STRUCTURAL SIMILARITY FOR IMAGE QUALITY ASSESSMENT Zhou Wang 1, Eero P. Simoncelli 1 and Alan C. Bovik 2 (Invited Paper) 1 Center for Neural Sci. and Courant Inst. of Math. Sci., New York Univ.,
More informationRevision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems
4 The Open Cybernetics and Systemics Journal, 008,, 4-9 Revision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems K. Kato *, M. Sakawa and H. Katagiri Department of Artificial
More informationSolar Radiation Data Modeling with a Novel Surface Fitting Approach
Solar Radiation Data Modeling with a Novel Surface Fitting Approach F. Onur Hocao glu, Ömer Nezih Gerek, Mehmet Kurban Anadolu University, Dept. of Electrical and Electronics Eng., Eskisehir, Turkey {fohocaoglu,ongerek,mkurban}
More informationESTIMATING THE COST OF ENERGY USAGE IN SPORT CENTRES: A COMPARATIVE MODELLING APPROACH
ESTIMATING THE COST OF ENERGY USAGE IN SPORT CENTRES: A COMPARATIVE MODELLING APPROACH A.H. Boussabaine, R.J. Kirkham and R.G. Grew Construction Cost Engineering Research Group, School of Architecture
More informationEfficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest.
Efficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest. D.A. Karras, S.A. Karkanis and D. E. Maroulis University of Piraeus, Dept.
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More information