Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach


Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach
Budi Yuniarto and Robert Kurniawan
Citation: AIP Conference Proceedings 1827 (2017). Published by the American Institute of Physics.

Modified Multiblock Partial Least Squares Path Modeling Algorithm with Backpropagation Neural Networks Approach

Budi Yuniarto a) and Robert Kurniawan b)

Department of Computational Statistics, Institute of Statistics (STIS), Jakarta

Abstract. PLS Path Modeling (PLS-PM) differs from covariance-based SEM: it uses an approach based on variance, or components, and is therefore also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS Path Modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). The method relies on an iterative procedure. This research modifies MBPLS-PM with a Back-Propagation Neural Network (BPNN) approach: the iterative process in the backward and forward steps that produces the matrix t and the matrix u is replaced by BPNN training. The model parameters obtained with the modified algorithm are not significantly different from those obtained with the original MBPLS-PM algorithm.

Keywords: SEM, Partial Least Squares Path Modeling, Neural Network, PLS, PLS-PM, Multiblock PLS-PM.

INTRODUCTION

Structural Equation Models (SEM) are multivariate regression models (Fox, 2002). While a classical multivariate linear model has only one response variable per equation, in SEM the response variable of one equation can also act as a predictor for other response variables. SEM is often used to analyze causal relationships between latent variables. There are two approaches to SEM: covariance-based SEM and variance- or component-based SEM.
Covariance-based SEM (CB-SEM) was first developed by Joreskog (1982), while variance- or component-based SEM with the partial least squares (PLS) approach was developed by Wold (1979); this approach became known as PLS Path Modeling, or PLS-PM (Martens, 1989). The two methods are complementary rather than competitive (Hair et al., 2014). The PLS method was introduced as a linear regression technique, and the Nonlinear Iterative Partial Least Squares (NIPALS) algorithm (Wold, in Baffi, 1999) is a key element of it. The PLS approach began to be used in path modeling in 1980 (Wold, 1980). Wangen and Kowalski (1988) introduced a multiblock PLS algorithm for

PLS regression. Arteaga et al. (2010) developed a multiblock Partial Least Squares Path Modeling (MBPLS-PM) algorithm, adapting the multiblock PLS regression method. Real-world data often exhibit nonlinear properties (Wold et al., 2001 and Li et al., 2007, in Abdel-Rahman and Lim, 2009), so nonlinear PLS techniques were developed. According to Vinzi et al. (2010b), there are several options for using nonlinear PLS regression in PLS path modeling, among others quadratic forms, smoothing procedures, spline functions, and neural networks. In this paper, we therefore modify the MBPLS-PM algorithm with back-propagation feed-forward neural networks.

OVERVIEW

PLS Path Modeling

Wold (in Ghozali, 2008) developed Partial Least Squares as a general method for estimating path models with multiple-indicator latent constructs. In contrast to covariance-based SEM, PLS-PM does not aim to reproduce the sample covariance matrix. PLS-PM is a soft-modeling approach that does not require stringent assumptions about the distribution, the sample size, or the scale of measurement (Vinzi et al., 2010). PLS-PM is an estimation method based on components (Tenenhaus, 2008). It is an iterative algorithm that independently solves the measurement models and then estimates the structural path coefficients of the model. The PLS approach assumes that all measured variance is useful variance to be explained. Latent variables are estimated as linear combinations of their indicators (manifest variables), which avoids factor indeterminacy and provides an exact definition of the component scores (Wold, 1982). Three categories of parameters are estimated in PLS-PM. The first is the weight estimates used to create the latent variable scores.
The second is the path estimates linking the latent variables. The third is the measurement coefficients between indicators and latent variables. PLS-PM obtains these three sets of parameters (weight estimates, path estimates, and measurement coefficients) in a three-stage iterative process, one stage per set. The first stage is the core of the PLS-PM algorithm, an iterative procedure that generates stable weight estimates. As in SEM, a path model must be specified in PLS-PM, consisting of a structural model and measurement models. Thus, in PLS-PM we have three kinds of relations: the inner model, the outer model, and the weight relation. The inner model (structural model) involves only the latent variables, and the relationships among them can be represented by linear equations. The general equation of the inner model is

ξj = β0j + Σi βji ξi + νj (1)

with

E(ξj | ξi) = β0j + Σi βji ξi (2)

where βji is the path coefficient of latent variable i on latent variable j, νj is the inner residual, β0j is a constant, and the sum runs over the a latent variables that predict ξj. From this specification we have

E(νj) = 0 and cov(νj, ξi) = 0 (3)

which means that the expected value of the inner residual is zero and the residual is uncorrelated with the latent variables. The outer model (measurement model) forms the relation between the blocks of indicators (manifest variables) and the latent variables. There are three ways to represent this relation in the outer model: reflective, formative, and MIMIC (multiple effect indicators for multiple causes). In the reflective outer model, each indicator in the block is a manifestation of the latent variable and is assumed to be a linear function of the latent variable ξj:

xjk = λjk ξj + εjk (4)

where xjk is the k-th indicator variable of the j-th block (latent variable), λjk is the loading of the k-th indicator on the j-th latent variable, and εjk is the outer residual of the k-th indicator in the j-th block. Thus we have

E(xjk | ξj) = λjk ξj (5)

and then

E(εjk) = 0 and cov(εjk, ξj) = 0 (6)

which means the expected value of the residual is zero and the residual is uncorrelated with the latent variable. In the formative outer model, the latent variable is assumed to be a linear function of its indicators:

ξj = Σk ωjk xjk + δj (7)

with

E(ξj | xjk) = Σk ωjk xjk (8)

so that

E(δj | xjk) = 0 (9)

where ωjk is the regression coefficient of the k-th indicator in the j-th block, δj is the residual, and k runs over the pj indicators of the j-th block. The MIMIC outer model is a mixture of the reflective and the formative outer model. Although the outer model describes the relationship between the latent variables and the blocks of indicators, the actual values of the latent variables cannot be observed. Therefore, weight relations must be defined to complete the model. The estimate of a latent variable is defined as

yj = Σk wjk xjk (10)

where wjk are the outer weights.

PLS Path Modeling Algorithm

The PLS path modeling algorithm (Wold, 1980) for estimating the model parameters can be described as follows:

Stage 1: Weight estimation
1) Define the initial outer weights.
2) Estimate the latent variable scores using the outer weights.

3) Re-estimate each latent variable (inner estimate) from the latent variables connected to it,
zj = Σi eji yi
where the eji are inner weights, obtained by one of three schemes:
a. Centroid scheme: eji = sign[cor(yi, yj)]
b. Factor scheme: eji = cor(yi, yj)
c. Path scheme: eji = the regression coefficient of yi in the multiple regression of yj on its predictor latent variables, if yi predicts yj; eji = cor(yi, yj), if yi is predicted by yj
4) Update the outer weights wjk:
Mode A (reflective outer model): wjk = (zjᵀzj)⁻¹ zjᵀxjk
Mode B (formative outer model): wj = (XjᵀXj)⁻¹ Xjᵀzj
5) Check convergence of the outer weights: if they have converged, proceed to the next stage; otherwise return to step 2) with the new outer weights.

Stage 2: Path coefficient estimation
6) Run an OLS regression on the structural model to obtain the path coefficients.

Stage 3: Measurement coefficient estimation
7) Run an OLS regression on the measurement model to obtain the measurement coefficients.

Multiblock PLS Path Modeling

In two-block or multiblock PLS, Xj denotes a block of predictor variables, Y denotes the block of response variables, the latent scores of block Xj are denoted tj, and the latent scores of block Y are denoted u. The multiblock PLS Path Modeling algorithm (Arteaga, 2010) with J blocks can be described as follows, where norm(.) scales a score vector to unit variance:

Step 0. Initialization. Set tj and uj to the first column of Xj, for j from 1 to J.

Step 1. Backward stage. For j from J down to 1:
- If Xj predicts no block: set tj = uj.
- If Xj predicts only one block Xj′: wj = Xjᵀ uj′; tj = norm(Xj wj).
- If Xj predicts more than one block (say h blocks): Uj = [uj1, uj2, ..., ujh]; cuj = Ujᵀ tj; uuj = Uj cuj; wj = Xjᵀ uuj; tj = norm(Xj wj).

Step 2. Forward stage. For j from 1 to J:
- If Xj is predicted by no block: set uj = tj.
- If Xj is predicted by only one block Xj′: cj = Xjᵀ tj′; uj = norm(Xj cj).
- If Xj is predicted by more than one block (h blocks): Tj = [tj1, tj2, ..., tjh]; wtj = Tjᵀ uj; ttj = Tj wtj; cj = Xjᵀ ttj; uj = norm(Xj cj).
Then check the convergence of the uj: if, within the desired precision, the uj have converged, go to Step 3; otherwise return to Step 1.
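For the simplest case of two blocks, where X1 predicts X2, the backward/forward iteration above can be sketched as follows. This is a minimal illustration under that assumption, not the full J-block algorithm; `norm` is the unit-variance scaling written as (.) in the algorithm.

```python
import numpy as np

def norm(v):
    """Scale a score vector to unit variance."""
    return v / v.std()

def mbpls_pm_two_blocks(X1, X2, tol=1e-10, max_iter=500):
    """Backward/forward iteration for two blocks where X1 predicts X2.
    Returns the latent scores t1 and u2 after the u scores converge."""
    u2 = X2[:, 0].copy()          # Step 0: initialize from the first column
    for _ in range(max_iter):
        u2_old = u2
        # Backward stage: X2 predicts no block, so t2 = u2;
        # X1 predicts only X2, so w1 = X1' u2 and t1 = norm(X1 w1)
        w1 = X1.T @ u2
        t1 = norm(X1 @ w1)
        # Forward stage: X1 is predicted by no block, so u1 = t1;
        # X2 is predicted only by X1, so c2 = X2' t1 and u2 = norm(X2 c2)
        c2 = X2.T @ t1
        u2 = norm(X2 @ c2)
        # Convergence check on the u scores
        if np.abs(u2 - u2_old).max() < tol:
            break
    return t1, u2
```

Once the scores stabilize, the path coefficient between the two blocks is obtained by OLS regression of u2 on t1 (Step 3).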

Step 3. Compute the path coefficients using OLS regression.

Feed-Forward Back-Propagation Neural Networks

The basic neural network (NN) model is composed of neurons arranged in layers. In biology, a neuron is a cell that can receive and forward neural signals; in an NN, a neuron is an algorithm implementing a mathematical model inspired by the properties of biological neurons (Fahey, 2003). A feed-forward back-propagation neural network (BPNN) needs many input-target pairs for training, its internal mapping is hard to interpret, and there is no guarantee that the system will generate an acceptable solution (Anderson and McNeill, in Jeatrakul and Wong, 2009). However, BPNN is a robust NN and is easily applied to various problems (Chen et al., in Jeatrakul and Wong, 2009). Figure 1 shows the general architecture of a BPNN. Its performance depends on the number of neurons, the network design, and the learning method.

FIGURE 1. The backpropagation network structure

In the BPNN algorithm, back-propagation is used to update the weights and biases of the network. Weights and biases are updated by a variant of gradient descent, where the gradient is computed by propagating the error backwards from the output layer to the first hidden layer.

MULTIBLOCK PLS-PM WITH BACKPROPAGATION NEURAL NETWORKS APPROACH

The basic multiblock PLS Path Modeling algorithm involves an iterative procedure in both the backward and the forward stage. The basic idea of multiblock PLS path modeling with BPNN in this study is to map the outer relations (measurement model) with back-propagation neural networks instead of the iterative procedure of the original multiblock PLS-PM algorithm.
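As an illustration of the update rule just described, the following is a minimal one-hidden-layer BPNN trained by plain gradient descent. The hyperparameters (hidden size, learning rate, epochs) are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def train_bpnn(X, Y, hidden=4, lr=0.1, epochs=5000, seed=0):
    """One-hidden-layer feed-forward network trained by back-propagation:
    the squared-error gradient is propagated from the output layer back
    to the hidden layer, and weights and biases follow gradient descent."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))
    b2 = np.zeros(Y.shape[1])
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # forward pass: hidden layer
        out = H @ W2 + b2                 # forward pass: linear output
        err = out - Y                     # output-layer error
        dW2 = H.T @ err / n               # gradient for output weights
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H**2)  # back-propagate through tanh
        dW1 = X.T @ dH / n                # gradient for hidden weights
        db1 = dH.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2    # gradient-descent updates
        W1 -= lr * dW1; b1 -= lr * db1
    return W1, b1, W2, b2

def predict_bpnn(X, W1, b1, W2, b2):
    """Forward pass of the trained network."""
    return np.tanh(X @ W1 + b1) @ W2 + b2
```

In the modified algorithm below, a network of this kind replaces the iterative weight computation: it is trained to map score vectors to a block of indicators, and the trained weights play the role of wj or cj.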

The modified multiblock PLS-PM with back-propagation neural networks algorithm that we propose is given below:

a. Initialization. Set tj and uj to the first column of Xj, for j from 1 to J.

b. Backward stage. For j from J down to 1:
- If Xj predicts no block: set tj = uj.
- If Xj predicts only one block Xj′: 1) train a BPNN mapping uj′ to Xj; 2) compute tj = norm(Xj wj), where wj is the result of the training.
- If Xj predicts more than one block (say h blocks): 1) Uj = [uj1, uj2, ..., ujh]; 2) train a BPNN mapping Uj to Xj; 3) compute tj = norm(Xj wj), where wj is the result of the training.

c. Forward stage. For j from 1 to J:
- If Xj is predicted by no block: set uj = tj.
- If Xj is predicted by only one block Xj′: 1) train a BPNN mapping tj′ to Xj; 2) compute uj = norm(Xj cj), where cj is the result of the training.
- If Xj is predicted by more than one block (h blocks): 1) Tj = [tj1, tj2, ..., tjh]; 2) train a BPNN mapping Tj to Xj; 3) compute uj = norm(Xj cj), where cj is the result of the training.

d. Compute the path coefficients and factor loadings using OLS regression.

MODEL EVALUATION

The evaluation of a PLS path model is a two-stage process: assessment of the measurement model and assessment of the structural model. To assess the measurement model, it is important to check whether the indicators are reflective or formative. For reflective indicators, three aspects can be evaluated: the unidimensionality of the indicators, how well the indicators are explained by their latent variables, and the discrimination between latent variables. One tool for evaluating the unidimensionality of a latent variable is Cronbach's alpha, which evaluates how well a block of indicators measures its latent variable. As an alternative, the Dillon-Goldstein (DG) rho can be used to evaluate the unidimensionality of a latent variable.
Evaluating the unidimensionality of the latent variables amounts to assessing composite reliability. Seidel & Back (2009) suggest a minimum DG rho of 0.7 for moderate composite reliability. To assess whether the indicators are explained by their latent variables, communality and the average variance extracted (AVE) can be used. Communality measures how much of the variance of the manifest variables the latent variable explains, while AVE measures the variance that a latent variable captures from its indicators. If the square root of the AVE of each construct is greater than the correlations between that construct and the other constructs in the model, the model has good discriminant validity (Fornell and Larcker, 1981, in Ghozali, 2008).
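The two reliability measures just discussed can be sketched as follows; the AVE formula assumes standardized indicators, and the function names are illustrative.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a block of indicators (rows = observations):
    how well the block's indicators measure a single latent variable."""
    p = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the sum score
    return p / (p - 1) * (1.0 - item_var / total_var)

def ave(loadings):
    """Average variance extracted: with standardized indicators, the
    mean of the squared loadings of the block."""
    l = np.asarray(loadings, dtype=float)
    return float((l ** 2).mean())
```

The Fornell-Larcker discriminant validity check then compares the square root of each construct's AVE with that construct's correlations with the other constructs.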

The inner (structural) model is evaluated by checking the R-squared values of the dependent latent variables and the path coefficients of the structural model. R-squared is the coefficient of determination, interpreted exactly as in regression analysis: the percentage of variance explained by the model. To assess the overall model, we may use the global criterion of goodness of fit (GoF), the geometric mean of the average communality and the average R-squared (Amato et al., 2004).

BOOTSTRAPPING

Because PLS path modeling does not rely on any distributional assumption, classical theory-based significance tests of the parameter estimators cannot be applied. Therefore, resampling procedures such as blindfolding, jackknifing, and bootstrapping have been developed to obtain information about the variability of the parameter estimates. The bootstrap is superior to the other two resampling methods (Temme et al., 2006). In the bootstrap method, random samples are drawn with replacement from the original sample; the bootstrap standard error of the estimator of a parameter θ is the standard deviation of the estimates produced by the generated samples.

TESTING THE ALGORITHM ON A POVERTY MODEL OF EAST JAVA

To test the modified multiblock PLS-PM algorithm with the back-propagation neural networks approach against the original multiblock PLS-PM algorithm, we use the conceptual model of Anuraga (2013) from his research on poverty modeling in East Java using SEM-PLS. The conceptual model is shown in Figure 2. This research uses the 2013 national socio-economic survey (SUSENAS) data from Badan Pusat Statistik (Statistics Indonesia). The data are evaluated with both algorithms and the results compared.

FIGURE 2. Conceptual model of poverty
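The GoF criterion defined in the model evaluation section reduces to one line; the inputs are assumed to be the per-block communalities and the R-squared values of the endogenous blocks.

```python
import numpy as np

def gof(communalities, r_squared):
    """Global goodness of fit: geometric mean of the average communality
    and the average R-squared (Amato et al., 2004)."""
    return float(np.sqrt(np.mean(communalities) * np.mean(r_squared)))
```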

In the conceptual model, there are three relations among the blocks of latent variables: Absolute Poverty (LV1) is modeled with the three other blocks as predictors, Economy (LV2) is modeled with Health (LV4) and Human Resources (LV3) as predictors, and Human Resources is modeled with Health as predictor. The indicator variables for each latent variable are given in Table 1.

TABLE 1. Latent variables and their indicators
1. Absolute poverty: percentage of the poor (Y1); poverty gap index (Y2); poverty severity index (Y3).
2. Economy: percentage of the poor aged 15 and over who are not working (X1); percentage of the poor aged 15 and over who work in the agricultural sector (X2); percentage of households that never bought subsidized rice (X3); percentage of non-food expenditure per capita (X4).
3. Human resources: percentage of the poor aged 15 and over who did not finish primary school (X5); literacy rate of poor people aged years (X6); school participation rate of the poor aged years (X7); mean years of schooling (X8).
4. Health: percentage of poor women who use contraception (X9); percentage of toddlers in poor households whose delivery was assisted by a health worker (X10); percentage of toddlers in poor households who have been immunized (X11); percentage of poor households with floor area per capita 8 msq (X12); percentage of poor households with access to clean water (X13); percentage of poor households with a latrine (their own or shared) (X14); percentage of poor households that receive public health insurance services (X15); life expectancy (X16).

Model Evaluation

Table 2 shows the assessment of the unidimensionality of the latent variables for both algorithms. Cronbach's alpha is below 0.70 for the Economy and Human Resources blocks, which means these blocks are less well qualified; by the DG rho criterion, Economy is the only block below the threshold. TABLE 2.
Assessment of the unidimensionality of the latent variables (Cronbach's alpha, DG rho, first and second eigenvalues) for the blocks Health, Human Resources, Economy, and Absolute Poverty.
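The unidimensionality measures reported in Table 2 can be computed as sketched below; the DG rho formula assumes standardized loadings, and the eigenvalues are those of the block correlation matrix.

```python
import numpy as np

def dillon_goldstein_rho(loadings):
    """Dillon-Goldstein (DG) rho from standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return float(num / (num + (1.0 - l ** 2).sum()))

def block_eigenvalues(X):
    """Eigenvalues of the block correlation matrix, sorted descending;
    a block is roughly unidimensional when the first eigenvalue dominates
    the second (as in the Table 2 check)."""
    R = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]
```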

Fornell and Larcker (1981) state that the average variance extracted (AVE) can be used to measure the reliability of the component scores of the latent variables; the recommended AVE is greater than 0.5. The AVE for both algorithms can be seen in Table 3.

TABLE 3. Average variance extracted (AVE) per block (Health, Human Resources, Economy, Absolute Poverty) for the MBPLS-PM and the modified MBPLS-PM algorithm.

The evaluation of the structural model uses R-squared, which indicates how well the model explains the variance of the data. Table 4 presents the R-squared values of the predictive models for both algorithms.

TABLE 4. R-squared values of the predicted latent variables (Health, Human Resources, Economy, Absolute Poverty) for the MBPLS-PM and the modified MBPLS-PM algorithm.

Referring to Chin (1998), under both algorithms the R-squared of Absolute Poverty given its predictor blocks Health, Human Resources, and Economy is greater than 0.50, so these blocks explain Absolute Poverty moderately; the R-squared of Economy given Health and Human Resources is greater than 0.70, so these blocks explain Economy substantially; and Health explains Human Resources substantially. In Table 5 we compare the path coefficient parameters, where a path coefficient is the regression coefficient of a predicted block on its predictor blocks.

TABLE 5. Path coefficients (Absolute Poverty on Health, Human Resources, and Economy; Economy on Health and Human Resources; Human Resources on Health) for the MBPLS-PM and the modified MBPLS-PM algorithm.
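The path coefficients compared in Table 5 come from an OLS regression of each predicted block's latent score on the scores of its predictor blocks (step d of the algorithm); a minimal sketch:

```python
import numpy as np

def path_coefficients(predictor_scores, predicted_score):
    """OLS regression of a predicted block's latent score on its
    predictors' latent scores; returns the path coefficients."""
    n = len(predicted_score)
    Z = np.column_stack([np.ones(n), predictor_scores])  # add intercept
    beta, *_ = np.linalg.lstsq(Z, predicted_score, rcond=None)
    return beta[1:]   # drop the intercept
```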

Meanwhile, to evaluate the overall model, we use the global criterion of goodness of fit (GoF). The GoF values of the models produced by the MBPLS-PM algorithm and by the modified MBPLS-PM algorithm are both moderate and close to each other.

Bootstrap

The significance of the path coefficients and factor loadings is tested by t-tests using the bootstrap method, with 100 bootstrap samples. The results are shown in Table 6 and indicate that all path coefficients are significant for both algorithms; Table 6 also shows that the bootstrap path coefficients produced by the two algorithms do not differ.

TABLE 6. Mean of the bootstrap path coefficients, t-statistics, and confidence intervals for each predicted block and its predictor blocks, under the MBPLS-PM and the modified MBPLS-PM algorithm.
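The bootstrap procedure behind Table 6 can be sketched as follows for a scalar statistic. The percentile form of the confidence interval is an assumption here (the paper does not state which interval type it uses).

```python
import numpy as np

def bootstrap_test(data, estimator, n_boot=100, seed=0):
    """Bootstrap significance test: resample with replacement, then report
    the mean of the bootstrap estimates, the t-statistic (original estimate
    over bootstrap standard error), and a 95% percentile interval."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([estimator(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    se = stats.std(ddof=1)                 # bootstrap standard error
    t = estimator(data) / se               # t-statistic for H0: theta = 0
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return float(stats.mean()), float(t), (float(lo), float(hi))
```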

Meanwhile, Table 7 shows the significance tests of the factor loadings of the predictor latent variables, and Table 8 those of the predicted latent variables. Tables 7 and 8 show that all factor loadings of all indicators of all latent variables are significant, both for the MBPLS-PM algorithm and for the modified MBPLS-PM algorithm. A negative sign on a factor loading means that the correlation between the latent variable and its indicator is inverse.

TABLE 7. Mean of the bootstrap factor loadings, t-statistics, and confidence intervals for the indicators of the predictor latent variables (Health, Human Resources, Economy), under the MBPLS-PM and the modified MBPLS-PM algorithm.

TABLE 8. Mean of the bootstrap factor loadings, t-statistics, and confidence intervals for the indicators of the predicted latent variables (Human Resources, Economy, Poverty), under the MBPLS-PM and the modified MBPLS-PM algorithm.

CONCLUSION

The MBPLS-PM algorithm can be modified using the back-propagation neural network approach to replace the iterative process in the backward and forward steps that produces the matrix t and the matrix u. The model parameters obtained by the modified MBPLS-PM are relatively similar to those obtained by the original MBPLS-PM algorithm. The GoF of the overall model is moderate for both algorithms and not much different, which means the modified MBPLS-PM algorithm is not better than the original. The modified MBPLS-PM algorithm still requires testing under different data conditions, and perhaps the MBPLS-PM algorithm can be developed with other computational optimization methods.

REFERENCES

1. Statistics Indonesia (BPS) (2013), Survei Sosial Ekonomi Nasional (SUSENAS).
2. Anuraga, G. and Otok, B.W. (2013), Pemodelan Kemiskinan di Jawa Timur dengan Structural Equation Modeling-Partial Least Square [Poverty Modeling in East Java with Structural Equation Modeling-Partial Least Squares], Statistika, Vol. 1, No. 2, November 2013.
3. Arteaga, F., Gallar, M. G., and Gil, I. (2010), A New Multiblock PLS Based Method to Estimate Causal Models: Application to the Post-Consumption Behavior in Tourism, in Handbook of Partial Least Squares: Concepts, Methods and Applications, eds. Vinzi, V. E., Chin, W. W., Henseler, J., and Wang, H., Springer, Berlin.
4. Baffi, G., Martin, E.B., and Morris, A.J. (1999), Non-linear projection to latent structures revisited (the neural network PLS algorithm), Computers & Chemical Engineering, Vol. 23.
5. Broomhead, D.S. and Lowe, D. (1988), Multivariable functional interpolation and adaptive networks, Complex Systems, 2.
6. Chin, W. W. (1998), The partial least squares approach to structural equation modelling, in Modern Methods for Business Research, ed. Marcoulides, G. A., Lawrence Erlbaum Associates, New Jersey.
7. Dijkstra, T. K. (2010), Latent Variables and Indices: Herman Wold's Basic Design and Partial Least Squares, in Handbook of Partial Least Squares: Concepts, Methods and Applications, eds. Vinzi, V. E., Chin, W. W., Henseler, J., and Wang, H., Springer, Berlin.
8. Fahey, C. (2003), Artificial Neural Networks.
9. Fox, J. (2002), Structural Equation Models, appendix to An R and S-PLUS Companion to Applied Regression.
10. Ghozali, I. (2008), Structural Equation Modeling: Metode Alternatif dengan Partial Least Square, Badan Penerbit Universitas Diponegoro, Semarang.
11. Haenlein, M. and Kaplan, A. M. (2004), A Beginner's Guide to Partial Least Squares Analysis, Understanding Statistics, 3(4).
12. Hair, J. F., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. (2014), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), Sage, Thousand Oaks, CA.
13. Haykin, S. (1999), Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall International, New Jersey.
14. Jeatrakul, P. and Wong, K.W. (2009), Comparing the performance of different neural networks for binary classification problems, in Proceedings of the Eighth International Symposium on Natural Language Processing (SNLP '09).
15. Joreskog, K. G. (1973), A general method for estimating a linear structural equation system, in Structural Equation Models in the Social Sciences, eds. Goldberger, A. S. and Duncan, O. D., Academic Press, New York.
16. Joreskog, K. G. and Wold, H. (1982), The ML and PLS techniques for modelling with latent variables: historical and comparative aspects, North-Holland, Amsterdam.
17. Lohmoller, J.B. (1984), LVPLS 1.6 Program Manual: Latent Variable Path Analysis with Partial Least Squares Estimation, Zentralarchiv.
18. Qin, S. J. (1993), A statistical perspective of neural networks for process modelling and control, in Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, Illinois, USA.
19. Qin, S. J. and McAvoy, T. J. (1992), Non-linear PLS modelling using neural networks, Computers and Chemical Engineering, 16.
20. Shao, J. and Tu, D., The Jackknife and Bootstrap, Springer-Verlag, New York.
21. Tenenhaus, M., Vinzi, V.E., Chatelin, Y.M., and Lauro, C. (2005), PLS path modeling, Computational Statistics & Data Analysis, Vol. 48.
22. Vinzi, V. E., Trinchera, L., and Amato, S. (2010a), PLS Path Modeling: From Foundations to Recent Developments and Open Issues for Model Assessment and Improvement, in Handbook of Partial Least Squares: Concepts, Methods and Applications, eds. Vinzi, V. E., Chin, W. W., Henseler, J., and Wang, H., Springer, Berlin.
23. Vinzi, V. E., Russolillo, G., and Trinchera, L. (2010b), A Joint Use of PLS Regression and PLS Path Modelling for a Data Analysis Approach to Latent Variable Modelling.
24. Wangen, L. E. and Kowalski, B. R. (1988), A multiblock partial least squares algorithm for investigating complex chemical systems, Journal of Chemometrics, 3.
25. Wold, H. (1980), Soft modeling: the basic design and some extensions, in Systems Under Indirect Observation, Part II, eds. Joreskog, K. G. and Wold, H. O. A., North-Holland, Amsterdam.
26. Wold, S., Kettaneh-Wold, N., and Skagerberg, B. (1989), Non-linear PLS modelling, Chemometrics and Intelligent Laboratory Systems, 7.
27. Yuniarto, B. (2011), Multiblock Partial Least Squares Path Modeling dengan Pendekatan Radial Basis Function Networks [thesis], Institut Teknologi Sepuluh November, Surabaya.


More information

This leads to our algorithm which is outlined in Section III, along with a tabular summary of it's performance on several benchmarks. The last section

This leads to our algorithm which is outlined in Section III, along with a tabular summary of it's performance on several benchmarks. The last section An Algorithm for Incremental Construction of Feedforward Networks of Threshold Units with Real Valued Inputs Dhananjay S. Phatak Electrical Engineering Department State University of New York, Binghamton,

More information

An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting.

An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. Mohammad Mahmudul Alam Mia, Shovasis Kumar Biswas, Monalisa Chowdhury Urmi, Abubakar

More information

Multiresponse Sparse Regression with Application to Multidimensional Scaling

Multiresponse Sparse Regression with Application to Multidimensional Scaling Multiresponse Sparse Regression with Application to Multidimensional Scaling Timo Similä and Jarkko Tikka Helsinki University of Technology, Laboratory of Computer and Information Science P.O. Box 54,

More information

Study Guide. Module 1. Key Terms

Study Guide. Module 1. Key Terms Study Guide Module 1 Key Terms general linear model dummy variable multiple regression model ANOVA model ANCOVA model confounding variable squared multiple correlation adjusted squared multiple correlation

More information

An Introduction to Growth Curve Analysis using Structural Equation Modeling

An Introduction to Growth Curve Analysis using Structural Equation Modeling An Introduction to Growth Curve Analysis using Structural Equation Modeling James Jaccard New York University 1 Overview Will introduce the basics of growth curve analysis (GCA) and the fundamental questions

More information

Generalized least squares (GLS) estimates of the level-2 coefficients,

Generalized least squares (GLS) estimates of the level-2 coefficients, Contents 1 Conceptual and Statistical Background for Two-Level Models...7 1.1 The general two-level model... 7 1.1.1 Level-1 model... 8 1.1.2 Level-2 model... 8 1.2 Parameter estimation... 9 1.3 Empirical

More information

MS in Applied Statistics: Study Guide for the Data Science concentration Comprehensive Examination. 1. MAT 456 Applied Regression Analysis

MS in Applied Statistics: Study Guide for the Data Science concentration Comprehensive Examination. 1. MAT 456 Applied Regression Analysis MS in Applied Statistics: Study Guide for the Data Science concentration Comprehensive Examination. The Part II comprehensive examination is a three-hour closed-book exam that is offered on the second

More information

Natural Language Processing CS 6320 Lecture 6 Neural Language Models. Instructor: Sanda Harabagiu

Natural Language Processing CS 6320 Lecture 6 Neural Language Models. Instructor: Sanda Harabagiu Natural Language Processing CS 6320 Lecture 6 Neural Language Models Instructor: Sanda Harabagiu In this lecture We shall cover: Deep Neural Models for Natural Language Processing Introduce Feed Forward

More information

Assignment # 5. Farrukh Jabeen Due Date: November 2, Neural Networks: Backpropation

Assignment # 5. Farrukh Jabeen Due Date: November 2, Neural Networks: Backpropation Farrukh Jabeen Due Date: November 2, 2009. Neural Networks: Backpropation Assignment # 5 The "Backpropagation" method is one of the most popular methods of "learning" by a neural network. Read the class

More information

An algorithm for censored quantile regressions. Abstract

An algorithm for censored quantile regressions. Abstract An algorithm for censored quantile regressions Thanasis Stengos University of Guelph Dianqin Wang University of Guelph Abstract In this paper, we present an algorithm for Censored Quantile Regression (CQR)

More information

IMPROVEMENTS TO THE BACKPROPAGATION ALGORITHM

IMPROVEMENTS TO THE BACKPROPAGATION ALGORITHM Annals of the University of Petroşani, Economics, 12(4), 2012, 185-192 185 IMPROVEMENTS TO THE BACKPROPAGATION ALGORITHM MIRCEA PETRINI * ABSTACT: This paper presents some simple techniques to improve

More information

Comparing different interpolation methods on two-dimensional test functions

Comparing different interpolation methods on two-dimensional test functions Comparing different interpolation methods on two-dimensional test functions Thomas Mühlenstädt, Sonja Kuhnt May 28, 2009 Keywords: Interpolation, computer experiment, Kriging, Kernel interpolation, Thin

More information

Edge Detection for Dental X-ray Image Segmentation using Neural Network approach

Edge Detection for Dental X-ray Image Segmentation using Neural Network approach Volume 1, No. 7, September 2012 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Edge Detection

More information

Time Series Analysis by State Space Methods

Time Series Analysis by State Space Methods Time Series Analysis by State Space Methods Second Edition J. Durbin London School of Economics and Political Science and University College London S. J. Koopman Vrije Universiteit Amsterdam OXFORD UNIVERSITY

More information

Analytical model A structure and process for analyzing a dataset. For example, a decision tree is a model for the classification of a dataset.

Analytical model A structure and process for analyzing a dataset. For example, a decision tree is a model for the classification of a dataset. Glossary of data mining terms: Accuracy Accuracy is an important factor in assessing the success of data mining. When applied to data, accuracy refers to the rate of correct values in the data. When applied

More information

A Boosting-Based Framework for Self-Similar and Non-linear Internet Traffic Prediction

A Boosting-Based Framework for Self-Similar and Non-linear Internet Traffic Prediction A Boosting-Based Framework for Self-Similar and Non-linear Internet Traffic Prediction Hanghang Tong 1, Chongrong Li 2, and Jingrui He 1 1 Department of Automation, Tsinghua University, Beijing 100084,

More information

The Use of Biplot Analysis and Euclidean Distance with Procrustes Measure for Outliers Detection

The Use of Biplot Analysis and Euclidean Distance with Procrustes Measure for Outliers Detection Volume-8, Issue-1 February 2018 International Journal of Engineering and Management Research Page Number: 194-200 The Use of Biplot Analysis and Euclidean Distance with Procrustes Measure for Outliers

More information

Linear Methods for Regression and Shrinkage Methods

Linear Methods for Regression and Shrinkage Methods Linear Methods for Regression and Shrinkage Methods Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, J. Friedman, Springer 1 Linear Regression Models Least Squares Input vectors

More information

An Alternative Estimation Procedure For Partial Least Squares Path Modeling

An Alternative Estimation Procedure For Partial Least Squares Path Modeling An Alternative Estimation Procedure For Partial Least Squares Path Modeling Heungsun Hwang, Yoshio Takane, Arthur Tenenhaus To cite this version: Heungsun Hwang, Yoshio Takane, Arthur Tenenhaus. An Alternative

More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

Data Mining. Neural Networks

Data Mining. Neural Networks Data Mining Neural Networks Goals for this Unit Basic understanding of Neural Networks and how they work Ability to use Neural Networks to solve real problems Understand when neural networks may be most

More information

Sequential Estimation in Item Calibration with A Two-Stage Design

Sequential Estimation in Item Calibration with A Two-Stage Design Sequential Estimation in Item Calibration with A Two-Stage Design Yuan-chin Ivan Chang Institute of Statistical Science, Academia Sinica, Taipei, Taiwan In this paper we apply a two-stage sequential design

More information

Generalized Additive Model

Generalized Additive Model Generalized Additive Model by Huimin Liu Department of Mathematics and Statistics University of Minnesota Duluth, Duluth, MN 55812 December 2008 Table of Contents Abstract... 2 Chapter 1 Introduction 1.1

More information

Further Simulation Results on Resampling Confidence Intervals for Empirical Variograms

Further Simulation Results on Resampling Confidence Intervals for Empirical Variograms University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Information Sciences 2010 Further Simulation Results on Resampling Confidence

More information

Seismic regionalization based on an artificial neural network

Seismic regionalization based on an artificial neural network Seismic regionalization based on an artificial neural network *Jaime García-Pérez 1) and René Riaño 2) 1), 2) Instituto de Ingeniería, UNAM, CU, Coyoacán, México D.F., 014510, Mexico 1) jgap@pumas.ii.unam.mx

More information

Predictive Analytics: Demystifying Current and Emerging Methodologies. Tom Kolde, FCAS, MAAA Linda Brobeck, FCAS, MAAA

Predictive Analytics: Demystifying Current and Emerging Methodologies. Tom Kolde, FCAS, MAAA Linda Brobeck, FCAS, MAAA Predictive Analytics: Demystifying Current and Emerging Methodologies Tom Kolde, FCAS, MAAA Linda Brobeck, FCAS, MAAA May 18, 2017 About the Presenters Tom Kolde, FCAS, MAAA Consulting Actuary Chicago,

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION Introduction CHAPTER 1 INTRODUCTION Mplus is a statistical modeling program that provides researchers with a flexible tool to analyze their data. Mplus offers researchers a wide choice of models, estimators,

More information

Preface to the Second Edition. Preface to the First Edition. 1 Introduction 1

Preface to the Second Edition. Preface to the First Edition. 1 Introduction 1 Preface to the Second Edition Preface to the First Edition vii xi 1 Introduction 1 2 Overview of Supervised Learning 9 2.1 Introduction... 9 2.2 Variable Types and Terminology... 9 2.3 Two Simple Approaches

More information

Multi Layer Perceptron trained by Quasi Newton learning rule

Multi Layer Perceptron trained by Quasi Newton learning rule Multi Layer Perceptron trained by Quasi Newton learning rule Feed-forward neural networks provide a general framework for representing nonlinear functional mappings between a set of input variables and

More information

A Data Classification Algorithm of Internet of Things Based on Neural Network

A Data Classification Algorithm of Internet of Things Based on Neural Network A Data Classification Algorithm of Internet of Things Based on Neural Network https://doi.org/10.3991/ijoe.v13i09.7587 Zhenjun Li Hunan Radio and TV University, Hunan, China 278060389@qq.com Abstract To

More information

Applying Supervised Learning

Applying Supervised Learning Applying Supervised Learning When to Consider Supervised Learning A supervised learning algorithm takes a known set of input data (the training set) and known responses to the data (output), and trains

More information

Introduction to Mixed Models: Multivariate Regression

Introduction to Mixed Models: Multivariate Regression Introduction to Mixed Models: Multivariate Regression EPSY 905: Multivariate Analysis Spring 2016 Lecture #9 March 30, 2016 EPSY 905: Multivariate Regression via Path Analysis Today s Lecture Multivariate

More information

ANNOUNCING THE RELEASE OF LISREL VERSION BACKGROUND 2 COMBINING LISREL AND PRELIS FUNCTIONALITY 2 FIML FOR ORDINAL AND CONTINUOUS VARIABLES 3

ANNOUNCING THE RELEASE OF LISREL VERSION BACKGROUND 2 COMBINING LISREL AND PRELIS FUNCTIONALITY 2 FIML FOR ORDINAL AND CONTINUOUS VARIABLES 3 ANNOUNCING THE RELEASE OF LISREL VERSION 9.1 2 BACKGROUND 2 COMBINING LISREL AND PRELIS FUNCTIONALITY 2 FIML FOR ORDINAL AND CONTINUOUS VARIABLES 3 THREE-LEVEL MULTILEVEL GENERALIZED LINEAR MODELS 3 FOUR

More information

Basics of Multivariate Modelling and Data Analysis

Basics of Multivariate Modelling and Data Analysis Basics of Multivariate Modelling and Data Analysis Kurt-Erik Häggblom 9. Linear regression with latent variables 9.1 Principal component regression (PCR) 9.2 Partial least-squares regression (PLS) [ mostly

More information

A fuzzy mathematical model of West Java population with logistic growth model

A fuzzy mathematical model of West Java population with logistic growth model IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS A fuzzy mathematical model of West Java population with logistic growth model To cite this article: N S Nurkholipah et al 2018

More information

Structural Equation Models with Small Samples: A Comparative Study of Four Approaches

Structural Equation Models with Small Samples: A Comparative Study of Four Approaches University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Public Access Theses and Dissertations from the College of Education and Human Sciences Education and Human Sciences, College

More information

Generalized Additive Models

Generalized Additive Models :p Texts in Statistical Science Generalized Additive Models An Introduction with R Simon N. Wood Contents Preface XV 1 Linear Models 1 1.1 A simple linear model 2 Simple least squares estimation 3 1.1.1

More information

Machine Learning. Chao Lan

Machine Learning. Chao Lan Machine Learning Chao Lan Machine Learning Prediction Models Regression Model - linear regression (least square, ridge regression, Lasso) Classification Model - naive Bayes, logistic regression, Gaussian

More information

A Beginner's Guide to. Randall E. Schumacker. The University of Alabama. Richard G. Lomax. The Ohio State University. Routledge

A Beginner's Guide to. Randall E. Schumacker. The University of Alabama. Richard G. Lomax. The Ohio State University. Routledge A Beginner's Guide to Randall E. Schumacker The University of Alabama Richard G. Lomax The Ohio State University Routledge Taylor & Francis Group New York London About the Authors Preface xv xvii 1 Introduction

More information

Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Network and Fuzzy Simulation

Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Network and Fuzzy Simulation .--- Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Networ and Fuzzy Simulation Abstract - - - - Keywords: Many optimization problems contain fuzzy information. Possibility

More information

A Weighted Least Squares PET Image Reconstruction Method Using Iterative Coordinate Descent Algorithms

A Weighted Least Squares PET Image Reconstruction Method Using Iterative Coordinate Descent Algorithms A Weighted Least Squares PET Image Reconstruction Method Using Iterative Coordinate Descent Algorithms Hongqing Zhu, Huazhong Shu, Jian Zhou and Limin Luo Department of Biological Science and Medical Engineering,

More information

Statistical Matching using Fractional Imputation

Statistical Matching using Fractional Imputation Statistical Matching using Fractional Imputation Jae-Kwang Kim 1 Iowa State University 1 Joint work with Emily Berg and Taesung Park 1 Introduction 2 Classical Approaches 3 Proposed method 4 Application:

More information

Data Analysis and Solver Plugins for KSpread USER S MANUAL. Tomasz Maliszewski

Data Analysis and Solver Plugins for KSpread USER S MANUAL. Tomasz Maliszewski Data Analysis and Solver Plugins for KSpread USER S MANUAL Tomasz Maliszewski tmaliszewski@wp.pl Table of Content CHAPTER 1: INTRODUCTION... 3 1.1. ABOUT DATA ANALYSIS PLUGIN... 3 1.3. ABOUT SOLVER PLUGIN...

More information

Bioinformatics - Lecture 07

Bioinformatics - Lecture 07 Bioinformatics - Lecture 07 Bioinformatics Clusters and networks Martin Saturka http://www.bioplexity.org/lectures/ EBI version 0.4 Creative Commons Attribution-Share Alike 2.5 License Learning on profiles

More information

The exam is closed book, closed notes except your one-page cheat sheet.

The exam is closed book, closed notes except your one-page cheat sheet. CS 189 Fall 2015 Introduction to Machine Learning Final Please do not turn over the page before you are instructed to do so. You have 2 hours and 50 minutes. Please write your initials on the top-right

More information

Regression. Dr. G. Bharadwaja Kumar VIT Chennai

Regression. Dr. G. Bharadwaja Kumar VIT Chennai Regression Dr. G. Bharadwaja Kumar VIT Chennai Introduction Statistical models normally specify how one set of variables, called dependent variables, functionally depend on another set of variables, called

More information

Minimum sample size estimation in PLS-SEM: The inverse square root and gamma-exponential methods

Minimum sample size estimation in PLS-SEM: The inverse square root and gamma-exponential methods Minimum sample size estimation in PLS-SEM: The inverse square root and gamma-exponential methods Ned Kock Pierre Hadaya Full reference: Kock, N., & Hadaya, P. (2018). Minimum sample size estimation in

More information

Fuzzy Signature Neural Networks for Classification: Optimising the Structure

Fuzzy Signature Neural Networks for Classification: Optimising the Structure Fuzzy Signature Neural Networks for Classification: Optimising the Structure Tom Gedeon, Xuanying Zhu, Kun He, and Leana Copeland Research School of Computer Science, College of Engineering and Computer

More information

Artificial Neural Network and Multi-Response Optimization in Reliability Measurement Approximation and Redundancy Allocation Problem

Artificial Neural Network and Multi-Response Optimization in Reliability Measurement Approximation and Redundancy Allocation Problem International Journal of Mathematics and Statistics Invention (IJMSI) E-ISSN: 2321 4767 P-ISSN: 2321-4759 Volume 4 Issue 10 December. 2016 PP-29-34 Artificial Neural Network and Multi-Response Optimization

More information

LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS

LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Neural Networks Classifier Introduction INPUT: classification data, i.e. it contains an classification (class) attribute. WE also say that the class

More information

Character Recognition Using Convolutional Neural Networks

Character Recognition Using Convolutional Neural Networks Character Recognition Using Convolutional Neural Networks David Bouchain Seminar Statistical Learning Theory University of Ulm, Germany Institute for Neural Information Processing Winter 2006/2007 Abstract

More information

The comparison of performance by using alternative refrigerant R152a in automobile climate system with different artificial neural network models

The comparison of performance by using alternative refrigerant R152a in automobile climate system with different artificial neural network models Journal of Physics: Conference Series PAPER OPEN ACCESS The comparison of performance by using alternative refrigerant R152a in automobile climate system with different artificial neural network models

More information

Center for Automation and Autonomous Complex Systems. Computer Science Department, Tulane University. New Orleans, LA June 5, 1991.

Center for Automation and Autonomous Complex Systems. Computer Science Department, Tulane University. New Orleans, LA June 5, 1991. Two-phase Backpropagation George M. Georgiou Cris Koutsougeras Center for Automation and Autonomous Complex Systems Computer Science Department, Tulane University New Orleans, LA 70118 June 5, 1991 Abstract

More information

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016 CPSC 340: Machine Learning and Data Mining Principal Component Analysis Fall 2016 A2/Midterm: Admin Grades/solutions will be posted after class. Assignment 4: Posted, due November 14. Extra office hours:

More information

A Network Intrusion Detection System Architecture Based on Snort and. Computational Intelligence

A Network Intrusion Detection System Architecture Based on Snort and. Computational Intelligence 2nd International Conference on Electronics, Network and Computer Engineering (ICENCE 206) A Network Intrusion Detection System Architecture Based on Snort and Computational Intelligence Tao Liu, a, Da

More information

Machine Learning in Biology

Machine Learning in Biology Università degli studi di Padova Machine Learning in Biology Luca Silvestrin (Dottorando, XXIII ciclo) Supervised learning Contents Class-conditional probability density Linear and quadratic discriminant

More information

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu FMA901F: Machine Learning Lecture 3: Linear Models for Regression Cristian Sminchisescu Machine Learning: Frequentist vs. Bayesian In the frequentist setting, we seek a fixed parameter (vector), with value(s)

More information

Recent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will

Recent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will Lectures Recent advances in Metamodel of Optimal Prognosis Thomas Most & Johannes Will presented at the Weimar Optimization and Stochastic Days 2010 Source: www.dynardo.de/en/library Recent advances in

More information

Latent Curve Models. A Structural Equation Perspective WILEY- INTERSCIENΠKENNETH A. BOLLEN

Latent Curve Models. A Structural Equation Perspective WILEY- INTERSCIENΠKENNETH A. BOLLEN Latent Curve Models A Structural Equation Perspective KENNETH A. BOLLEN University of North Carolina Department of Sociology Chapel Hill, North Carolina PATRICK J. CURRAN University of North Carolina Department

More information

Statistical Modeling with Spline Functions Methodology and Theory

Statistical Modeling with Spline Functions Methodology and Theory This is page 1 Printer: Opaque this Statistical Modeling with Spline Functions Methodology and Theory Mark H. Hansen University of California at Los Angeles Jianhua Z. Huang University of Pennsylvania

More information

Fast Learning for Big Data Using Dynamic Function

Fast Learning for Big Data Using Dynamic Function IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Fast Learning for Big Data Using Dynamic Function To cite this article: T Alwajeeh et al 2017 IOP Conf. Ser.: Mater. Sci. Eng.

More information

Correctly Compute Complex Samples Statistics

Correctly Compute Complex Samples Statistics SPSS Complex Samples 15.0 Specifications Correctly Compute Complex Samples Statistics When you conduct sample surveys, use a statistics package dedicated to producing correct estimates for complex sample

More information

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant

More information

A Combined Method for On-Line Signature Verification

A Combined Method for On-Line Signature Verification BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 14, No 2 Sofia 2014 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.2478/cait-2014-0022 A Combined Method for On-Line

More information

Dynamic Analysis of Structures Using Neural Networks

Dynamic Analysis of Structures Using Neural Networks Dynamic Analysis of Structures Using Neural Networks Alireza Lavaei Academic member, Islamic Azad University, Boroujerd Branch, Iran Alireza Lohrasbi Academic member, Islamic Azad University, Boroujerd

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

SELECTION OF A MULTIVARIATE CALIBRATION METHOD

SELECTION OF A MULTIVARIATE CALIBRATION METHOD SELECTION OF A MULTIVARIATE CALIBRATION METHOD 0. Aim of this document Different types of multivariate calibration methods are available. The aim of this document is to help the user select the proper

More information

Chemometrics. Description of Pirouette Algorithms. Technical Note. Abstract

Chemometrics. Description of Pirouette Algorithms. Technical Note. Abstract 19-1214 Chemometrics Technical Note Description of Pirouette Algorithms Abstract This discussion introduces the three analysis realms available in Pirouette and briefly describes each of the algorithms

More information

Supervised vs unsupervised clustering

Supervised vs unsupervised clustering Classification Supervised vs unsupervised clustering Cluster analysis: Classes are not known a- priori. Classification: Classes are defined a-priori Sometimes called supervised clustering Extract useful

More information

Bootstrap Confidence Interval of the Difference Between Two Process Capability Indices

Bootstrap Confidence Interval of the Difference Between Two Process Capability Indices Int J Adv Manuf Technol (2003) 21:249 256 Ownership and Copyright 2003 Springer-Verlag London Limited Bootstrap Confidence Interval of the Difference Between Two Process Capability Indices J.-P. Chen 1

More information

Principal Component Image Interpretation A Logical and Statistical Approach

Principal Component Image Interpretation A Logical and Statistical Approach Principal Component Image Interpretation A Logical and Statistical Approach Md Shahid Latif M.Tech Student, Department of Remote Sensing, Birla Institute of Technology, Mesra Ranchi, Jharkhand-835215 Abstract

More information

Face Detection Using Radial Basis Function Neural Networks With Fixed Spread Value

Face Detection Using Radial Basis Function Neural Networks With Fixed Spread Value Detection Using Radial Basis Function Neural Networks With Fixed Value Khairul Azha A. Aziz Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka, Ayer Keroh, Melaka, Malaysia.

More information

Introduction to Mplus

Introduction to Mplus Introduction to Mplus May 12, 2010 SPONSORED BY: Research Data Centre Population and Life Course Studies PLCS Interdisciplinary Development Initiative Piotr Wilk piotr.wilk@schulich.uwo.ca OVERVIEW Mplus

More information

A technique for constructing monotonic regression splines to enable non-linear transformation of GIS rasters

A technique for constructing monotonic regression splines to enable non-linear transformation of GIS rasters 18 th World IMACS / MODSIM Congress, Cairns, Australia 13-17 July 2009 http://mssanz.org.au/modsim09 A technique for constructing monotonic regression splines to enable non-linear transformation of GIS

More information

Lecture on Modeling Tools for Clustering & Regression

Lecture on Modeling Tools for Clustering & Regression Lecture on Modeling Tools for Clustering & Regression CS 590.21 Analysis and Modeling of Brain Networks Department of Computer Science University of Crete Data Clustering Overview Organizing data into

More information

COMP 551 Applied Machine Learning Lecture 14: Neural Networks

COMP 551 Applied Machine Learning Lecture 14: Neural Networks COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/comp551 Unless otherwise noted, all material posted for this course

More information

Artificial Neural Networks MLP, RBF & GMDH

Artificial Neural Networks MLP, RBF & GMDH Artificial Neural Networks MLP, RBF & GMDH Jan Drchal drchajan@fel.cvut.cz Computational Intelligence Group Department of Computer Science and Engineering Faculty of Electrical Engineering Czech Technical

More information

TEMPORAL data mining is a research field of growing

TEMPORAL data mining is a research field of growing An Optimal Temporal and Feature Space Allocation in Supervised Data Mining S. Tom Au, Guangqin Ma, and Rensheng Wang, Abstract This paper presents an expository study of temporal data mining for prediction

More information

Graph Sampling Approach for Reducing. Computational Complexity of. Large-Scale Social Network

Graph Sampling Approach for Reducing. Computational Complexity of. Large-Scale Social Network Journal of Innovative Technology and Education, Vol. 3, 216, no. 1, 131-137 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/1.12988/jite.216.6828 Graph Sampling Approach for Reducing Computational Complexity

More information

Predict Outcomes and Reveal Relationships in Categorical Data

Predict Outcomes and Reveal Relationships in Categorical Data PASW Categories 18 Specifications Predict Outcomes and Reveal Relationships in Categorical Data Unleash the full potential of your data through predictive analysis, statistical learning, perceptual mapping,

More information

The Bootstrap and Jackknife

The Bootstrap and Jackknife The Bootstrap and Jackknife Summer 2017 Summer Institutes 249 Bootstrap & Jackknife Motivation In scientific research Interest often focuses upon the estimation of some unknown parameter, θ. The parameter

More information

Radial Basis Function (RBF) Neural Networks Based on the Triple Modular Redundancy Technology (TMR)

Radial Basis Function (RBF) Neural Networks Based on the Triple Modular Redundancy Technology (TMR) Radial Basis Function (RBF) Neural Networks Based on the Triple Modular Redundancy Technology (TMR) Yaobin Qin qinxx143@umn.edu Supervisor: Pro.lilja Department of Electrical and Computer Engineering Abstract

More information

Supervised Variable Clustering for Classification of NIR Spectra

Supervised Variable Clustering for Classification of NIR Spectra Supervised Variable Clustering for Classification of NIR Spectra Catherine Krier *, Damien François 2, Fabrice Rossi 3, Michel Verleysen, Université catholique de Louvain, Machine Learning Group, place

More information

100 Myung Hwan Na log-hazard function. The discussion section of Abrahamowicz, et al.(1992) contains a good review of many of the papers on the use of

100 Myung Hwan Na log-hazard function. The discussion section of Abrahamowicz, et al.(1992) contains a good review of many of the papers on the use of J. KSIAM Vol.3, No.2, 99-106, 1999 SPLINE HAZARD RATE ESTIMATION USING CENSORED DATA Myung Hwan Na Abstract In this paper, the spline hazard rate model to the randomly censored data is introduced. The

More information

Radial Basis Function Neural Network Classifier

Radial Basis Function Neural Network Classifier Recognition of Unconstrained Handwritten Numerals by a Radial Basis Function Neural Network Classifier Hwang, Young-Sup and Bang, Sung-Yang Department of Computer Science & Engineering Pohang University

More information