Software Fault Prediction using Machine Learning Algorithm Pooja Garg 1 Mr. Bhushan Dua 2


IJSRD - International Journal for Scientific Research & Development, Vol. 3, Issue 04, 2015, ISSN (online):
1,2 Department of Computer Science and Engineering, DIET, Karnal, Haryana, India

Abstract: Software fault prediction indicates the likelihood of software faults at an early stage of the software development process, so that faults are easier to identify and correct and fewer of them surface at later stages. This improves the overall quality of the software product. In recent years, several machine learning techniques have been proposed that use examples of faulty and non-faulty modules to build prediction models. Software metrics are used as input to these machine learning techniques to represent the software modules. Support Vector Machines (SVM) is the main algorithm that has been used for classifying faulty and non-faulty modules. Prior to using SVM, a few data pre-processing methods have been proposed, such as bootstrapping, clustering and random projection. These methods are needed because of certain problems with metrics datasets, such as class imbalance, noise, small dataset size and high dimensionality. Experiments have shown that when the software metrics dataset is pre-processed and transformed using the above techniques, the performance of SVM in predicting faulty and non-faulty modules improves. The accuracy measure used for comparing the performance of different models was the F-measure, chosen for its robustness to class imbalance. For some models the F-measure increased by about 40%, which is very encouraging. Experimental results obtained with MATLAB and Weka show that the proposed approach works better than the existing approach.

Key words: Support Vector Machines (SVM), Bootstrapping, Clustering, Random Projection, Software Metrics, Metrics Datasets

I. INTRODUCTION

A. Software Metrics:
Software metrics are various measurements of software. Measurements are useful because they are repeatable, which means the same measurement value will be reported by different persons for a particular object. Metrics are increasingly used for quantitative and qualitative analysis of software, and they give software professionals the ability to evaluate the software process. One of the simplest and earliest metrics is Lines of Code (LOC). LOC, as the name implies, is the number of text lines in a software program; it gives a good measure of the size of the software. LOC metrics can differ depending on what they count. Other metrics include the Halstead metrics, McCabe's metric suite, node count, condition count, edge count, branch count, call pairs, error count, etc.
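Because LOC metrics can differ depending on what they count, a tiny illustration helps. The Python sketch below is not part of the original paper; it simply computes two common LOC variants for one source file, and the "#"-comment convention it relies on is an assumption tied to Python sources.

```python
# Illustrative sketch (not from the paper): two simple LOC variants.
# Counting this script itself keeps the example self-contained.

def count_loc(path):
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    physical = len(lines)                     # every text line
    effective = sum(1 for line in lines
                    if line.strip() and not line.strip().startswith("#"))
    return physical, effective                # total vs. non-blank, non-comment

if __name__ == "__main__":
    phys, eff = count_loc(__file__)
    print(f"physical LOC = {phys}, non-blank/non-comment LOC = {eff}")
```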
B. Metrics Datasets:
A major challenge in developing software quality prediction models is finding sources of metrics data. Datasets which contain these metrics together with fault information are not widely available. Although there are many organizations where this kind of data is available internally for management, it is not released publicly. The NASA MDP datasets are one such publicly available collection and are widely used in software quality modelling.

C. NASA MDP Metrics Datasets:
NASA MDP datasets are available from a total of 12 different NASA projects (Promise Data, 2012). Cleaned versions of the datasets, denoted by D, are used (Shepperd et al., 2013). The dataset from each project is given a name. The name, description and programming language for each of these 12 datasets are given below (Jiang et al., 2008):
- JM1: Real-time predictive ground system.
- CM1: NASA spacecraft instrument.
- KC1: Storage management for receiving and processing ground data, written in C++.
- PC1: Flight software for an earth-orbiting satellite.
- KC3: Storage management for ground data, written in Java.
- MW1: Zero-gravity experiment related to combustion.
- MC2: Video guidance system.
- MC1: Combustion experiment of a space shuttle, written in both C and C++.
- PC2: Dynamic simulator for attitude control systems.
- PC3: Flight software for an earth-orbiting satellite.
- PC4: Flight software for an earth-orbiting satellite.
- PC5: Flight software for an earth-orbiting satellite.
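The cleaned NASA MDP datasets are typically obtained through the PROMISE repository. The short Python sketch below loads one of them and reports the faulty/non-faulty split that later sections discuss; it is only an illustration, and the CSV file name and the "Defective" label column are assumptions about the export format rather than details from the paper.

```python
# Illustrative sketch, not the authors' code: load one cleaned NASA MDP
# dataset and inspect the faulty/non-faulty imbalance.
# The file name "CM1.csv" and the label column "Defective" are assumptions.
import pandas as pd

df = pd.read_csv("CM1.csv")

# Metrics are all columns except the label; the label marks faulty modules.
X = df.drop(columns=["Defective"])
y = (df["Defective"].astype(str).str.upper() == "Y").astype(int)

print(f"{len(df)} modules, {X.shape[1]} metrics per module")
print("faulty:", int(y.sum()), " non-faulty:", int((1 - y).sum()))
```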

D. Support Vector Machines:
Support Vector Machines (SVM) is one of the most widely used supervised machine learning algorithms (Hsu et al., 2003). SVM tries to map the data so that the data of different classes are separated by a hyperplane that has maximum distance from the nearest training data points of all classes. SVM always chooses a hyperplane that maximizes this margin. It is mathematically proved that maximizing the margin of separation on the training data reduces the complexity of SVM and gives better performance on unseen data. The optimal hyperplane is the one that minimizes the training error and has maximum margin of separation. The basis of SVM is linear discriminant functions. The SVM classification function, like a linear discriminant function, is of the form f(x) = w^T x + b, and the parameters to be learnt are w and b. If there are only 2 classes and the patterns are linearly separable, then, as for linear discriminant functions, the function learnt is such that f(x) > 0 for patterns of one class and f(x) < 0 for patterns of the other class. The parameters have to be learned in such a way that these inequalities hold for all training examples in the linearly separable case. Now consider two hyperplanes at equal margins from the decision boundary f(x) = 0, namely f(x) = +1 and f(x) = -1. These planes are chosen such that the training examples that are nearest to the decision hyperplane fall on these two hyperplanes only (Duda et al., 2012). Figure 1 gives a clear illustration of how the hyperplane looks in the 2-d plane.

Fig. 1: Support lines characterizing the margin

Figure 1 shows training patterns from two classes, labeled O and X respectively. The two lines on either side of the middle line are called support hyperplanes (in this case of 2-d data, a hyperplane is a line). The distance between the hyperplane f(x) = 1 and the decision boundary is 1/||w||. Similarly, the distance between the hyperplane f(x) = -1 and the decision boundary is also 1/||w||, so the total distance between the two supporting hyperplanes is 2/||w||. The SVM algorithm is nothing but maximizing this distance, i.e. maximizing the margin between the supporting hyperplanes. In other words, the task is to find a w that minimizes ||w||; for simplicity in calculation, ||w|| is replaced by (1/2)||w||^2. The constraint is that all X class examples should satisfy f(x) >= 1 and all O class examples should satisfy f(x) <= -1. In other words, the margin of every example from the decision boundary should be at least 1/||w||. Let y_i denote the output label, where y_i = +1 for all X class examples and y_i = -1 for all O class examples, and let n denote the number of examples. Then the SVM problem can be summarized as follows:

Minimize (1/2)||w||^2    (1)
Subject to: y_i (w^T x_i + b) >= 1, for i = 1, ..., n

SVM needs only those examples that fall on the supporting hyperplanes to learn the function. These training examples are called support vectors, and they are sufficient to learn both w and b. If the data are not linearly separable, then the decision boundary could be nonlinear. Figure 1 showed an example of SVM in the linearly separable case. Training examples of the positive class are denoted by solid circles and those of the negative class by hollow circles. The margin of a linear classifier is the width of the boundary (the yellow strip) that can be increased before hitting a point. A linear classifier for the above example and its margin are shown in Figure 2(a).

Fig. 2: (a) A linear classifier and its margin in yellow. (b) Linear classifier with the maximum margin (linear SVM) (Moore, 2001).

SVM is the maximum margin classifier discussed above; the maximum margin looks like the one shown in Figure 2(b). SVM tries to make the margin between the support vectors and the decision boundary as large as possible. For the above example, there are three support vectors for the maximum margin decision boundary, as shown in Figure 2(b). The boundaries of the yellow strip are the supporting hyperplanes (or lines in this case).
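As a concrete illustration of the maximum-margin formulation above, the following Python sketch fits a linear SVM with scikit-learn on toy 2-d data and reports the support vectors and the margin 2/||w||. It is only a sketch: the data, the C value and the library choice are assumptions, not the paper's MATLAB/Weka setup.

```python
# Sketch of the maximum-margin classifier described above (toy data, not the
# paper's datasets). Assumes scikit-learn and NumPy are available.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.4, size=(20, 2))   # class +1 ("X")
X_neg = rng.normal(loc=[0.0, 0.0], scale=0.4, size=(20, 2))   # class -1 ("O")
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 20 + [-1] * 20)

clf = SVC(kernel="linear", C=1e3)   # large C approximates the hard margin
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)    # distance between the support hyperplanes
print("support vectors:\n", clf.support_vectors_)
print("w =", w, " b =", b, " margin 2/||w|| =", margin)
```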
E. Resampling and Bootstrapping:
Resampling is a technique for generating new patterns from the available set of patterns of a class (Wolter, 2007). It is a popular way of reducing the sparseness of data. If the training set is small, for reasons such as the unavailability of enough patterns or the high cost of collecting patterns, then pattern synthesis is used to generate new patterns to add to the training set. Resampling is broadly divided into two types: model based and instance based.

Model-based resampling derives a model from the training set of patterns and uses this model for generating new patterns. The derived model could be a Hidden Markov Model (HMM) or a probability distribution model (Viswanath et al., 2005). The advantage is that artificial patterns can easily be generated once a model is in hand. But if some wrong assumptions were made during the derivation of the model, the generated patterns may be erroneous, and the computational time for deriving the model can be prohibitive. Moreover, once the model is derived it could be used to classify the patterns directly, instead of being used to generate more patterns that are then classified by some other technique.

In instance-based resampling there is no need to derive any model. Using the existing patterns, new patterns are derived instantaneously using some properties of the data. The advantage of instance-based resampling is that no assumptions are needed and it is very fast compared to first building a probability model (Viswanath et al., 2005). Unlike a probability distribution model, the number of generated patterns is finite. Bootstrapping is one such technique and is popularly used in statistics for resampling (Davison, 1997). In a popular implementation of bootstrapping, for every pattern its r nearest neighbors are found from among the training set patterns of the same class and a weighted average of the neighbors is taken (Hamamoto et al., 1997). Such an average pattern is called a bootstrap pattern. This procedure can be repeated many times to generate more patterns. There are many variations of this basic bootstrapping technique, such as avoiding the use of the same pattern more than once, taking the weights differently, etc. Bootstrapping has so far been used in conjunction with nearest neighbor classifiers only. In this dissertation the researcher uses a novel method, namely to use the patterns generated by bootstrapping instead of the original software metrics training set, and investigates whether this can increase the prediction accuracy of SVM for the software fault proneness prediction problem.
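A minimal Python sketch of the weighted nearest-neighbour bootstrapping described above follows. The number of neighbours r, the random convex weights and the toy data are assumptions chosen for illustration; the paper's exact variant (Algorithm 1 in the Proposed Work section) is not reproduced here.

```python
# Sketch of instance-based bootstrapping: each synthetic pattern is a weighted
# average of a training pattern's r nearest neighbours from the same class.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def bootstrap_patterns(X_class, r=3, n_new=None, rng=None):
    """Generate bootstrap patterns for one class of metric vectors."""
    rng = rng or np.random.default_rng(0)
    n_new = n_new or len(X_class)
    nn = NearestNeighbors(n_neighbors=r + 1).fit(X_class)   # +1: skip the point itself
    _, idx = nn.kneighbors(X_class)
    new_patterns = []
    for _ in range(n_new):
        i = rng.integers(len(X_class))
        neighbours = X_class[idx[i, 1:]]                     # the r nearest neighbours
        weights = rng.dirichlet(np.ones(r))                  # random convex weights
        new_patterns.append(weights @ neighbours)            # weighted average pattern
    return np.array(new_patterns)

# Usage on toy data standing in for the faulty-class metric vectors:
X_faulty = np.random.default_rng(1).normal(size=(30, 5))
X_boot = bootstrap_patterns(X_faulty, r=3)
print(X_boot.shape)   # (30, 5): one bootstrap pattern per original pattern
```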

F. Clustering:
Clustering is an unsupervised learning technique where the given patterns are grouped based on some criterion, such as the distance between points (Duda et al., 2012). It is the process of partitioning a set of patterns. For example, consider the following collection of nine characters: C . a A , e c * E
The three clusters after partitioning are: { . , * }, { a e c }, { C A E }.
In this example, size is used as the basis for clustering: the three clusters consist of the smallest-sized characters, the lower case letters and the upper case letters. Thus clustering is a technique to divide data into smaller groups based on some similarity feature. The most commonly used similarity measure in clustering is distance. Given a set of points, the aim is to divide the points into clusters such that points within a cluster are close to each other. A popular algorithm based on this notion of similarity is the K-means clustering algorithm, in which data points are divided into k clusters such that each point is closer to the mean of its own cluster than to the mean of any other cluster (Kanungo et al., 2000). As an example, suppose there are 9 points in 3-D space and the number of clusters is 3 (k = 3). In the resulting clustering, the distance between any two points in the same cluster is less than the distance between any two points in different clusters; in the original example the within-cluster distances were about 2.5 units while the between-cluster distances were around 6 units.

G. Random Projection:
If we reduce the dimensionality of the input data by some technique that preserves certain properties of the input space, such as distances (a dissimilarity measure) or angles (a similarity measure) between data points, then we will be handling lower-dimensional data. Moreover, the minimum number of examples m required is also lower in the lower-dimensional space. Random projection is a widely used technique for reducing dimension: it preserves all pairwise distances to within a factor of (1 ± ε) when the target dimension is k = O(log n / ε^2). Let R be a d × k random matrix used for projecting d-dimensional vectors onto a k-dimensional space. If we have n d-dimensional patterns, then A is an n × d matrix and the projected data B = AR is an n × k matrix. The time taken to construct the random matrix is O(dk), and performing the projection takes O(ndk) time.
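The scikit-learn sketch below illustrates the distance-preservation property just described: it projects toy high-dimensional points with a Gaussian random matrix and checks the pairwise-distance distortion. The dimensions, the epsilon value and the data are assumptions chosen only for the example.

```python
# Sketch of random projection: project d-dimensional metric vectors to k
# dimensions and measure pairwise-distance distortion. Toy sizes are assumed.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection, johnson_lindenstrauss_min_dim
from sklearn.metrics.pairwise import euclidean_distances

n, d, eps = 500, 2000, 0.3
k = johnson_lindenstrauss_min_dim(n_samples=n, eps=eps)   # k = O(log n / eps^2)
print("target dimension k =", k)

X = np.random.default_rng(0).normal(size=(n, d))
X_proj = GaussianRandomProjection(n_components=k, random_state=0).fit_transform(X)

D_orig = euclidean_distances(X)
D_proj = euclidean_distances(X_proj)
mask = ~np.eye(n, dtype=bool)                             # ignore zero self-distances
ratios = D_proj[mask] / D_orig[mask]
print("distance ratio range: %.3f to %.3f (expected roughly within 1 ± %.1f)"
      % (ratios.min(), ratios.max(), eps))
```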
II. LITERATURE SURVEY

Choosing an appropriate machine learning algorithm is the most important task before building any software quality model. A study analysing various machine learning algorithms for software quality prediction on the NASA MDP data sets was carried out in 2008 (Vandecruys et al., 2008). The authors used 70% of the data for training and 30% for testing; SVM was among the algorithms tried. All the machine learning algorithms they analysed were found to give close results. They also proposed a new algorithm, AntMiner+, which gave slightly better results (Vandecruys et al., 2008).

One major issue in software metrics datasets is class imbalance (Japkowicz, 2000). Class imbalance means that instances of one class are much more numerous than instances of the other class. In software metrics datasets, most of the modules are non-faulty. Hence, there is a large imbalance between the numbers of non-faulty and faulty modules. Class imbalance is found in many datasets, not just software metrics datasets (Guo et al., 2008). Because of this imbalance, problems arise when learning instance-based machine learning classifiers (Seiffert et al., 2007). Since the number of module instances with defects is much smaller than the number without defects, the accuracy of the classifier is not the best possible measure, due to bias. Because of the bias, if a classifier tries to increase accuracy it will also increase the false alarm rate. An increased false alarm rate means that software modules which are not defective are predicted as defective; companies then have to re-test modules for defects that do not exist, which increases time and cost.

Research has been done on using resampling techniques to overcome class imbalance (Pelayo & Dick, 2007). A resampling technique known as SMOTE, which oversamples the minority class, was used, and the results were encouraging: for a few selected datasets there was around a 25% improvement in accuracy. In another study, an empirical evaluation of five resampling techniques for overcoming class imbalance was carried out (Afzal et al., 2012). Bootstrapping gave better performance than the other resampling methods for four datasets, but overall there was no significant improvement.

Combining different classifiers for predicting software quality has also been tried to improve performance (Tosun et al., 2008). The authors used three classifiers and took the decision by majority vote: if two of the three classifiers predict one outcome, that is the overall result. The classifiers used were simple Naïve Bayes, Artificial Neural Networks (ANN) and Voting Feature Intervals (VFI). The combination of the three classifiers was applied to the NASA MDP datasets, and there was no significant improvement in performance. On non-NASA datasets, however, performance improved slightly: the ensemble was able to predict 75% of defective modules, while Naïve Bayes alone would have given only a 70% detection rate (Tosun et al., 2008).

SVM has also been used for software defect prediction.

One such paper evaluated the capability of SVM for fault proneness prediction and compared its performance with other classifiers (Elish & Elish, 2008). The results were at least as good as those of the other classifiers for all datasets, with slightly better results on a couple of datasets. Another study empirically validated the relation between software metrics and fault proneness using SVM as the learning algorithm (Singh et al., 2009); its findings were that a few metrics were related to fault proneness.

For machine learning applications, data transformation techniques can help to achieve better performance, and authors have tried these transformation techniques for software quality models as well (Jiang et al., 2007). They used log normalization, discretization and their combination for filtering, and then fed the filtered data to various machine learning classifiers. The performance of Naïve Bayes improved after pre-processing using discretization; apart from that, classifier performance did not improve much with log normalization and discretization. So the important finding is that these transformations did not succeed in improving the performance of the classifiers. Another finding was that random forests performed better overall than the other classifiers (Jiang et al., 2007).

Logistic regression has also been used for fault prediction on a small data set taken from an antenna network configuration (Denaro et al., 2002). The performance of logistic regression was good on that data set, with a True Positive Rate (TPR) just below 90%. TPR is the fraction of examples belonging to the positive class that are correctly classified.

There was also a study on the analysis of the NASA MDP data sets (Koru & Liu, 2005). It was found that large software modules contain more defects, which is of course expected. The authors also noted that many data sets are small, so the metric values are also small; this results in small variance between the data of the two classes, and hence classification becomes difficult. To show this, they divided the sets of module instances by size. Different classifiers were used on each set, and the result was that models built on the larger modules performed better than those built on the smaller modules (Koru & Liu, 2005).

Selecting the best subset of metrics is a good way to eliminate unwanted and redundant metrics; this process is called feature selection. Principal Component Analysis (PCA) is a popular algorithm for selecting the best subset of features and has also been tried for software metrics datasets (Menzies et al., 2003). The authors used PCA to remove inter-correlation among the metrics by transforming the metrics set to a smaller dimension; the number of metrics was reduced from the original 20 to 5-6. The results were good and better performance was achieved.

Another major issue in software metrics data is noise and outliers (Seiffert et al., 2007). In the context of machine learning, noise means corruption, redundancy or any other imperfection in the data that may affect the learning algorithm's performance. Noise may appear because a faulty module was not detected during testing and was therefore labeled as non-faulty during metrics collection. If a dataset contains one or more software metrics that show very little variation across the faulty and non-faulty modules, these would also be considered noise. Another example of noise is two or more metrics showing nearly identical variation across all instances (Khoshgoftaar et al., 2005).
This results in redundant information from some metrics. Next is the problem of outliers. Outliers are related to measurement errors: an instance in a dataset is termed an outlier if the values of its features are very different from those of other instances of the same class. A research paper based on random forests showed more robustness to noise and outliers (Guo et al., 2004). The authors traded off accuracy against recall using a parameter called cut-off. They also used random forests for feature selection and selected the five best features, but that did not improve the performance of the classifier. The model's performance was also compared with other machine learning classifiers, and random forests were found to give slightly better performance (Guo et al., 2004).

III. PROPOSED WORK

The proposed algorithm is described below. Two variations of bootstrapping were used to pre-process the metrics data. In the first method, the nearest neighbors of each dataset example are found in the input space; in the second method, the nearest neighbors are found in the kernel space. Both algorithms are given below, although only Algorithm 1 is used in this dissertation.

A. Bootstrapping in Input Space
Algorithm 1: Bootstrapping in Input Space
Output: bootstrapped metrics data set

Bootstrapping in Kernel Space
Finding neighbors in the kernel space makes more sense. Using a kernel function, faulty and non-faulty modules are linearly separated by projecting the metrics into a higher-dimensional space. Bootstrapping in the kernel space reduces the diameter (Dia) and increases the margin in the kernel space. Neighbors in the kernel space are found using the kernel trick as follows: the distance between two points x_i and x_j in the kernel space is

d(x_i, x_j) = sqrt( K(x_i, x_i) - 2 K(x_i, x_j) + K(x_j, x_j) )

Algorithm 2: Bootstrapping in Kernel Space
Output: bootstrapped metrics data set
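The detailed steps of Algorithms 1 and 2 are not shown above, but the kernel-distance identity is enough to sketch the neighbour-finding step of kernel-space bootstrapping. The Python snippet below only illustrates that identity with an RBF kernel; the gamma value, the neighbour count and the toy data are assumptions, not values from the paper.

```python
# Sketch of the kernel-trick distance used to find neighbours in kernel space:
# d(x_i, x_j)^2 = K(x_i, x_i) - 2 K(x_i, x_j) + K(x_j, x_j).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

X = np.random.default_rng(0).normal(size=(30, 5))   # toy metric vectors
K = rbf_kernel(X, gamma=0.1)                         # kernel matrix, gamma assumed

diag = np.diag(K)
D2 = diag[:, None] - 2.0 * K + diag[None, :]         # squared kernel-space distances
D2 = np.maximum(D2, 0.0)                             # guard against tiny negatives

r = 3
# For each point, indices of its r nearest neighbours in kernel space
# (column 0 of the argsort is the point itself, so skip it).
neighbours = np.argsort(D2, axis=1)[:, 1:r + 1]
print(neighbours[:5])
```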

B. Clustering
Clustering can also be used as a data synthesis technique. Clustering is an unsupervised learning technique where the given training examples are grouped based on some criterion, such as the distance between points. Clustering has been used in association with SVMs before (Finley & Joachims, 2005); a notable example is CB-SVM (Yu et al., 2003). But the impact of clustering on controlling the VC dimension of SVM has not been discussed in the earlier literature. The researcher presents an approach to reduce the VC dimension of SVM using clustering: cluster centers replace the original metrics data set, and the SVM is trained on these cluster centers instead of the original metrics data. This can be thought of as a training data synthesis technique where the synthetic data examples are the cluster centers.

It can be shown that clustering reduces the VC dimension of SVM. When the data examples belonging to a class are clustered and the entire dataset is replaced by the cluster centers, the margin increases and the diameter decreases. The training examples responsible for decreasing the margin or increasing the diameter are those which lie at the boundaries. When the metrics data set is clustered, these points are shifted towards the centre of the metrics data set. They are not present themselves in the final data set (the cluster centres form the final data set), but the centre of the cluster to which such a point belongs is influenced by it. Thus if a cluster has a boundary data example, it will pull the cluster centre towards the boundary, but the other inner data examples also pull the centre. Every data example therefore has its influence on the centre, and the shape of the boundary is not lost due to clustering. The boundary just shrinks, and all the data examples which lie outside this new boundary are effectively removed. Most probably these outside data examples are noise, so clustering also takes care of noisy data removal: it brings out the true boundary of the data set by filtering out the noisy data. If required, we can eliminate points in small clusters from the data sets.

When the training data is clustered, each cluster centroid is the average of all data examples belonging to that cluster. If the size of every cluster is more than one, then this average pattern lies closer to the centroid of the entire training set than the farthest pattern in the training set does. When the cluster centres replace the training set, the new Dia is smaller, and the new Margin larger, than the corresponding values of the original training set. Let the number of clusters in each class after clustering be k, let the cluster centres in class 1 be c_1, ..., c_k and those in class 2 be c_{k+1}, ..., c_{2k}, and let the points belonging to a cluster c_i be x_{i1}, x_{i2}, ..., so that in all there are 2k cluster centres. Let the new diameter and margin be Dia' and Margin'. It can then be shown that Dia' <= Dia; in the case of the Margin, the proof follows in a similar fashion and gives Margin' >= Margin.
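A compact Python sketch of this idea (replace each class's training examples by its k-means cluster centres and train the SVM on the centres) is shown below. The number of clusters, the kernel and the toy data are assumptions for illustration; this is not the exact procedure used in the experiments.

```python
# Sketch: replace each class's training examples by k-means cluster centres,
# then train the SVM on the (much smaller) set of centres.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_nonfaulty = rng.normal(loc=0.0, size=(200, 10))   # toy metric vectors, class 0
X_faulty = rng.normal(loc=1.0, size=(40, 10))       # toy metric vectors, class 1

def class_centres(X, k):
    """Cluster one class and return its k cluster centres."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return km.cluster_centers_

k = 15                                               # assumed clusters per class
centres_0 = class_centres(X_nonfaulty, k)
centres_1 = class_centres(X_faulty, min(k, len(X_faulty)))

X_train = np.vstack([centres_0, centres_1])
y_train = np.array([0] * len(centres_0) + [1] * len(centres_1))

clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_train, y_train)
print("trained on", len(X_train), "cluster centres instead of",
      len(X_nonfaulty) + len(X_faulty), "original examples")
```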
IV. EXPERIMENT RESULTS

The improved approach of using bootstrapping and clustering with SVM has many advantages compared to the previous approach.

A. Experiment:
First, normalization is performed on all datasets. Then outliers are removed using the Weka outlier removal filter. For 10-fold cross validation, each dataset is broken into 10 equal subsets: 9 subsets are used for training and the remaining one for testing. Before dividing a dataset, the randomize option is first selected in Weka to randomize the data. The training set's minority class, i.e. the faulty modules, is then oversampled using SMOTE, and bootstrapping is applied to the oversampled training set separately for each class. Bootstrapping gives an output dataset for each class, and the two are combined into a single set before applying SVM. After that, six SVM models with RBF kernels are built by varying gamma and the cost factor, and the models are tested on the test data. The procedure is repeated for all 10 folds.

An example of using SMOTE in Weka is shown in Figure 3. The dataset is one fold of CM1 after outlier removal. Before applying SMOTE, the number of defective modules was 30, as shown in Figure 3(a); the effect of applying SMOTE on CM1 is shown in Figure 3(b). The SMOTE percentage used was 300%, which increases the number of defective modules to 120.

Fig. 3: (a) SMOTE filter in Weka. (b) Effect of SMOTE on the CM1 dataset with the percentage parameter set to 300.
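The experiments themselves were run in Weka and MATLAB. As a rough, hedged sketch of the same pipeline in Python (assuming the imbalanced-learn package for SMOTE; the gamma, C and oversampling values are placeholders, and the bootstrapping step sketched earlier would slot in between SMOTE and SVM training), one run could look like this:

```python
# Hedged sketch of the described pipeline for one dataset:
# normalize -> SMOTE the faulty class -> train an RBF SVM -> score with F-measure.
# Not the authors' Weka/MATLAB setup; all parameter values are assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE   # assumes imbalanced-learn is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))                       # toy stand-in for metric vectors
y = (rng.random(400) < 0.15).astype(int)             # ~15% "faulty" modules

scores = []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    scaler = MinMaxScaler().fit(X[train_idx])        # normalization
    X_tr, X_te = scaler.transform(X[train_idx]), scaler.transform(X[test_idx])
    # Oversample the faulty class up to 75% of the non-faulty count (ratio assumed).
    smote = SMOTE(sampling_strategy=0.75, k_neighbors=5, random_state=0)
    X_tr, y_tr = smote.fit_resample(X_tr, y[train_idx])
    clf = SVC(kernel="rbf", gamma=0.1, C=10.0).fit(X_tr, y_tr)   # gamma, C assumed
    scores.append(f1_score(y[test_idx], clf.predict(X_te)))

print("mean F-measure over 10 folds: %.3f" % np.mean(scores))
```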

Table 1 gives the SMOTE percentage for each dataset used in this experiment.

Dataset   SMOTE Percentage
CM1       300
JM1       100
KC1       200
KC3       150
MC1       300
MC2       0
MW1       300
PC1       300
PC2       300
PC3       250
PC4       300
PC5       300
Table 1: SMOTE percentage for the different datasets

Finally, an example comparison of the Cyclomatic Density values before and after bootstrapping is shown for the CM1 dataset in Figure 4.

Fig. 4: Design Complexity metric (a) before bootstrapping (b) after bootstrapping

As can be seen from Figure 4, the standard deviation has come down, which is expected when taking any kind of average.

V. CONCLUSION

By using different filtering techniques and various pre-processing and transformation techniques such as bootstrapping, clustering and random projection, a large number of fault prediction problems are solved using the SVM (Support Vector Machine) algorithm, with the popular NASA Metrics Data Program datasets. From the results, the conclusions are:
1) There is a very significant improvement in the fault prediction of models built after using bootstrapping and clustering as data transformations.
2) The performance of the fault proneness prediction model increases when a subset of metrics selected on the basis of a correlation measure is used for building the model instead of all metrics.

REFERENCES
[1] Japkowicz, N. (2000). The class imbalance problem: Significance and strategies. Proc. of the Int'l Conf. on Artificial Intelligence.
[2] Denaro, G., Morasca, S., & Pezzè, M. (2002). Deriving models of software fault-proneness. Proceedings of the 14th International Conference on Software Engineering and Knowledge Engineering. ACM, pp.
[3] Dijkstra, E. W. (2002). Go to statement considered harmful. Software Pioneers. Springer Berlin Heidelberg, pp.
[4] Menzies, T., Ammar, K., Nikora, A., & Stefano, S. (2003). How simple is software defect prediction? Journal of Empirical Software Engineering, vol. 32, no. 2, pp.
[5] Guo, L., Ma, Y., Cukic, B., & Singh, H. (2004). Robust prediction of fault-proneness by random forests. Software Reliability Engineering, 2004. ISSRE 2004. 15th International Symposium on. IEEE, pp.
[6] Koru, A. G., & Liu, H. (2005). An investigation of the effect of module size on defect prediction using static measures. ACM SIGSOFT Software Engineering Notes. ACM, vol. 30, no. 4, pp.
[7] Lanza, M., Marinescu, R., & Ducasse, S. (2006). Object-Oriented Metrics in Practice. Heidelberg: Springer, pp.
[8] Jiang, Y., Cukic, B., & Menzies, T. (2007). Fault prediction using early lifecycle data. Software Reliability, 2007. ISSRE '07. The 18th IEEE International Symposium on. IEEE, pp.
[9] Kotsiantis, S. B. (2007). Supervised machine learning: A review of classification techniques. Informatica, vol. 31, pp.
[10] Elish, K. O., & Elish, M. O. (2008). Predicting defect-prone software modules using support vector machines. Journal of Systems and Software, vol. 81, no. 5, pp.
[11] Gondra, I. (2008). Applying machine learning to software fault-proneness prediction. Journal of Systems and Software, vol. 81, no. 2, pp.
[12] Guo, X., Yin, Y., Dong, C., Yang, G., & Zhou, G. (2008). On the class imbalance problem. Natural Computation, 2008. ICNC '08. Fourth International Conference on. IEEE, vol. 4, pp.
[13] Catal, C., & Diri, B. (2009). A systematic review of software fault prediction studies. Expert Systems with Applications, vol. 36, no. 4, pp.
[14] Pressman, R. S. (2010). Software Engineering: A Practitioner's Approach. 8th Ed. New York: McGraw-Hill.


More information

Contents. Preface to the Second Edition

Contents. Preface to the Second Edition Preface to the Second Edition v 1 Introduction 1 1.1 What Is Data Mining?....................... 4 1.2 Motivating Challenges....................... 5 1.3 The Origins of Data Mining....................

More information

An Algorithm for Loopless Deflection in Photonic Packet-Switched Networks

An Algorithm for Loopless Deflection in Photonic Packet-Switched Networks An Algoritm for Loopless Deflection in Potonic Packet-Switced Networks Jason P. Jue Center for Advanced Telecommunications Systems and Services Te University of Texas at Dallas Ricardson, TX 75083-0688

More information

The impact of simplified UNBab mapping function on GPS tropospheric delay

The impact of simplified UNBab mapping function on GPS tropospheric delay Te impact of simplified UNBab mapping function on GPS troposperic delay Hamza Sakidin, Tay Coo Cuan, and Asmala Amad Citation: AIP Conference Proceedings 1621, 363 (2014); doi: 10.1063/1.4898493 View online:

More information

15-122: Principles of Imperative Computation, Summer 2011 Assignment 6: Trees and Secret Codes

15-122: Principles of Imperative Computation, Summer 2011 Assignment 6: Trees and Secret Codes 15-122: Principles of Imperative Computation, Summer 2011 Assignment 6: Trees and Secret Codes William Lovas (wlovas@cs) Karl Naden Out: Tuesday, Friday, June 10, 2011 Due: Monday, June 13, 2011 (Written

More information

Alternating Direction Implicit Methods for FDTD Using the Dey-Mittra Embedded Boundary Method

Alternating Direction Implicit Methods for FDTD Using the Dey-Mittra Embedded Boundary Method Te Open Plasma Pysics Journal, 2010, 3, 29-35 29 Open Access Alternating Direction Implicit Metods for FDTD Using te Dey-Mittra Embedded Boundary Metod T.M. Austin *, J.R. Cary, D.N. Smite C. Nieter Tec-X

More information

A Feature Selection Method to Handle Imbalanced Data in Text Classification

A Feature Selection Method to Handle Imbalanced Data in Text Classification A Feature Selection Method to Handle Imbalanced Data in Text Classification Fengxiang Chang 1*, Jun Guo 1, Weiran Xu 1, Kejun Yao 2 1 School of Information and Communication Engineering Beijing University

More information

Multi-View Clustering with Constraint Propagation for Learning with an Incomplete Mapping Between Views

Multi-View Clustering with Constraint Propagation for Learning with an Incomplete Mapping Between Views Multi-View Clustering wit Constraint Propagation for Learning wit an Incomplete Mapping Between Views Eric Eaton Bryn Mawr College Computer Science Department Bryn Mawr, PA 19010 eeaton@brynmawr.edu Marie

More information

Piecewise Polynomial Interpolation, cont d

Piecewise Polynomial Interpolation, cont d Jim Lambers MAT 460/560 Fall Semester 2009-0 Lecture 2 Notes Tese notes correspond to Section 4 in te text Piecewise Polynomial Interpolation, cont d Constructing Cubic Splines, cont d Having determined

More information

Data Imbalance Problem solving for SMOTE Based Oversampling: Study on Fault Detection Prediction Model in Semiconductor Manufacturing Process

Data Imbalance Problem solving for SMOTE Based Oversampling: Study on Fault Detection Prediction Model in Semiconductor Manufacturing Process Vol.133 (Information Technology and Computer Science 2016), pp.79-84 http://dx.doi.org/10.14257/astl.2016. Data Imbalance Problem solving for SMOTE Based Oversampling: Study on Fault Detection Prediction

More information

Efficient Content-Based Indexing of Large Image Databases

Efficient Content-Based Indexing of Large Image Databases Efficient Content-Based Indexing of Large Image Databases ESSAM A. EL-KWAE University of Nort Carolina at Carlotte and MANSUR R. KABUKA University of Miami Large image databases ave emerged in various

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganographic Schemes

Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganographic Schemes Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganograpic Scemes Jessica Fridric Dept. of Electrical Engineering, SUNY Bingamton, Bingamton, NY 3902-6000, USA fridric@bingamton.edu

More information