Chapter 4. Adaptive Self-tuning : A Neural Network approach

4.1 Introduction


Machine learning is a method of solving real-world problems by employing the hidden knowledge present in past data or data patterns. Several expert systems based on it have been developed and are widely used in industry. The objective of machine learning techniques is to provide higher levels of automation in engineering applications, replacing time-consuming, error-prone human intervention with automatic techniques that enhance productivity and performance and also result in cost savings. Machine learning can be categorized into inductive and deductive learning. Inductive machine learning techniques use computer programs that act on massive data-sets by extracting rules and patterns. This method

takes examples of a concept as input and generalizes the concept, rather than starting from existing knowledge. Deductive learning, on the other hand, takes existing facts and knowledge as input to deduce new knowledge. Machine learning relies heavily on large volumes of data for deducing new knowledge, and hence its working overlaps with statistical approaches to solving real-world problems. The use of machine learning has been demonstrated in a wide variety of applications [17] [55] [61], including search engines, bio-informatics, stock market analysis, medical diagnosis, gaming and robotic automation. One important characteristic of a machine learning technique is that it effectively learns how to estimate the desired output from a given training data-set.

Neural network : A machine learning technique

Some of the important machine learning techniques are statistical approaches, case-based reasoning, classification and regression, genetic algorithms, and Artificial Intelligence using Neural networks. Among these, the neural network based approach is the best choice for highly complex systems [61] [62] for which no analytical solution is possible. The primary advantage of the neural network approach is that it is adaptable. A typical neural network setup comprises an input layer, a hidden layer and an output layer. The nodes in the neural network are connected through edges with suitable weights assigned. As the neural network is trained using past data, the weights are adjusted using an appropriate learning algorithm so that the desired target at the output node is reached for a given set of inputs. The output of each node depends on the weighted sum of the values of the preceding nodes, the type of activation function and the firing angle. Some of the activation functions are binary, bipolar and sigmoidal.
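The weighted-sum-and-activation computation described above can be illustrated with a short sketch; the weights, bias and input values here are arbitrary illustrative numbers, not values from the actual setup.

```python
import math

def sigmoid(x):
    """Binary sigmoidal activation: squashes the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias):
    """A node's output: the activation applied to the weighted sum of its inputs."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(weighted_sum)

# Example: one node with three weighted inputs
print(node_output([0.5, 0.2, 0.8], [0.4, -0.6, 0.3], 0.1))  # ≈ 0.60
```

A bipolar activation would instead map the weighted sum into (-1, +1); the structure of the computation stays the same.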
The convergence of the output of the neural network depends on the learning rate, acceptable error, momentum factor and the number of iterations. The size of the training data-set, the number of training iterations (epochs) and the acceptable error determine the accuracy of the result, and hence must be chosen very carefully. In the following section, the use of the neural network approach for estimating the tuning parameters is presented.

Neural network setup for Self-tuning

Neural networks provide an elegant way to set up a self-tuning system that can learn from past values of tuning parameters to predict future values based on sensor inputs like Buffer-Hit-Ratio, User-load and Database size. In this approach, a neural network is used to estimate the tuning parameter values from these three inputs. Neural networks are best suited to providing solutions to problems in complex systems that are intrinsically non-linear in

nature. The DBMS is one such complex software system, and the response time of user queries exhibits a non-linear relationship with the tuning parameters; a neural network can therefore be trained to learn this complex relationship between input and output from a training dataset. Machine learning can thus be leveraged as an effective method of predicting the estimated values of the tuning parameters from a given training data-set. The training dataset comprises tuning parameter values corresponding to Buffer-Hit-Ratio, User-load and Database size under different load conditions. These three sensor values are used as input to the neural network setup shown in figure 4.1 to estimate the tuning parameter values, namely Buffer_cache_size(BCS), Shared_Pool_size(SHP) and Large_Pool_size(LRP). It is a three-layer neural network with three nodes in the input layer (one for each of the three sensor inputs), 10 nodes in the hidden layer and three nodes in the output layer (one for each of the three tuning parameters).

Figure 4.1 A 3-layer Neural network setup (inputs: BHR, user-load N, database size; outputs: DB_Cache_Size, Shared_Pool_Size, Large_Pool_Size)

The neural network uses a Feedforward Backpropagation learning algorithm, wherein the learning is driven by the difference (error) between the target and estimated values and the gradient of the activation function. The activation function used for all the inner nodes is sigmoidal. It is this function that gives the neural network the ability to learn and to produce an output for inputs on which it was not trained. The algorithm provides a procedure for changing the weights of the edges in the network during the training process. The basic idea of this learning method is the use of gradient descent to propagate the error back to the nodes in the previous layers.
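A minimal forward pass through the 3-10-3 network described above might look as follows. The weights here are randomly initialised stand-ins for trained values, and the scaled sensor values are purely illustrative; a pure-linear activation is assumed at the output nodes.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases, activation):
    """Compute one layer: each node applies the activation to its weighted sum."""
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
# Randomly initialised weights stand in for the trained ones.
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(10)]
b_hidden = [0.0] * 10
w_out = [[random.uniform(-0.5, 0.5) for _ in range(10)] for _ in range(3)]
b_out = [0.0] * 3

# Scaled sensor inputs: Buffer-Hit-Ratio, user-load N, database size
sensors = [0.88, 0.2, 0.5]
hidden = layer(sensors, w_hidden, b_hidden, sigmoid)       # 10 sigmoidal nodes
bcs, shp, lrp = layer(hidden, w_out, b_out, lambda x: x)   # 3 pure-linear outputs
print(bcs, shp, lrp)
```

Training would adjust `w_hidden` and `w_out` by backpropagating the output error, as described in the text.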
Though the primary objective of this algorithm is to achieve faster convergence, it must also balance the ability of the neural network to respond quickly to changing inputs against the need to generalize. However, for the proper functioning of the neural

network, a well-defined training data set and appropriate values of the learning rate, momentum coefficient etc. are needed. The output nodes have a pure-linear activation function to generate the final estimated values for a given test input.

Properties of Neural networks

The following are some of the most important parameters that need to be carefully chosen for a successful implementation of the neural network for estimating the correct set of tuning parameters, given external factors like workload type, user-load and database size.
1. Initial weights : The choice of initial weights decides the quality of the solution and also how fast the network converges. The initial weights cannot be very high, because the sigmoidal activation function would saturate and not respond to changing inputs. Weight initialization is usually done using the Nguyen-Widrow method [61], as it leads to faster convergence; in the above setup, the weights are randomly assigned through this method.
2. Learning rate : The learning rate affects the convergence of the neural network output, and hence must be carefully chosen. A higher value may lead to faster convergence, but may also result in overshooting the desired output. The typical range of the learning rate is 10^-3 to 10, as has been established through successful implementations. In the above setup a learning rate of 0.05 has been found to be most suitable.
3. Momentum factor : The gradient descent technique used in the Backpropagation approach becomes very slow if the learning rate chosen is small, and oscillates widely if it is chosen to be large. By choosing an appropriate value of the momentum factor, oscillation can be avoided even if a larger learning rate is chosen for faster convergence.
An appropriate value of the momentum factor is used in the above setup.
4. Generalization : A network is said to generalize its learning if it produces the correct expected output for inputs that were not present in the training data set. The backpropagation network is known to generalize well. However, if the number of nodes is large, the network is not only computationally expensive but also fails to generalize over the input values. Hence, the number of nodes should be kept to a minimum. In the above setup, only 10 hidden nodes are used, so that the network is computationally fast and effective in generalization.
5. Training data-set : For the above setup to generate meaningful and useful output, without producing arbitrarily large or absurd (negative) values, the

training data-set must be sufficient and properly framed. The general rule in choosing the training data-set is that it must cover the entire input data space, and training values must be randomly chosen from this set. It is also appropriate to separate the input training data into N disjoint regions, with each disjoint set having a different effect on the output values. In the above setup, four disjoint sets were defined, one for each of the four workload types.
6. Number of hidden nodes : The number of hidden layers and the size of each have to be established experimentally, as there are no rules of thumb to decide these two. The number of nodes in the hidden layers may have to be increased in small steps if the result does not converge fast enough; on the other hand, if the network converges swiftly, the number of nodes may be reduced. In the above setup, a neural network with only 4 hidden nodes was not converging fast and was not generating meaningful output, so the number of nodes was increased to 8. With 8 nodes convergence was faster, but the results for some of the inputs overshot the target, and the network was finally increased to 12 nodes.

4.2 MAPE : A General framework for self-tuning architecture

A general self-optimizing technique works on the principle of the Monitor, Analyze, Plan and Execute (MAPE) feedback control loop, which is part of the autonomic computing architecture proposed by IBM. This general self-tuning architecture is modified to implement a self-tuning architecture for improving the performance of a DBMS, as shown in figure 4.2. In this architecture, the sensor inputs that indicate degradation in system performance are extracted by a sensor module and fed to an Analyzer. The analyzer module examines the sensor inputs and other external factors and estimates the extent of the changes needed in the system.
The planner generates a plan of action based on the input received from the Analyzer. In some specific applications, for example the Neural network based self-tuning system shown in figure 4.2, the analyzer and planner can be one single module that does both the analysis of the input data and the generation of a plan of action. The plan of action is given to the execution unit of MAPE to effect the changes through the effectors.
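The Monitor, Analyze/Plan and Execute stages described above can be sketched as a simple feedback loop. The module names, sensor readings and threshold below are illustrative placeholders, not the actual implementation.

```python
def mape_loop(sense, analyze_and_plan, execute, iterations=1):
    """One or more passes of the Monitor-Analyze-Plan-Execute feedback loop.

    sense            -- reads sensor values from the managed resource (Monitor)
    analyze_and_plan -- combined Analyze+Plan module (here, the neural network's role)
    execute          -- applies the planned changes through the effectors (Execute)
    """
    for _ in range(iterations):
        sensor_values = sense()                  # Monitor
        plan = analyze_and_plan(sensor_values)   # Analyze + Plan
        execute(plan)                            # Execute

# Illustrative stubs: a fixed sensor reading, a threshold-based planner,
# and an "effector" that just records the plan.
readings = {"BHR": 0.82, "user_load": 40, "db_size_mb": 2048}
applied = []
mape_loop(lambda: readings,
          lambda s: {"DB_CACHE": 824} if s["BHR"] < 0.9 else {},
          applied.append)
print(applied)  # [{'DB_CACHE': 824}]
```

In the architecture of figure 4.2, `analyze_and_plan` is realised by the trained neural network and `execute` by the SQL commands that change the memory parameters.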

Figure 4.2 A Self-tuning Architecture based on the MAPE autonomic computing paradigm (Monitor, a Neural Network to estimate the tuning parameters (Analyze + Plan) and Execute modules, backed by a Knowledge-Base of training data, connected through sensor and effector touch points to the managed resource: the DBMS memory components, end-user applications and the database)

Figure 4.3 shows the details of the Neural network based tuning architecture that works on the principle of MAPE. In this setup, the neural network presented in the previous section forms an important part of the entire self-tuning setup. As can be seen from the diagram, the inputs to the Neural network block are extracted from the DBMS using appropriate SQL commands [a.1] and are preprocessed before being fed into the neural network proper. Three parameters act as sensors, namely the user-load N, the Buffer Hit Ratio (BHR) and the Database size (DBS); they are extracted from the DBMS by the sensor data extraction module, which uses simple SQL commands or scripts. The neural network uses the training dataset stored in the knowledge-base for learning and estimates the extent of the change in the tuning parameters needed for a changed workload scenario and/or performance degradation. The estimated values are then used to change the tuning parameter values, namely Buffer_Cache_Size(BCS), Shared_Pool_size(SHP) and Large_Pool_size(LRP), again using appropriate SQL commands.

Figure 4.3 Neural Network based tuning system (the extraction module reads N, BHR and DBS from the DBMS; a pre-processing stage feeds the neural network, whose post-processed outputs Db_Cache_Size(DCS), Large_Pool_Size(LPS) and Shared_Pool_Size(SPS) go to the tuner; the training dataset is maintained by the DBA)

As discussed earlier, for the neural network to generate consistent and meaningful output, it is important to choose an appropriate activation function, learning rate, number of training loops and a sizeable number of nodes in the hidden layer. The proposed neural network based auto-tuner has 10 nodes in the hidden layer, and the learning rate is set to 0.05, with a correspondingly chosen accuracy threshold, for fast convergence and accurate results. For the Feedforward Backpropagation Network (FBN) it is also important to preprocess the input data, as the FBN uses a bipolar activation function. After preprocessing, the values fed to the network lie in the range [-1, +1], as required by the FBN. Similarly, the output of the neural network is given to a post-processor module that maps the output values in the range [-1, +1] back to actual tuning values. The tuner, after receiving the estimated values of the tuning parameters, executes commands to effect the changes to the dynamic tuning parameters, namely DB_Cache, Shared_pool and Large_pool. If the estimated values are arbitrarily large, due to a sudden change in the input workload pattern or a surge in user load, the tuner module may have to moderate the values before changing the tuning parameters, to ensure system stability.
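The pre- and post-processing described above can be sketched as a simple min-max mapping to and from the bipolar range [-1, +1]. The Db_Cache range 32-1296MB is taken from the experiments of section 4.2.1; the function names are illustrative.

```python
def to_bipolar(value, lo, hi):
    """Pre-processing: map a raw value in [lo, hi] linearly onto [-1, +1]."""
    return 2.0 * (value - lo) / (hi - lo) - 1.0

def from_bipolar(scaled, lo, hi):
    """Post-processing: map a network output in [-1, +1] back to a tuning value."""
    return (scaled + 1.0) / 2.0 * (hi - lo) + lo

# Example: Db_Cache varies over 32..1296 MB in the experiments
s = to_bipolar(784, 32, 1296)
print(s, from_bipolar(s, 32, 1296))  # the round trip recovers 784.0
```

Each sensor input and each tuning parameter would use its own [lo, hi] range, so that all values seen by the FBN stay within [-1, +1].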

4.2.1 Training Data-Set generation methodology

As discussed earlier, framing the training dataset is extremely important for the neural network based self-tuning method to accurately predict tuning parameter values appropriate for the given user-load, workload type and database size. Hence, an experiment-based method of evolving the training dataset is presented in this section. The method starts by varying the value of a tuning parameter Tp for a given workload type and user-load and noting the corresponding response time. For instance, as shown in table 4.1, the DB_CACHE tuning parameter is varied from 32MB to 1296MB for a user-load of 5 and workload type TPC-C. In the next step, the value of DB_CACHE at which the response time reaches its minimum is noted; in this case, the response time settles at 7 msec for a Db_Cache_size of 784MB. This value is higher for higher user-loads and scaling factors, for obvious reasons. As can be seen from table 4.1, for a user-load of 20, a scaling factor of 2 and workload type TPC-C, the Db_Cache value at which the response time saturates is 824MB, and for a user-load of 80 it is 1264MB. This experiment was repeated for the scaling factors 4, 8 and 10, and similar tables were constructed. From these tables, the training data-set was constructed as shown in table 4.2, wherein the entries were derived from table 4.1. The process of constructing the training data-set from the experimental data is explained through the following algorithm.
ALGORITHM Gen_Training_DataSet(Exp_Data_Table, maxnum, maxusld, NTps, wkldtypes)
// Constructs the training data-set from Exp_Data_Table, which has maxnum rows
// Input : Exp_Data_Table, a table of response-time values for different tuning
//         parameter values and user-loads; the number of tuning parameters NTps;
//         and the number of workload types wkldtypes
// Output: Training data-set, containing NTps columns (one per tuning parameter);
//         the number of rows depends on the number of readings in Exp_Data_Table
//         and the number of workload types
Begin
  for i = 1 to wkldtypes                       // for each workload type
    for j = 1 to NTps                          // for each tuning parameter
      for k = 1 to maxusld                     // for each user-load
        tp_min = min(Exp_Data_Table, k)        // tp value for which the response time is minimum in column k
        Insert_Data(TrainingDataset, j, tp_min)  // record the value in the training dataset
      next k                                   // repeat for each user-load
    next j                                     // repeat for each tuning parameter
  next i                                       // repeat for each workload type
End

The self-tuning database system architecture proposed in this work uses three tuning parameters and supports four workload types; the user-load range varies with the workload type.

Table 4.1 Effect on Response-time due to increase in Db_Cache
Db_Cache (MB) | R-time(ms), 5 Users | 10 Users | 20 Users | 40 Users | 80 Users | 100 Users

Applying the above procedure to the experimental data in table 4.1 results in the training datasets in Table 4.2 and Table 4.3. Though a single table is used for storing the training data-set of all the workload types, they are shown here as two separate tables for clarity. For instance, the first entry in Table 4.2, for user-load 5, BHR=88.34 and a scaling factor of 2, is selected from Table 4.1 and corresponds to the minimum response-time entry, at a Db_Cache_size of 784MB.

Table 4.2 The Neural Network Training Data Set for the Workload Type TPC-C
N | BHR | DBS | DB_CACHE(MB) | SH_POOL(MB) | LRG_POOL(MB)
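The Gen_Training_DataSet procedure above can be sketched in Python as follows. The data layout, function name and the sample readings are illustrative assumptions, not the actual implementation; the core step is picking, per workload type, tuning parameter and user-load, the parameter value with the minimum measured response time.

```python
def gen_training_dataset(exp_data, workload_types, tuning_params):
    """Build the training data-set from experimental readings.

    exp_data[wkld][tp][user_load] is a list of (tp_value, response_time) pairs,
    mirroring one column of a table like table 4.1.
    """
    training = []
    for wkld in workload_types:              # for each workload type
        for tp in tuning_params:             # for each tuning parameter
            for user_load, readings in sorted(exp_data[wkld][tp].items()):
                # tp value at which the observed response time is minimum
                tp_min = min(readings, key=lambda r: r[1])[0]
                training.append((wkld, user_load, tp, tp_min))
    return training

# Illustrative data: three Db_Cache readings for TPC-C at user-load 5
exp = {"TPC-C": {"DB_CACHE": {5: [(32, 41), (784, 7), (1296, 7)]}}}
print(gen_training_dataset(exp, ["TPC-C"], ["DB_CACHE"]))
# [('TPC-C', 5, 'DB_CACHE', 784)]
```

Note that `min` returns the first minimum, so the smallest parameter value at which the response time saturates is kept, matching the selection rule described in the text.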

Similarly, for a user-load of 20, the corresponding Db_Cache value of 824MB from table 4.1 is chosen and entered in table 4.2. Since the training data in the tables are derived from experimental outcomes, and not from empirical formulas or approximate models, the neural network trained on these values can accurately predict the tuning parameter values. Table 4.2 shows the sample training dataset for the TPC-C workload type and scaling factors 2 and 5. As can be seen from the table, the DB_CACHE and SH_POOL tuning parameters are set to progressively higher values as the user-load increases, for both the TPC-C and TPC-H workloads. LARGE_POOL, however, is kept constant, as it has no effect on the response time for this type of workload. It is to be noted that the training data shown in tables 4.2 and 4.3 are only sample datasets; the complete dataset used in the experimentation has 234 rows.

Table 4.3 Training dataset for the TPC-H workload type
N | BHR | DBS | DB_CACHE(MB) | SH_POOL(MB) | LRG_POOL(MB)

It is also important to invoke the self-tuning system at regular intervals, based on certain criteria. The self-tuning can be invoked when the user-load increases by a constant; for example, the self-tuning module can be run at every 5-user interval to check for a noticeable change in performance. However, an increase in user-load alone may not result in performance deterioration, as the queries generated by the users may not cause major disk activity. A change in BHR, on the other hand, is a good indication that a lot of disk activity is taking place. Hence, a user-load change of 5-10 users or a 10-20% change in BHR can be used to trigger the tuning action. A user-load change of 10 and a 20% change in BHR are used to trigger the tuning in the validation of the results presented in the next section.
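The trigger criterion described above can be sketched as follows, using the thresholds stated in the text (a user-load change of 10 and a 20% change in BHR); the function and field names are illustrative.

```python
def should_retune(prev, curr, user_delta=10, bhr_delta_pct=20.0):
    """Trigger self-tuning when the user-load grows by at least `user_delta`
    users, or the Buffer-Hit-Ratio changes by at least `bhr_delta_pct` percent
    relative to the previous reading."""
    user_jump = curr["user_load"] - prev["user_load"] >= user_delta
    bhr_change = abs(curr["BHR"] - prev["BHR"]) / prev["BHR"] * 100.0 >= bhr_delta_pct
    return user_jump or bhr_change

print(should_retune({"user_load": 20, "BHR": 0.90},
                    {"user_load": 25, "BHR": 0.70}))  # True: BHR fell by ~22%
```

Checking BHR as well as user-load implements the observation above that a rising user count alone need not imply extra disk activity.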

4.4.2 Results & Analysis

The proposed adaptive self-tuning technique based on the Neural network approach has been tested for two workload types, namely TPC-C and TPC-E.

Figure 4.4 : Query response time v/s User-load for the TPC-C Workload, SF=2 (response time in ms against user-load, for the Neural-network-based tuner and Oracle auto-tuning)

As can be seen from figure 4.4, the performance of the Neural network based tuning was found to be 27% and 16.3% better than the auto-tuning feature of Oracle 10g over the entire user-load range for the TPC-C workload type, at scaling factors of 2 and 5 respectively. The up-and-down behavior of the response time in figures 4.4 and 4.5 arises because the individual transactions of the workload take different amounts of time to execute, according to their level of complexity. The improvement in performance can be ascribed to the ability of the neural network to learn from past data to predict appropriate values of the tuning parameters, and also to its ability to effectively utilize all the subcomponents of the system memory areas. The drop in improvement from 27% to 16.3% is, however, on expected lines: as the database size increases, the number of disk accesses also increases.

Figure 4.5 : Query response time v/s User-load for the TPC-C Workload, SF=5 (response time in msec against user-load, for auto-tuning and the Neural-network-based tuner)

Figure 4.6 Query Response time v/s User-load : TPC-E workload, SF=1

Figure 4.6 shows the result of applying the proposed tuning method when the DBMS is presented with a TPC-E workload at scaling factor 1. Here again there is a 32.8% improvement, which is significant compared to the auto-tuning feature, and it holds throughout the user-load range. The figure also shows the effect of the scaling factor on response time; it is evident that the proposed self-tuning method is scalable.

Summary

In this chapter, a new adaptive self-tuning technique based on a machine learning approach using a Neural network is presented. In this setup, the sensor values, after extraction, are

pre-processed before being fed to the Neural network input. A three-layer neural network with Feedforward Backpropagation learning is used in the implementation. Though the experimental setup is small compared to the large databases [114] found in business enterprises, self-tuning is most important in small and medium enterprises, where employing an expert DBA for manual tuning is very expensive; and in small and medium enterprises the database sizes are comparable to those used in the experimental setup. The training dataset is derived from a series of experiments, and the methodology for constructing the training data-set has been presented. The self-tuning method has been validated for two workload types, namely TPC-C and TPC-E, and the results were found to be better than those of the auto-tuning feature of Oracle 10g. The performance improvement varies from workload to workload and also depends on the scaling factor; for the TPC-C and TPC-E workload types it was 27% and 32.8% respectively.



PERFORMANCE OF GRID COMPUTING FOR DISTRIBUTED NEURAL NETWORK. Submitted By:Mohnish Malviya & Suny Shekher Pankaj [CSE,7 TH SEM] PERFORMANCE OF GRID COMPUTING FOR DISTRIBUTED NEURAL NETWORK Submitted By:Mohnish Malviya & Suny Shekher Pankaj [CSE,7 TH SEM] All Saints` College Of Technology, Gandhi Nagar, Bhopal. Abstract: In this

More information

The k-means Algorithm and Genetic Algorithm

The k-means Algorithm and Genetic Algorithm The k-means Algorithm and Genetic Algorithm k-means algorithm Genetic algorithm Rough set approach Fuzzy set approaches Chapter 8 2 The K-Means Algorithm The K-Means algorithm is a simple yet effective

More information

CS317 File and Database Systems

CS317 File and Database Systems CS317 File and Database Systems Lecture 9 Intro to Physical DBMS Design October 22, 2017 Sam Siewert Reminders Assignment #4 Due Friday, Monday Late Assignment #3 Returned Assignment #5, B-Trees and Physical

More information

CSC 578 Neural Networks and Deep Learning

CSC 578 Neural Networks and Deep Learning CSC 578 Neural Networks and Deep Learning Fall 2018/19 7. Recurrent Neural Networks (Some figures adapted from NNDL book) 1 Recurrent Neural Networks 1. Recurrent Neural Networks (RNNs) 2. RNN Training

More information

COMPUTATIONAL INTELLIGENCE SEW (INTRODUCTION TO MACHINE LEARNING) SS18. Lecture 6: k-nn Cross-validation Regularization

COMPUTATIONAL INTELLIGENCE SEW (INTRODUCTION TO MACHINE LEARNING) SS18. Lecture 6: k-nn Cross-validation Regularization COMPUTATIONAL INTELLIGENCE SEW (INTRODUCTION TO MACHINE LEARNING) SS18 Lecture 6: k-nn Cross-validation Regularization LEARNING METHODS Lazy vs eager learning Eager learning generalizes training data before

More information

What is Data Mining? Data Mining. Data Mining Architecture. Illustrative Applications. Pharmaceutical Industry. Pharmaceutical Industry

What is Data Mining? Data Mining. Data Mining Architecture. Illustrative Applications. Pharmaceutical Industry. Pharmaceutical Industry Data Mining Andrew Kusiak Intelligent Systems Laboratory 2139 Seamans Center The University of Iowa Iowa City, IA 52242-1527 andrew-kusiak@uiowa.edu http://www.icaen.uiowa.edu/~ankusiak Tel. 319-335 5934

More information

Machine Learning and Bioinformatics 機器學習與生物資訊學

Machine Learning and Bioinformatics 機器學習與生物資訊學 Molecular Biomedical Informatics 分子生醫資訊實驗室 機器學習與生物資訊學 Machine Learning & Bioinformatics 1 Evaluation The key to success 2 Three datasets of which the answers must be known 3 Note on parameter tuning It

More information

Knowledge Discovery and Data Mining. Neural Nets. A simple NN as a Mathematical Formula. Notes. Lecture 13 - Neural Nets. Tom Kelsey.

Knowledge Discovery and Data Mining. Neural Nets. A simple NN as a Mathematical Formula. Notes. Lecture 13 - Neural Nets. Tom Kelsey. Knowledge Discovery and Data Mining Lecture 13 - Neural Nets Tom Kelsey School of Computer Science University of St Andrews http://tom.home.cs.st-andrews.ac.uk twk@st-andrews.ac.uk Tom Kelsey ID5059-13-NN

More information

Artificial Intelligence. Programming Styles

Artificial Intelligence. Programming Styles Artificial Intelligence Intro to Machine Learning Programming Styles Standard CS: Explicitly program computer to do something Early AI: Derive a problem description (state) and use general algorithms to

More information

Knowledge Discovery and Data Mining

Knowledge Discovery and Data Mining Knowledge Discovery and Data Mining Lecture 13 - Neural Nets Tom Kelsey School of Computer Science University of St Andrews http://tom.home.cs.st-andrews.ac.uk twk@st-andrews.ac.uk Tom Kelsey ID5059-13-NN

More information

4. Feedforward neural networks. 4.1 Feedforward neural network structure

4. Feedforward neural networks. 4.1 Feedforward neural network structure 4. Feedforward neural networks 4.1 Feedforward neural network structure Feedforward neural network is one of the most common network architectures. Its structure and some basic preprocessing issues required

More information

Data Set. What is Data Mining? Data Mining (Big Data Analytics) Illustrative Applications. What is Knowledge Discovery?

Data Set. What is Data Mining? Data Mining (Big Data Analytics) Illustrative Applications. What is Knowledge Discovery? Data Mining (Big Data Analytics) Andrew Kusiak Intelligent Systems Laboratory 2139 Seamans Center The University of Iowa Iowa City, IA 52242-1527 andrew-kusiak@uiowa.edu http://user.engineering.uiowa.edu/~ankusiak/

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

Consolidating OLTP Workloads on Dell PowerEdge R th generation Servers

Consolidating OLTP Workloads on Dell PowerEdge R th generation Servers Consolidating OLTP Workloads on Dell PowerEdge R720 12 th generation Servers B Balamurugan Phani MV Dell Database Solutions Engineering March 2012 This document is for informational purposes only and may

More information

Supervised Learning with Neural Networks. We now look at how an agent might learn to solve a general problem by seeing examples.

Supervised Learning with Neural Networks. We now look at how an agent might learn to solve a general problem by seeing examples. Supervised Learning with Neural Networks We now look at how an agent might learn to solve a general problem by seeing examples. Aims: to present an outline of supervised learning as part of AI; to introduce

More information

Machine Learning Classifiers and Boosting

Machine Learning Classifiers and Boosting Machine Learning Classifiers and Boosting Reading Ch 18.6-18.12, 20.1-20.3.2 Outline Different types of learning problems Different types of learning algorithms Supervised learning Decision trees Naïve

More information

Linear Regression & Gradient Descent

Linear Regression & Gradient Descent Linear Regression & Gradient Descent These slides were assembled by Byron Boots, with grateful acknowledgement to Eric Eaton and the many others who made their course materials freely available online.

More information

USING IMAGES PATTERN RECOGNITION AND NEURAL NETWORKS FOR COATING QUALITY ASSESSMENT Image processing for quality assessment

USING IMAGES PATTERN RECOGNITION AND NEURAL NETWORKS FOR COATING QUALITY ASSESSMENT Image processing for quality assessment USING IMAGES PATTERN RECOGNITION AND NEURAL NETWORKS FOR COATING QUALITY ASSESSMENT Image processing for quality assessment L.-M. CHANG and Y.A. ABDELRAZIG School of Civil Engineering, Purdue University,

More information

The Self-Managing Database: Automatic SGA Memory Management. An Oracle White Paper Nov. 2003

The Self-Managing Database: Automatic SGA Memory Management. An Oracle White Paper Nov. 2003 The Self-Managing Database: Automatic SGA Memory Management An Oracle White Paper Nov. 2003 The Self-Managing Database: Automatic SGA Memory Management Introduction... 3 Current Challenges... 3 Introducing

More information

Embedded Technosolutions

Embedded Technosolutions Hadoop Big Data An Important technology in IT Sector Hadoop - Big Data Oerie 90% of the worlds data was generated in the last few years. Due to the advent of new technologies, devices, and communication

More information

Artificial Neural Network based Curve Prediction

Artificial Neural Network based Curve Prediction Artificial Neural Network based Curve Prediction LECTURE COURSE: AUSGEWÄHLTE OPTIMIERUNGSVERFAHREN FÜR INGENIEURE SUPERVISOR: PROF. CHRISTIAN HAFNER STUDENTS: ANTHONY HSIAO, MICHAEL BOESCH Abstract We

More information

Fit for Purpose Platform Positioning and Performance Architecture

Fit for Purpose Platform Positioning and Performance Architecture Fit for Purpose Platform Positioning and Performance Architecture Joe Temple IBM Monday, February 4, 11AM-12PM Session Number 12927 Insert Custom Session QR if Desired. Fit for Purpose Categorized Workload

More information

Neuro-Fuzzy Inverse Forward Models

Neuro-Fuzzy Inverse Forward Models CS9 Autumn Neuro-Fuzzy Inverse Forward Models Brian Highfill Stanford University Department of Computer Science Abstract- Internal cognitive models are useful methods for the implementation of motor control

More information

Optimizing Number of Hidden Nodes for Artificial Neural Network using Competitive Learning Approach

Optimizing Number of Hidden Nodes for Artificial Neural Network using Competitive Learning Approach Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 5, May 2015, pg.358

More information

Edge Classification in Networks

Edge Classification in Networks Charu C. Aggarwal, Peixiang Zhao, and Gewen He Florida State University IBM T J Watson Research Center Edge Classification in Networks ICDE Conference, 2016 Introduction We consider in this paper the edge

More information

CS 4510/9010 Applied Machine Learning

CS 4510/9010 Applied Machine Learning CS 4510/9010 Applied Machine Learning Neural Nets Paula Matuszek Spring, 2015 1 Neural Nets, the very short version A neural net consists of layers of nodes, or neurons, each of which has an activation

More information

Programming Exercise 4: Neural Networks Learning

Programming Exercise 4: Neural Networks Learning Programming Exercise 4: Neural Networks Learning Machine Learning Introduction In this exercise, you will implement the backpropagation algorithm for neural networks and apply it to the task of hand-written

More information

What is Data Mining? Data Mining. Data Mining Architecture. Illustrative Applications. Pharmaceutical Industry. Pharmaceutical Industry

What is Data Mining? Data Mining. Data Mining Architecture. Illustrative Applications. Pharmaceutical Industry. Pharmaceutical Industry Data Mining Andrew Kusiak Intelligent Systems Laboratory 2139 Seamans Center The University it of Iowa Iowa City, IA 52242-1527 andrew-kusiak@uiowa.edu http://www.icaen.uiowa.edu/~ankusiak Tel. 319-335

More information

Multi-label classification using rule-based classifier systems

Multi-label classification using rule-based classifier systems Multi-label classification using rule-based classifier systems Shabnam Nazmi (PhD candidate) Department of electrical and computer engineering North Carolina A&T state university Advisor: Dr. A. Homaifar

More information

Topics covered 10/12/2015. Pengantar Teknologi Informasi dan Teknologi Hijau. Suryo Widiantoro, ST, MMSI, M.Com(IS)

Topics covered 10/12/2015. Pengantar Teknologi Informasi dan Teknologi Hijau. Suryo Widiantoro, ST, MMSI, M.Com(IS) Pengantar Teknologi Informasi dan Teknologi Hijau Suryo Widiantoro, ST, MMSI, M.Com(IS) 1 Topics covered 1. Basic concept of managing files 2. Database management system 3. Database models 4. Data mining

More information

Build a system health check for Db2 using IBM Machine Learning for z/os

Build a system health check for Db2 using IBM Machine Learning for z/os Build a system health check for Db2 using IBM Machine Learning for z/os Jonathan Sloan Senior Analytics Architect, IBM Analytics Agenda A brief machine learning overview The Db2 ITOA model solutions template

More information

Deep Learning. Practical introduction with Keras JORDI TORRES 27/05/2018. Chapter 3 JORDI TORRES

Deep Learning. Practical introduction with Keras JORDI TORRES 27/05/2018. Chapter 3 JORDI TORRES Deep Learning Practical introduction with Keras Chapter 3 27/05/2018 Neuron A neural network is formed by neurons connected to each other; in turn, each connection of one neural network is associated

More information

Liquefaction Analysis in 3D based on Neural Network Algorithm

Liquefaction Analysis in 3D based on Neural Network Algorithm Liquefaction Analysis in 3D based on Neural Network Algorithm M. Tolon Istanbul Technical University, Turkey D. Ural Istanbul Technical University, Turkey SUMMARY: Simplified techniques based on in situ

More information

Neural Networks (pp )

Neural Networks (pp ) Notation: Means pencil-and-paper QUIZ Means coding QUIZ Neural Networks (pp. 106-121) The first artificial neural network (ANN) was the (single-layer) perceptron, a simplified model of a biological neuron.

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Improving the way neural networks learn Srikumar Ramalingam School of Computing University of Utah

Improving the way neural networks learn Srikumar Ramalingam School of Computing University of Utah Improving the way neural networks learn Srikumar Ramalingam School of Computing University of Utah Reference Most of the slides are taken from the third chapter of the online book by Michael Nielson: neuralnetworksanddeeplearning.com

More information

COMPUTATIONAL INTELLIGENCE

COMPUTATIONAL INTELLIGENCE COMPUTATIONAL INTELLIGENCE Radial Basis Function Networks Adrian Horzyk Preface Radial Basis Function Networks (RBFN) are a kind of artificial neural networks that use radial basis functions (RBF) as activation

More information

Neural Networks. Robot Image Credit: Viktoriya Sukhanova 123RF.com

Neural Networks. Robot Image Credit: Viktoriya Sukhanova 123RF.com Neural Networks These slides were assembled by Eric Eaton, with grateful acknowledgement of the many others who made their course materials freely available online. Feel free to reuse or adapt these slides

More information

9. Conclusions. 9.1 Definition KDD

9. Conclusions. 9.1 Definition KDD 9. Conclusions Contents of this Chapter 9.1 Course review 9.2 State-of-the-art in KDD 9.3 KDD challenges SFU, CMPT 740, 03-3, Martin Ester 419 9.1 Definition KDD [Fayyad, Piatetsky-Shapiro & Smyth 96]

More information

CS 4510/9010 Applied Machine Learning. Neural Nets. Paula Matuszek Fall copyright Paula Matuszek 2016

CS 4510/9010 Applied Machine Learning. Neural Nets. Paula Matuszek Fall copyright Paula Matuszek 2016 CS 4510/9010 Applied Machine Learning 1 Neural Nets Paula Matuszek Fall 2016 Neural Nets, the very short version 2 A neural net consists of layers of nodes, or neurons, each of which has an activation

More information

Artificial Neural Network and Multi-Response Optimization in Reliability Measurement Approximation and Redundancy Allocation Problem

Artificial Neural Network and Multi-Response Optimization in Reliability Measurement Approximation and Redundancy Allocation Problem International Journal of Mathematics and Statistics Invention (IJMSI) E-ISSN: 2321 4767 P-ISSN: 2321-4759 Volume 4 Issue 10 December. 2016 PP-29-34 Artificial Neural Network and Multi-Response Optimization

More information

Neural Network Neurons

Neural Network Neurons Neural Networks Neural Network Neurons 1 Receives n inputs (plus a bias term) Multiplies each input by its weight Applies activation function to the sum of results Outputs result Activation Functions Given

More information

Predict the Likelihood of Responding to Direct Mail Campaign in Consumer Lending Industry

Predict the Likelihood of Responding to Direct Mail Campaign in Consumer Lending Industry Predict the Likelihood of Responding to Direct Mail Campaign in Consumer Lending Industry Jincheng Cao, SCPD Jincheng@stanford.edu 1. INTRODUCTION When running a direct mail campaign, it s common practice

More information

The Role of Database Aware Flash Technologies in Accelerating Mission- Critical Databases

The Role of Database Aware Flash Technologies in Accelerating Mission- Critical Databases The Role of Database Aware Flash Technologies in Accelerating Mission- Critical Databases Gurmeet Goindi Principal Product Manager Oracle Flash Memory Summit 2013 Santa Clara, CA 1 Agenda Relational Database

More information

IBM EXAM - C DB Fundamentals. Buy Full Product.

IBM EXAM - C DB Fundamentals. Buy Full Product. IBM EXAM - C2090-610 DB2 10.1 Fundamentals Buy Full Product http://www.examskey.com/c2090-610.html Examskey IBM C2090-610 exam demo product is here for you to test the quality of the product. This IBM

More information

Practical Database Design Methodology and Use of UML Diagrams Design & Analysis of Database Systems

Practical Database Design Methodology and Use of UML Diagrams Design & Analysis of Database Systems Practical Database Design Methodology and Use of UML Diagrams 406.426 Design & Analysis of Database Systems Jonghun Park jonghun@snu.ac.kr Dept. of Industrial Engineering Seoul National University chapter

More information

Clustering: Classic Methods and Modern Views

Clustering: Classic Methods and Modern Views Clustering: Classic Methods and Modern Views Marina Meilă University of Washington mmp@stat.washington.edu June 22, 2015 Lorentz Center Workshop on Clusters, Games and Axioms Outline Paradigms for clustering

More information

Oracle 11g AMM Inderpal S. Johal. Inderpal S. Johal, Data Softech Inc.

Oracle 11g AMM  Inderpal S. Johal. Inderpal S. Johal, Data Softech Inc. ORACLE 11G AUTOMATIC MEMORY MANAGEMENT Inderpal S. Johal, Data Softech Inc. INTRODUCTION Oracle has introduced Automatic Shared Memory Management in Oracle 10g and thus allows automatic tuning of five

More information

CS229 Final Project: Predicting Expected Response Times

CS229 Final Project: Predicting Expected  Response Times CS229 Final Project: Predicting Expected Email Response Times Laura Cruz-Albrecht (lcruzalb), Kevin Khieu (kkhieu) December 15, 2017 1 Introduction Each day, countless emails are sent out, yet the time

More information

Copyright 2018, Oracle and/or its affiliates. All rights reserved.

Copyright 2018, Oracle and/or its affiliates. All rights reserved. Beyond SQL Tuning: Insider's Guide to Maximizing SQL Performance Monday, Oct 22 10:30 a.m. - 11:15 a.m. Marriott Marquis (Golden Gate Level) - Golden Gate A Ashish Agrawal Group Product Manager Oracle

More information

Fast Learning for Big Data Using Dynamic Function

Fast Learning for Big Data Using Dynamic Function IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Fast Learning for Big Data Using Dynamic Function To cite this article: T Alwajeeh et al 2017 IOP Conf. Ser.: Mater. Sci. Eng.

More information

Natural Language Processing CS 6320 Lecture 6 Neural Language Models. Instructor: Sanda Harabagiu

Natural Language Processing CS 6320 Lecture 6 Neural Language Models. Instructor: Sanda Harabagiu Natural Language Processing CS 6320 Lecture 6 Neural Language Models Instructor: Sanda Harabagiu In this lecture We shall cover: Deep Neural Models for Natural Language Processing Introduce Feed Forward

More information

Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs

Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs Baoning Niu, Patrick Martin, Wendy Powley School of Computing, Queen s University Kingston, Ontario, Canada, K7L 3N6 {niu martin wendy}@cs.queensu.ca

More information

Oracle 1Z0-054 Exam Questions and Answers (PDF) Oracle 1Z0-054 Exam Questions 1Z0-054 BrainDumps

Oracle 1Z0-054 Exam Questions and Answers (PDF) Oracle 1Z0-054 Exam Questions 1Z0-054 BrainDumps Oracle 1Z0-054 Dumps with Valid 1Z0-054 Exam Questions PDF [2018] The Oracle 1Z0-054 Oracle Database 11g: Performance Tuning exam is an ultimate source for professionals to retain their credentials dynamic.

More information

Lecture 2 Notes. Outline. Neural Networks. The Big Idea. Architecture. Instructors: Parth Shah, Riju Pahwa

Lecture 2 Notes. Outline. Neural Networks. The Big Idea. Architecture. Instructors: Parth Shah, Riju Pahwa Instructors: Parth Shah, Riju Pahwa Lecture 2 Notes Outline 1. Neural Networks The Big Idea Architecture SGD and Backpropagation 2. Convolutional Neural Networks Intuition Architecture 3. Recurrent Neural

More information

732A54/TDDE31 Big Data Analytics

732A54/TDDE31 Big Data Analytics 732A54/TDDE31 Big Data Analytics Lecture 10: Machine Learning with MapReduce Jose M. Peña IDA, Linköping University, Sweden 1/27 Contents MapReduce Framework Machine Learning with MapReduce Neural Networks

More information

Germán Llort

Germán Llort Germán Llort gllort@bsc.es >10k processes + long runs = large traces Blind tracing is not an option Profilers also start presenting issues Can you even store the data? How patient are you? IPDPS - Atlanta,

More information

27: Hybrid Graphical Models and Neural Networks

27: Hybrid Graphical Models and Neural Networks 10-708: Probabilistic Graphical Models 10-708 Spring 2016 27: Hybrid Graphical Models and Neural Networks Lecturer: Matt Gormley Scribes: Jakob Bauer Otilia Stretcu Rohan Varma 1 Motivation We first look

More information

Machine Learning Applications for Data Center Optimization

Machine Learning Applications for Data Center Optimization Machine Learning Applications for Data Center Optimization Jim Gao, Google Ratnesh Jamidar Indian Institute of Technology, Kanpur October 27, 2014 Outline Introduction Methodology General Background Model

More information

Machine Learning 13. week

Machine Learning 13. week Machine Learning 13. week Deep Learning Convolutional Neural Network Recurrent Neural Network 1 Why Deep Learning is so Popular? 1. Increase in the amount of data Thanks to the Internet, huge amount of

More information

Configuration changes such as conversion from a single instance to RAC, ASM, etc.

Configuration changes such as conversion from a single instance to RAC, ASM, etc. Today, enterprises have to make sizeable investments in hardware and software to roll out infrastructure changes. For example, a data center may have an initiative to move databases to a low cost computing

More information

Neural Networks. By Laurence Squires

Neural Networks. By Laurence Squires Neural Networks By Laurence Squires Machine learning What is it? Type of A.I. (possibly the ultimate A.I.?!?!?!) Algorithms that learn how to classify data The algorithms slowly change their own variables

More information

1 Dulcian, Inc., 2001 All rights reserved. Oracle9i Data Warehouse Review. Agenda

1 Dulcian, Inc., 2001 All rights reserved. Oracle9i Data Warehouse Review. Agenda Agenda Oracle9i Warehouse Review Dulcian, Inc. Oracle9i Server OLAP Server Analytical SQL Mining ETL Infrastructure 9i Warehouse Builder Oracle 9i Server Overview E-Business Intelligence Platform 9i Server:

More information

Analytics Research Internship at Hewlett Packard Labs

Analytics Research Internship at Hewlett Packard Labs Analytics Research Internship at Hewlett Packard Labs Stefanie Deo Mentor: Mehran Kafai September 12, 2016 First, another opportunity that came my way but didn t pan out: Data Science Internship at Intuit!

More information

A Reflective Database-Oriented Framework for Autonomic Managers

A Reflective Database-Oriented Framework for Autonomic Managers A Reflective Database-Oriented Framework for Autonomic Managers Wendy Powley and Pat Martin School of Computing, Queen s University Kingston, ON Canada {wendy, martin}@cs.queensu.ca Abstract The trend

More information

Neuron Selectivity as a Biologically Plausible Alternative to Backpropagation

Neuron Selectivity as a Biologically Plausible Alternative to Backpropagation Neuron Selectivity as a Biologically Plausible Alternative to Backpropagation C.J. Norsigian Department of Bioengineering cnorsigi@eng.ucsd.edu Vishwajith Ramesh Department of Bioengineering vramesh@eng.ucsd.edu

More information

Oracle EXAM - 1Z Oracle Database 11g: Performance Tuning. Buy Full Product.

Oracle EXAM - 1Z Oracle Database 11g: Performance Tuning. Buy Full Product. Oracle EXAM - 1Z0-054 Oracle Database 11g: Performance Tuning Buy Full Product http://www.examskey.com/1z0-054.html Examskey Oracle 1Z0-054 exam demo product is here for you to test the quality of the

More information