POST ALARM ANALYSIS USING ACTIVE CONNECTION


POST ALARM ANALYSIS USING ACTIVE CONNECTION A Thesis submitted to the Division of Research and Advanced Studies of the University of Cincinnati in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE (M.S.) Department of Electrical Engineering of the College of Engineering & Applied Science 2013 By Vindhya Vodela B.E., Osmania University 2010 Committee Chair: Arthur J. Helmicki, Ph.D.

Abstract: UCII has been involved with Bridge Monitoring for over two decades, and a Bridge Health Monitoring System for the Ironton Russell Bridge was developed and is being maintained by UCII. The Bridge Monitoring System collects data from the sensors placed on the bridge and sends real time updates regarding any abnormal behavior. Whenever abnormal behavior is detected, alarms are sent out via email, and in order to better understand the alarm scenario, the feature Active Connection was developed. Active Connection involves establishing a dynamic connection to the system on the bridge, which makes two way communication possible to get additional data points when the alarms occur. By utilizing this connection, additional data points are obtained and analyzed using sequential analysis, and a severity level for the alarm is declared. This feature of active connection is helpful in better assessing the alarm situation and improving the reliability of the bridge monitoring system on the Ironton Russell Bridge.


Table of Contents:
Chapter 1: Introduction & Problem Statement
  Section 1.1: Overview of the Monitoring System at UCII
  Section 1.2: Ironton-Russell Bridge
  Section 1.3: Bridge Monitoring System
  Section 1.4: Active Connection
Chapter 2: Literature Review
Chapter 3: Sequential Analysis
  Section 3.1: Sequential Analysis for Active Connection
  Section 3.2: Active Connection
  Section 3.3: Sequential Probability Ratio Test
    Section 3.3.1: Gaussian Distribution for the Gages on Ironton Russell Bridge
  Section 3.4: Choosing the Limits A & B
  Section 3.5: Probability Calculations
Chapter 4: Implementation of Active Connection
  Section 4.1: Overview of Monitoring System on Ironton-Russell Bridge
  Section 4.2: Hardware Upgrade
    Section 4.2.1: Vibrating Wire Gage
    Section 4.2.2: Multiplexer
    Section 4.2.3: Data Logger
    Section 4.2.4: Network Device
    Section 4.2.5: Digital Signal Processing Equipment
  Section 4.3: Programming Data Acquisition System
    Section 4.3.1: CRBasic Programming Language
  Section 4.4: Layout of the Equipment
    Section 4.4.1: SDI-12 Protocol
  Section 4.5: Implementation
  Section 4.6: Timeline for Processing
  Section 4.7: Active Connection & Equipment Upgrade
  Section 4.8: Processes Involved in Implementation of Active Connection
  Section 4.9: Results
Chapter 5: Conclusions & Future Work
  Section 5.1: Conclusions
  Section 5.2: Future Work
Bibliography
Appendix

List of Figures:
Figure 1.1 Ironton Russell Bridge
Figure 1.2 Monitoring System Overview for Ironton Russell Bridge
Figure 1.3 Active Connection Overview
Figure 2.1 Reliability Curve during Useful Life for a Structure
Figure 3.1 Alarm on Gage L23L24TW_OH due to Temperature Changes
Figure 3.2 Linear Regression for a Gage which has an Outlier
Figure 3.3 Residual Distribution for Gage L18U18S_KY (along with its shifted version at Delta of Concern)
Figure 3.4 Residual Distribution for Gage L11L12TE_OH (along with its shifted version at Delta of Concern)
Figure 3.5 Representation of Decision Error
Figure 3.6 Illustration of Residual & Alarm Distribution
Figure 3.7 Table Showing Delta of Concern for Different Members of the Ironton Russell Bridge
Figure 3.8 Plot of the Log Likelihood Ratio for the Member U6U7_KY (corresponding to the Delta of Concern)
Figure 3.9 Plot of the Log Likelihood Ratio for the Member U6U7_KY (corresponding to 16*sigma)
Figure 4.1 Gage Locations for Ironton Russell Bridge
Figure 4.2 Vibrating Wire Gage
Figure 4.3 Multiplexer
Figure 4.4 Data Logger Upgrade
Figure 4.5 Network Device Upgrade
Figure 4.6 Signal Processing Upgrade
Figure 4.7 Diagnostic Tool Frequency Response of Signal on the Gage
Figure 4.8 Wiring Diagram for Data Acquisition System
Figure 4.9 SDI-12 Bus
Figure 4.10 Timeline for Processing
Figure 4.11 Flow Chart for Active Connection
Figure 4.12 Severity Levels
Figure 4.13 Plot of the Log Likelihood Ratio for the Member U6U7_KY (corresponding to 4*sigma)
Figure 4.14 Strain Graph for Gage L14U14S_KY [4*sigma Violation]
Figure 4.15 Active Connection Data for L14U14S_KY
Figure 4.16 Strain Graph for M5L6B_OH [4*sigma Violation]
Figure 4.17 Active Connection Data for M5L6B_OH

CHAPTER 1 Introduction and Problem Statement

1.1 Overview of the Monitoring System at UCII: UCII has been involved with Bridge Health Monitoring for over two decades, and the objective of Bridge Monitoring is to find abnormalities in the functioning of the bridge and thereby detect any changes or damage conditions at the bridge. Since its inception, UCII has been involved in the testing and evaluation of several structures, including more than 60 bridges in the state of Ohio. UCII has developed and applied a number of unique experimental and analytic tools in the evaluation of both laboratory model and full-scale civil infrastructure systems, including modal testing, truckload testing, and field calibrated finite element modeling. [1]

1.2 Ironton Russell Bridge: The Ironton Russell Bridge was opened in 1922 across the Ohio River. The bridge is about 2400 feet long, has two lanes, and connects the cities of Ironton, OH and Russell, KY. The Ironton Russell Bridge is shown in Figure 1.1. The main span consists of a three-span cantilever through truss supported by five concrete piers and a suspended center span [3]. A long term bridge monitoring system was developed by UCII for the Ironton Russell Bridge.

Figure 1.1 Ironton Russell Bridge [3]

1.3 Bridge Monitoring System: Data Acquisition, Data Cleansing and Data Analysis are the steps in this process. All three processes are important in the system and all three are interdependent. The Bridge Monitoring System collects the data from the sensors on the bridge, checks for any invalid data, and immediately processes the cleansed data. It can be considered a real time system because data collection and processing are done at scheduled times and the results of this processing are immediately available for use by ODOT officials. It is very useful because the system can alert the officials as soon as a change or damage is detected. Data Acquisition involves collecting data from the sensors (which are placed at specific members of the bridge) at periodic intervals and storing it in a database for further processing in Data Analysis. Data Cleansing ensures that there is no data corruption and that all invalid or noisy data is removed before the data is processed and analyzed to get information. Hence it is

important to cleanse the data to ensure that the data is not corrupt or invalid. To ensure that the data is valid, at various parts of the data collection system we have check points which validate the data. This way the data is cleansed and only cleansed data is used for processing.

FIGURE 1.2: MONITORING SYSTEM OVERVIEW FOR IRONTON RUSSELL BRIDGE [2]

Data Analysis is the essence of this process, as it involves defining the normal behavior and making the decisions related to abnormalities. The present algorithm, Linear Regression, predicts a value based on the previous month's behavior and compares the actual value to the predicted value, thereby making a decision regarding its abnormality. When an abnormality is detected according to the algorithm, alarms are sent out to the officials. Figure 1.2 gives a brief overview of the monitoring system on the Ironton-Russell Bridge. [2]

At UCII, a web-based bridge monitor was developed by Greg Kimmel for the Ironton-Russell Bridge which automatically collects the strain and temperature from the gages and stores them in a MySQL database, detects anomalies by using a Linear Regression technique, and provides an easy to use interface for plotting and viewing the readings on a website. [3] Improvements have been made to the system developed by Kimmel by implementing a software framework which is programmed in the Python programming language and runs on the servers at UCII. The web framework is presently implemented in the web based programming language PHP. The data acquisition system on the bridge consists of dataloggers, multiplexers, sensors and a networking device. The data acquisition system collects the data from the 66 different vibrating wire gages placed at various locations on the Ironton Russell Bridge and sends it to the servers at UCII via a TCP/IP connection. Before Active Connection was implemented, this was a passive connection where the system was programmed to collect and send data to the servers every 30 minutes. This is depicted in Figure 1.2. Once the data is collected on the server, various checks in the processing flow ensure that the data is valid, and once the data is validated, a prediction is made for the strain measurement of the gage based on the linear regression method and the temperature of the gage at that particular time. The predicted value is compared to the measured value and an assessment is made whether or not there is an anomaly in the behavior of the vibrating wire gage; if there is an anomaly in the behavior of the gage, then an email is sent out to the ODOT officials, as illustrated in Figure 1.2. Also, all the information and the calculations made are stored in the MySQL database and displayed on the website.

1.4 ACTIVE CONNECTION: This thesis is mainly focused on understanding the scenario in which alarms occur and on providing a feature called Active Connection to the existing system which can give additional information in the event of an alarm.

FIGURE 1.3: ACTIVE CONNECTION OVERVIEW

This feature is called Active Connection because the existing system is a passive system which can only send data back at regular intervals to the servers at UCII; commands cannot be sent to it to ask for additional data or other information from the Data Acquisition System at times which are not scheduled for taking measurements from the gages. The new system can accomplish this task of being able to communicate with the bridge at times

between the measurement cycles, and hence it is called Active Connection. Figure 1.3 gives a brief overview of Active Connection. [2]

Active Connection aims to give additional data and will also help assess the severity of the alarm. To do so, additional readings are taken between measurement cycles in the event of an alarm. This is possible with upgraded Data Acquisition equipment and additional programs running on the server. The programming flexibility in the data acquisition equipment and the programs on the server help establish a full handshake in order to make active connection possible. Chapter 4 of this thesis gives more detailed information about the Data Acquisition hardware and the additional programs running on the server. The main focus of the thesis is to generate useful information from the additional data. The additional data obtained are considered sequential measurements, and a technique called Sequential Analysis is used to obtain useful information from the measurements and assess the severity of the alarm. This process is automated so that an automatic follow up is sent (based on the information obtained from Active Connection) after an alarm is issued. In order to accomplish Active Connection, give an improved alarm report, and give a severity measure for the alarm, the following concepts were studied in detail:

Data Acquisition Techniques
Connectivity Techniques for Handshaking Mechanism
Programming Methods
Analyzing Additional Data Points
Sequential Analysis
Severity of the Alarms
Probability Calculations
Alarm Thresholds
Time Line for Processing Data

Studying these topics in detail, given the rich history of data present over several years for the Ironton Russell Bridge, has helped analyze the alarm scenario and has helped in building a model which can analyze the alarms on a real time basis and give a more informed decision about the alarm by getting additional data. In order to implement the feature of Active Connection, it was important to understand how the data acquisition system on the bridge works, how the data is collected, and how the data is sent back to the servers at UCII. This involves understanding the working of dataloggers such as the CR1000 and CR10X and also their peripheral devices such as the AVW200 and NL115, which help in giving an improved signal and provide high speed connectivity (TCP/IP), respectively. It was also important to study and differentiate the various communication protocols for communication between the devices in the data acquisition system, so that the best method could be chosen for this system. Also, the field trips and field experience helped in understanding the wiring and connections between the gages and the datalogger. With the knowledge of the data acquisition system, it was also important to understand the network connectivity between the data acquisition system and the servers at UCII via TCP/IP. This is important for Active Connection because we are trying to make this connection between the server and the data acquisition system more dynamic. The knowledge obtained from understanding the communication system between the networking equipment on the data acquisition side and the server was used in implementing Active Connection.

After having a working knowledge of the data acquisition system and connectivity features, the next step taken was to understand the programming features present in the datalogger. The datalogger was programmed in CRBasic (a programming interface from Campbell Scientific for programming the CR1000 datalogger) [5]. The various features of this programming language were taken into consideration, and then the CR1000 datalogger along with its peripheral equipment was programmed so as to be able to implement Active Connection. Once the data acquisition system on the Ironton Russell Bridge was programmed to implement the feature of Active Connection, the other part was to implement another system on the server which could communicate with the data acquisition system to make a complete handshake. This was done by running scripts on the server which are triggered at particular instants (such as alarms). These scripts are written in the Python programming language, and this whole system has been integrated with the existing system for the Ironton Russell Bridge. Once the system was implemented and the full handshake of communication was established so that the system had the feature of active connection to collect additional data points, meaningful information needed to be obtained from these additional measurements. To do so, various methods and techniques were studied, which are discussed in Chapter 2. After looking into all the methods, the technique of sequential analysis was used to analyze these data points. A detailed discussion about sequential analysis and the application of sequential analysis to this system is given in Chapter 3. Using sequential analysis for analyzing additional data points, a severity level is assigned based on the outcome of the probability ratio test in sequential analysis. A color indicating the severity is assigned to each gage for which active connection is triggered. This information is also present

in the email (sent to ODOT officials and members of UCII) which summarizes the results of active connection. Severity levels are discussed in detail in Chapter 4. In order to apply sequential analysis to the additional data points obtained through active connection, a detailed study of the alarming system on the Ironton Russell Bridge was the first step. The probability calculations obtained from Greg Kimmel's work (for Linear Regression) were used. Also, a clear understanding of the threshold violation system for generating alarms was developed. These probability calculations and the theory regarding alarm thresholds are presented in Chapter 3. The main idea behind active connection is to use the time between the measurements to get additional data at instants where we get alarms (threshold violations). In order to ensure that the additional data measurements do not interfere with the actual measurement and processing cycle running every 30 minutes, it was important to understand the sequence of events occurring in the system every 30 minutes. Data collection and data processing steps were taken into consideration and a timeline for these events was made. This timeline is used to approximately assign the time available for active connection. The additional measurements taken by using active connection only use this assigned time (calculated from the timeline of processing), where no other processing occurs. By doing so, we ensure that there are no collisions or overlaps of any sort. Also, the main data collection and processing for the system that occurs every 30 minutes remains the main priority of the system. A detailed discussion related to the timeline of the events for a 30 minute window is given in Chapter 4. This thesis contains various details ranging from data acquisition equipment, data processing elements, programming flow, networking aspects, probability calculations, sequential analysis, and alarm thresholds. It not only involves a thorough understanding of the system present on

the Ironton-Russell Bridge, but also extends this system to have the feature of Active Connection. Implementing this feature involves various changes and new techniques, which form the implementation aspect of this thesis, but it also has theoretical methods like sequential analysis and probability calculations, which make this thesis a combination of theoretical and practical methodologies.

This thesis is organized as follows:
Chapter 1: Introduction & Problem Statement
Chapter 2: Literature Review
Chapter 3: Sequential Analysis
Chapter 4: Hardware Upgrade and Active Connection
Chapter 5: Conclusions & Future Work

CHAPTER 2 Literature Review: The work at UCII has been mainly focused on Bridge Health Monitoring and analyzing the data from the sensors on the bridges. To have a good understanding of Bridge Health Monitoring systems, and to be able to understand the algorithms being used for this purpose, some of the real time systems in use were studied in detail. To begin the understanding of real time bridge health monitoring systems, the system on the Ironton Russell Bridge initially developed by Greg Kimmel was studied in detail. In the thesis The Turnkey Solution for a Web Based Long Term Health Monitor Utilizing Low Speed Strain Measurements and Predictive Models [3], the monitoring system on the Ironton Russell Bridge was clearly explained. All the modules involved in the monitoring system were clearly explained, and after looking into the details a clear understanding of the monitoring system on the Ironton Russell Bridge was obtained. This was important because the Active Connection feature was being integrated into this monitoring system as an extension of the existing system. Active Connection deals with understanding the post alarm scenario, where the alarms are determined by this monitoring system. The various details in Kimmel's thesis were studied in great detail. The layout of the gages, the data acquisition system on the Ironton Russell Bridge, connectivity to the servers at UCII from the Ironton Russell Bridge, programming concepts, and analysis of the data with the linear regression predictive model and residual distribution were understood by looking into this thesis.

An integrated monitor and warning system for the Jeremiah Morrow Bridge was developed at UCII. It involved a wireless data collection system, data cleansing, archiving procedures and a linear regression based predictive model. [4] The monitoring system on the Jeremiah Morrow Bridge was also developed to detect any abnormal behavior. A review of this system helped in understanding the layout of the advanced data acquisition system, which has wireless nodes, where communication between the various modules of the data acquisition system occurs wirelessly. Also, the algorithm to detect abnormalities in the behavior of the gages was modeled in this system. By understanding the system on this bridge, the features of the advanced data acquisition system were understood. The upgraded data acquisition system was implemented on the Ironton Russell Bridge, which has a data acquisition system similar to that of the Jeremiah Morrow Bridge. More details on the data acquisition system are provided in Chapter 4. After getting a clear understanding of bridge monitoring systems and a detailed understanding of the system on the Ironton Russell Bridge, the advanced data acquisition systems were studied in detail. The features in the newly available data acquisition systems (provided by Campbell Scientific) were taken into consideration and the best match for the system on the Ironton Russell Bridge was chosen. To do so, a review of the features of the available data acquisition systems was made. The details of the data acquisition systems were obtained from the Campbell Scientific website [5]. Also, the specifications of the vibrating wire gages which are placed on various members of the Ironton Russell Bridge were looked at in detail. These gages are manufactured by Geokon Inc.,

and the specifications for these gages were obtained by studying the manuals of the vibrating wire gages. [6] Also, to understand the programming aspect of this system, an understanding of the programming languages Python [7], MySQL [8], PHP [9] and CRBasic [10] was necessary. Various programming concepts relating to these languages were studied in great detail. Also, some programs need to work in coordination with the other systems, so the interaction between these programming systems was dealt with in great detail. For example, the program written in Python needs to send web based commands, interact with the MySQL database, send emails, etc., so the various modules of the Python programming language which can accomplish these tasks were understood and used. [11], [12] The main feature of Active Connection is to have an algorithm that can analyze the additional data points. Various concepts relating to probabilistic methods, reliability, Markov chains, Bayesian inference, the Neyman-Pearson criterion, likelihood ratios, probability distributions and sequential analysis were studied in detail. From the above mentioned theories, the concepts relevant to this scenario of active connection were used. The theory of reliability was studied in the context of the age of the bridge. An attempt was made to study the effect of the age of the bridge by taking its reliability into consideration. According to reliability theory, a structure is only stable in the middle region of its life (not in the beginning or end), and this period is called the useful life. The reliability of the bridge decreases exponentially as the bridge ages during its useful life. Since the Ironton Russell is an old bridge, the probability of component failure is high. [13]

Figure 2.1 Reliability Curve during the Useful Life

Markov Chains: A Markov chain can be described as a system which transitions from one state to another among a finite number of states. A Markov chain is a random process with the Markov property: given the present state, the future state is independent of the past states. Markov chains are used in various fields like economics, statistics, physics, information theory, queuing theory, chemistry, internet applications, etc. [14] Markov chains have also been used in combination with the sequential probability ratio test for automatic track maintenance [15]. An attempt was made to use Markov chains in this scenario by considering the states Alarm and No Alarm and using the probabilities obtained from the reliability theory calculations. Calculations were made using conditional probability, and the transition probability matrix was updated for each additional data point. But in this scenario it did not work well, because Markov chains are mostly used for prediction, and in this scenario of active connection we are looking to assess a state from the additional data obtained.

Bayesian Inference: In statistics, Bayesian inference is a method of inference in which Bayes' rule is used to update the probability estimate for a hypothesis as additional evidence is learned. Bayesian updating is especially important in the dynamic analysis of a sequence of data. [16] Bayesian inference updates the prior probability to the posterior probability in light of new information obtained. [17] It is also called Bayesian confirmation theory because, given new evidence, we update the probability in support of a hypothesis. The probability updating occurs by taking into consideration the prior probability and the likelihood ratio. Bayesian inference is used in various fields like statistics, computer science applications, courtroom scenarios, Bayesian epistemology, etc. Sequential Bayesian techniques have also proven useful in the measurement of software reliability [18], and Bayesian techniques are used in parameter retrieval in the thermal sciences [19]. In Active Connection, since new data brings new information, the probability can be updated and we can check if the probability reaches a particular threshold to declare that the gage has returned to its normal behavior; hence Bayesian inference seemed relevant for this scenario. However, the application of Bayesian inference to the scenario of active connection was not ideal, because it was observed that there were rapid changes in the probability with additional data and it did not reflect the actual alarm scenario. Sequential analysis involves sequential hypothesis testing which has a predefined stopping rule and in which the sample size is not fixed in advance. The data are evaluated as they are recorded and a conclusion is made based on the stopping rule. [20] Sequential analysis also gives a cumulative result of the data points obtained. Sequential analysis hence is relevant for the scenario of active

connection because the additional information is collected until a conclusion is made as to whether the hypothesis can be accepted or not. This theory is relevant because it makes a decision based on sequential data. Sequential analysis was first introduced by Abraham Wald in 1945 in the paper Sequential Tests of Statistical Hypotheses. [21] Sequential analysis is used for decision making in fields such as statistics, medicine, manufacturing, and digital signal processing. Sequential analysis is used in radar systems to make a decision regarding the detection of a foreign object [22]. Sequential analysis has been used in the scenario of active connection and has given appropriate results. In Chapter 3, we discuss sequential analysis in detail.

Chapter 3 Sequential Analysis

3.1 Sequential Analysis for Active Connection: In statistics, sequential analysis or sequential hypothesis testing is statistical analysis in which the sample size is not fixed in advance. Data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. Sequential analysis was first introduced by Abraham Wald in 1945, and it now has applications in medical diagnosis, manufacturing, quality control, etc. [20] Sequential analysis is well suited for Active Connection because it involves decision making based on a sequence of measurements. In Active Connection, we obtain additional measurements from a particular gage when an alarm is declared. This is done in a limited time span, where the time between measurement cycles is utilized to get the additional measurements. This gives us the ability to focus on the gage that has shown some abnormal behavior (according to the linear regression algorithm) and gave rise to an alarm. The limited time available can be utilized to get more data from that particular gage and extract information regarding the scenario of the alarm. This feature can be very useful because, instead of the customary wait time of 30 minutes to get more data from the gage which alarmed, we can have information sent out immediately for that particular gage by taking a sequence of measurements on it. This information is sent out as a follow up to the alarm which occurred.

These additional measurements can be interpreted as a sequence of measurements. We can use them to decide whether the alarm was severe and whether it is of concern or not. Sequential analysis takes into consideration the series of measurements and hence can give a cumulative decision based on the additional measurements. This information can be interpreted at different severity levels depending on the results obtained in the sequential analysis. Studying the alarms that have previously occurred on the Ironton-Russell Bridge, it was found that most alarms are due to sudden thermal changes, noisy spikes, or unusual or heavy traffic. At these alarm instances, additional data points can help us understand the alarm scenario and also tell us whether the alarm is persistent or not.

Figure 3.1 Alarm on gage L23L24TW_OH due to a change in temperature. [23]

If the alarm occurs persistently, it may be of higher concern than an alarm which occurs just once and immediately returns to normal. By getting additional data points and analyzing them, we can understand the severity of the alarm. In the case of a thermal alarm, we generally have a sudden change in the value of the strain for the gage accompanied by a corresponding change in temperature. In general, thermal alarms fade away slowly as the temperature returns to normal conditions. The plot for the gage L23L24TW_OH on 2nd February 2012 is shown in Figure 3.1. An alarm was declared on this gage and the plot shows the strain and temperature for the gage. The sharp rise in temperature caused the change in strain, which caused a threshold violation according to the linear regression algorithm, and hence an alarm was declared. From the plot, we can also see that the gage returns to its normal behavior as the temperature decreases.

3.2 Active Connection: To illustrate active connection with respect to the linear regression algorithm which is presently being used for the bridge monitoring system at Ironton-Russell, see Figure 3.2. It shows a linear regression plot for a gage for a period of one month. The point marked in red is an outlier which sends out an alarm. Without active connection, we have to wait 30 minutes to see if the next measurement is an outlier too, but with active connection, by focusing on the gage which alarmed, we can take a

sequence of data points and see if the measurements continue to alarm. This figure shows the two ideal scenarios which may result from active connection. The orange data points indicate the scenario in which the new data points obtained from active connection no longer cause outliers, and the green data points indicate the scenario in which the new data points also cause outliers. This figure is a graphical representation of the possible scenarios in active connection.

Figure 3.2 Linear Regression for a gage which has an outlier. [2]

The idea is to take the data points obtained from active connection, consider them as a sequence of data points, and apply sequential analysis so that we can get some information about the behavior of the gage immediately after the alarm. Sequential analysis is appropriate here because it uses the Sequential Probability Ratio Test, which compares two hypotheses; by comparing the cumulative probability ratio for the sequence of data points, we can make a decision regarding the alarm and the behavior of the gage at that time. Using sequential analysis we can also say whether the alarm severity is decreasing or increasing as we take more data points. In the case of an alarm corresponding to a thermal change, the alarm severity will decrease slowly. But in the case of a noisy spike, we can get additional data points immediately and, by applying sequential analysis, get a clear idea and be sure it was a spike. Since sequential analysis gives a cumulative effect, we can be sure that the alarm was only a noisy spike. In situations where there is an actual change or abnormality in the bridge, the information obtained from applying sequential analysis to the additional data can be used to decide if it is a possible damage condition, and the same can be reported to the officials at ODOT, who can take further action. For each additional data point, by using sequential analysis we put the updated cumulative result in one of the following regions:

Hypothesis H0 accepted
Hypothesis H0 rejected (H1 accepted)
Additional data is taken before the decision is made

If hypothesis H0 is accepted or rejected, there is no further need to take additional data points and a decision is made, but if the result falls in the region where the decision cannot be made, then

additional data is taken again until the decision is made. Since the analysis takes into consideration the sequence of data points, we can determine whether the alarm is of concern or not. If the residual value of the strain exceeds the threshold, then an alarm occurs; by keeping track of the residual value of the strain, which is obtained from the linear regression algorithm, and giving it as an input to the sequential analysis, we can make a more informed decision about the alarm, and this information is sent as an automated update to the alarm already sent. So, by using sequential analysis, additional data points can be used in the cumulative decision process in order to see the persistence and severity of the alarm.

3.3 Sequential Probability Ratio Test [21]: The Sequential Probability Ratio Test, which is a sequential analysis method, has been used for Active Connection. The following is a description of the method. [21] Consider a random variable $x$, and let $f(x, \theta)$ denote the distribution of the random variable under consideration, where $\theta$ is a parameter of the distribution. For example, $\theta$ can be the mean of the distribution. For hypothesis $H_0$ [NULL HYPOTHESIS], $\theta = \theta_0$, and the distribution of $x$ when $H_0$ is true is $f(x, \theta_0)$. For hypothesis $H_1$ [ALTERNATE HYPOTHESIS], $\theta = \theta_1$, and the distribution of $x$ when $H_1$ is true is $f(x, \theta_1)$.

If $x$ is a continuous random variable, then the probability density at $x$ under hypothesis $H_0$ is $f(x, \theta_0)$, and similarly the probability density at $x$ when hypothesis $H_1$ is true is $f(x, \theta_1)$. Similarly, if we have a sample set $x_1, x_2, \ldots, x_m$, where $m$ is a positive integer, and considering the samples to be independent of each other, the probability density for this sample set is, when $H_0$ is true, $p_{0m} = f(x_1, \theta_0) f(x_2, \theta_0) \cdots f(x_m, \theta_0)$, and when $H_1$ is true, $p_{1m} = f(x_1, \theta_1) f(x_2, \theta_1) \cdots f(x_m, \theta_1)$. The sequential probability ratio test is for testing hypothesis $H_0$ against $H_1$. Hence the probability ratio is

$\dfrac{p_{1m}}{p_{0m}} = \prod_{i=1}^{m} \dfrac{f(x_i, \theta_1)}{f(x_i, \theta_0)}$

Now, the sequential probability ratio test is defined as follows: two positive constants $A$ and $B$ are chosen ($B < A$). At each stage of the test, the probability ratio is computed, and if

$B < \dfrac{p_{1m}}{p_{0m}} < A$

then the test is continued by taking additional data points, until either

$\dfrac{p_{1m}}{p_{0m}} \geq A$, with the rejection of $H_0$ (acceptance of $H_1$),

or

$\dfrac{p_{1m}}{p_{0m}} \leq B$, with the acceptance of $H_0$ (rejection of $H_1$).

The choice of $A$ and $B$ will be discussed later in this chapter. For practical reasons, the logarithm of the ratio is easier to compute and more convenient to use, since it can be written as a sum of $m$ terms instead of a product of $m$ terms. Hence the ratio becomes

$\log \dfrac{p_{1m}}{p_{0m}} = \sum_{i=1}^{m} \log \dfrac{f(x_i, \theta_1)}{f(x_i, \theta_0)}$

The $i$-th term in the sum can be denoted as $z_i = \log \dfrac{f(x_i, \theta_1)}{f(x_i, \theta_0)}$.

Since previously the upper and lower limits were $A$ and $B$, the limits now become $\log A$ and $\log B$. Hence the test procedure is modified as follows [24]: the cumulative sum $Z_m = \sum_{i=1}^{m} z_i$ is computed, and

If $\log B < Z_m < \log A$, then the test is continued by taking additional data points.

If $Z_m \geq \log A$, then hypothesis $H_0$ is rejected and hypothesis $H_1$ is accepted.

If $Z_m \leq \log B$, then hypothesis $H_0$ is accepted and hypothesis $H_1$ is rejected.

With these decision rules, we can apply the sequential probability ratio test to the post alarm measurements to come to a decision regarding the severity and whether the alarm is of concern or not.

Choosing $\theta_0$ and $\theta_1$: In defining the hypotheses used in sequential analysis, the parameter $\theta$ is important. In the definition $f(x, \theta)$, $x$ is the random variable under consideration and $\theta$ is a parameter; hence a value of $\theta$ can be fixed to characterize the behavior of $x$. For example, for a Gaussian distribution, the mean of the distribution can be considered as $\theta$. In the case of the gages on the Ironton-Russell Bridge, the residual distribution has a theoretical mean of zero, which can be considered the null hypothesis, and $\theta_0 = 0$ becomes the null hypothesis. Since a residual of

zero is what we ideally expect, it is appropriate to choose it as the null hypothesis. The alternate hypothesis should generally be chosen where the behavior is of concern. Hence, by comparing the null hypothesis (ideal scenario) to an alternate hypothesis (abnormal behavior), we have an understanding of the scenario with respect to the ideal scenario. For the gages on the Ironton Russell Bridge, the failure point is defined by the rating factors and is called the Delta of Concern. Hence, the most intuitive choice for the alternate hypothesis is $\theta_1 = $ Delta of Concern. Because the Delta of Concern indicates the failure scenario, comparing the alternate hypothesis with respect to the null hypothesis gives a mathematical representation of the behavior of the gage itself. Also, $\theta_1$ has been modified to a different value compared to the Delta of Concern in the algorithm implemented, in order to give sensitivity to the threshold violation system on the Ironton Russell Bridge. A more detailed discussion of choosing $\theta_1$ in the calculations of sequential analysis for active connection is given in Chapter 4. In general, to choose the null and alternate hypotheses, we should choose the ideal value for $\theta_0$ and a value for $\theta_1$ which indicates abnormal behavior or concern.

3.3.1 Gaussian distribution for Gages on Ironton Russell Bridge: The Bridge Monitoring System on the Ironton Russell Bridge uses the algorithm proposed by Greg Kimmel in the thesis The Turnkey Solution for a Web Based Long Term Health Monitor Utilizing Low Speed Strain Measurements and Predictive Models. According to this algorithm, the threshold for the alarm is based on an 8*σ violation of the residual distribution. The residual distribution was obtained from the strain measurements after removing the temperature effect. As illustrated in Figure 3.2, the measurements over a period of a month are taken into consideration, and it was

observed that there is a relationship between temperature and strain measurements; this relationship is modeled as a linear regression line. Based on this behavior, a prediction is made for the strain measurement with respect to the temperature measurement, the difference between the predicted value and the actual measurement is calculated, and this difference is the strain residual. According to this algorithm, the behavior of the strain residuals over a period of one month is approximately Gaussian; hence the sigma of this distribution of residuals is calculated over that period and compared to the residual for the current measurement, and if the measurement violates the 8*σ threshold, an alarm is declared indicating abnormal behavior. The gages are present at various locations on the bridge, the response of each gage differs, and its response depends on various factors. The gage behavior depends on the position of the gage, the weather, the temperature, exposure to sunlight, traffic, etc. Since the strain measurement from the gage depends on various factors, the residual distribution might not be exactly Gaussian, and for some gages the distribution has tails. The tails might correspond to these various factors. John Essegbey's thesis A Piecewise Linear Approximation for Improved Detection in Structural Health Monitoring [25] focuses on understanding these tails and gives more insight into the various factors causing them. While the current monitoring algorithm on the Ironton Russell Bridge uses 30 days of data points (collected every 30 minutes), assumes a Gaussian distribution, and calculates the threshold (by using the sigma value from the calculations) which, when violated, generates alarms, John looked into time periods shorter than the 30 day window to investigate the reasons for the tails in the residual distribution.
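To make the prediction and threshold logic described above concrete, the following is a minimal Python sketch of the idea; it is not the production code, and the function names, the use of NumPy arrays, and the simple least squares fit are assumptions made for illustration only.

import numpy as np

def fit_monthly_regression(temperature, strain):
    """Fit the strain-vs-temperature line over the previous 30-day window of readings."""
    slope, intercept = np.polyfit(temperature, strain, 1)
    residuals = strain - (slope * temperature + intercept)
    sigma = residuals.std()              # the one-month sigma used for the threshold
    return slope, intercept, sigma

def check_measurement(slope, intercept, sigma, temp_now, strain_now, k=8.0):
    """Compare the measured strain to the prediction; flag an alarm on a k*sigma violation."""
    residual = strain_now - (slope * temp_now + intercept)
    return residual, abs(residual) > k * sigma

In the actual system, the fit, the one-month sigma, and the threshold check are carried out by the processing framework on the UCII servers as part of the 30 minute processing cycle.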

However, to understand the behavior of the gages over a period of time and see how it changes with changing conditions, an attempt was made to look into the long term behavior of the gages. Since the Ironton Russell Bridge has had the gages for many years, it was interesting to see how the behavior of a gage has changed over this period of time. The pattern followed every year is very similar, and hence the variations can be attributed to seasonal changes. To get a better understanding of what the residual distribution would look like for a longer period of time, unlike the 30 day window, it was calculated for longer periods.

Figure 3.3 Residual Distribution (along with its shifted version at Delta of Concern) for Gage L18U18S_KY for (a) June 2012 (one month) and (b) the whole year June 2011 to June 2012.

It is clear from the above graphs that the residual distribution is more Gaussian for the year's data than for the month's data because it captures the seasonal changes too. In Figure 3.3 (a), the residual distribution (along with its shifted version at Delta of Concern) is shown for the gage L18U18S_KY for the month of June 2012. Here we can see that the residual distribution is not exactly Gaussian in nature. The factors for this can be the structural characteristics of that member, the geometric location of the gage, and the season [25]. But when the residual distribution was calculated for the same gage for a window of one year (June 2011 to June 2012), then the residual

distribution looks like Figure 3.3 (b). From this figure we can see that the residual distribution for the year looks more Gaussian than the residual distribution for a month's duration. Hence, we can consider that the residual distribution can be approximated by a Gaussian distribution. From Figure 3.3, we can also see that the sigma of the residuals is larger for the one year residual distribution than for the one month residual distribution. However, it is not always the case that the residual distribution deviates from the Gaussian; the residual distribution is very nearly Gaussian for most gages on the Ironton Russell Bridge. In Figure 3.4, we can see that the residual distribution for the gage L11L12TE_OH is approximately Gaussian for one year's worth of data.

Figure 3.4 Residual Distribution (along with its shifted version at Delta of Concern) for the gage L11L12TE_OH

Since the residual distribution for a gage is a Gaussian distribution, we can now use the Gaussian distribution for the Sequential Probability Ratio Test. However, since the threshold system is calculated based on the sigma for the one month window, the sigma for the one month residual

distribution is used in the calculations for the Sequential Probability Ratio Test. Also, the sigma for the one month window is lower than the sigma of the residual distribution for one year; hence, by using the lower sigma, we can make the system more sensitive to changes. Therefore, for sequential analysis, the sigma of the residual distribution for the one month window is used. Applying the Gaussian distribution to the sequential probability ratio test, we get the following. For a Gaussian distribution, for hypothesis $H_0$, $\theta = \theta_0$, such that the probability density function becomes [24]

$f(x, \theta_0) = \dfrac{1}{\sigma\sqrt{2\pi}} \exp\left(-\dfrac{(x-\theta_0)^2}{2\sigma^2}\right)$

For hypothesis $H_1$, $\theta = \theta_1$, such that the probability density function becomes

$f(x, \theta_1) = \dfrac{1}{\sigma\sqrt{2\pi}} \exp\left(-\dfrac{(x-\theta_1)^2}{2\sigma^2}\right)$

Since $p_{0m} = \prod_{i=1}^{m} f(x_i, \theta_0)$ when $H_0$ is true and $p_{1m} = \prod_{i=1}^{m} f(x_i, \theta_1)$ when $H_1$ is true, we get

$\log \dfrac{p_{1m}}{p_{0m}} = \sum_{i=1}^{m} \log \dfrac{f(x_i, \theta_1)}{f(x_i, \theta_0)}$

Applying the Gaussian densities to this equation, we get

$\log \dfrac{p_{1m}}{p_{0m}} = \dfrac{\theta_1 - \theta_0}{\sigma^2} \sum_{i=1}^{m} x_i - \dfrac{m(\theta_1^2 - \theta_0^2)}{2\sigma^2}$

From the above equation, we get the conditions for the decision of rejecting or accepting a hypothesis:

If $\log \dfrac{p_{1m}}{p_{0m}} \geq \log A$ (EQ 3.1), then hypothesis H0 is rejected and hypothesis H1 is accepted.

If $\log \dfrac{p_{1m}}{p_{0m}} \leq \log B$ (EQ 3.2), then hypothesis H0 is accepted and hypothesis H1 is rejected.

If $\log B < \log \dfrac{p_{1m}}{p_{0m}} < \log A$ (EQ 3.3), then no hypothesis is accepted; additional data is taken into consideration and the sequential probability ratio (logarithm) is computed cumulatively and checked against the above conditions. [24]

The above equations are obtained by using the Gaussian distribution and are used to implement Active Connection, as sketched below.
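The following is a minimal Python sketch of this cumulative test. It is illustrative only: the function name, argument names, and decision strings are assumptions, and the production implementation on the UCII servers may be structured differently.

def sprt_gaussian(residuals, sigma, theta0, theta1, log_A, log_B):
    """Wald's sequential probability ratio test for Gaussian residuals with common sigma."""
    cum = 0.0
    for i, x in enumerate(residuals, start=1):
        # Per-sample log likelihood ratio, log f(x, theta1) - log f(x, theta0).
        cum += (theta1 - theta0) * x / sigma**2 - (theta1**2 - theta0**2) / (2 * sigma**2)
        if cum >= log_A:
            return "accept H1", i, cum      # EQ 3.1: abnormal behavior persists
        if cum <= log_B:
            return "accept H0", i, cum      # EQ 3.2: gage has returned to normal
    return "continue", len(residuals), cum  # EQ 3.3: more data points are needed

Each new residual obtained during active connection simply adds one term to the cumulative sum, so the test can be re-run cheaply after every additional measurement.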

The additional measurements obtained from active connection are used to cumulatively decide whether a hypothesis can be accepted or rejected, or whether additional data should be taken in order to make a decision.

3.4 Choosing the Limits $A$ and $B$: The choice of the limits $A$ and $B$ is important for the application of sequential analysis to active connection because it defines the decision of accepting or rejecting a hypothesis, which is the essence of the whole process. The principles formulated by Neyman and Pearson for the proper choice of a critical region constituted an advance of fundamental importance in the theory of testing hypotheses. [26] According to the Neyman-Pearson criteria for hypothesis testing, by accepting or rejecting a hypothesis we may commit two kinds of errors:

Error of the first kind: rejecting $H_0$ when it is true.
Error of the second kind: accepting $H_0$ when $H_1$ is true.

The error of the first kind is also known as a False Alarm and the error of the second kind is known as a Missed Detection. A diagrammatic representation of the two errors, normal operation and fault detection is shown in Figure 3.5.

Figure 3.5 Representation of Decision Error [13]. (The figure is a two-by-two table with hypotheses H0 and H1 as columns and events E0 and E1 as rows: E0 under H0 is Normal Operation, E0 under H1 is a False Alarm, E1 under H0 is a Missed Detection, and E1 under H1 is Fault Detected.)

In general, the probability of false alarm (the error of the first kind) is represented by $\alpha$ and the probability of missed detection (the error of the second kind) is represented by $\beta$. The quantity $\alpha$ is called the size of the critical region and the quantity $1 - \beta$ is called the power of the critical region. Suppose we are considering a test procedure of strength $(\alpha, \beta)$ to determine the constants $A$ and $B$. We shall say that the sample is of type 0 if

$B < \dfrac{p_{1m}}{p_{0m}} < A$

for $m = 1, 2, \ldots, n-1$, and $\dfrac{p_{1n}}{p_{0n}} \leq B$. We shall say that the sample is of type 1 if $B < \dfrac{p_{1m}}{p_{0m}} < A$ for $m = 1, 2, \ldots, n-1$, and $\dfrac{p_{1n}}{p_{0n}} \geq A$. For any given sample of type 1, the probability of obtaining such a sample is at least $A$ times as large under hypothesis $H_1$ as under hypothesis $H_0$. Therefore, the probability measure of the totality of all samples of type 1 is the same as the probability that the sequential process will terminate with the acceptance of $H_1$. But the latter probability is equal to $\alpha$ when $H_0$ is true and to $1 - \beta$ when $H_1$ is true. [24] Thus, we obtain the inequality

$1 - \beta \geq A\,\alpha$

and this inequality can be written as

$A \leq \dfrac{1 - \beta}{\alpha}$

Thus, the value $\dfrac{1 - \beta}{\alpha}$ is an upper limit for $A$.

In a similar way, for any given sample of type 0, the probability of obtaining such a sample under $H_1$ is at most $B$ times as large as the probability of obtaining such a sample when $H_0$ is true. Therefore, the probability of accepting $H_0$ is at most $B$ times as large when $H_1$ is true as when $H_0$ is true. So, we obtain the inequality

$\beta \leq B\,(1 - \alpha)$

This inequality can be written as

$B \geq \dfrac{\beta}{1 - \alpha}$

Thus, the term $\dfrac{\beta}{1 - \alpha}$ is a lower limit for $B$. [24] By using the above equations we can choose the values of $A$ and $B$. Since for active connection we used the logarithmic ratio, the lower and upper limits for the analysis would be $\log B$ and $\log A$. To determine the actual values of $A$ and $B$, we need to find the strength $(\alpha, \beta)$.

3.5 Probability Calculations: Since the analysis is being done on the residual distribution of the strain measurements for a particular gage, we need to determine the strength $(\alpha, \beta)$ individually for each gage. Each gage has a residual distribution which is Gaussian. The alarm distribution is unknown, because a failure has not yet been detected, but it can be considered a shifted version of the residual distribution with a different mean, and hence we can say that the alarm distribution is Gaussian too. [3] A threshold of 8*σ for the residual distribution is set by the linear regression algorithm to minimize false alarms and missed detections. Therefore, with these assumptions it is possible to calculate the strength and use it to calculate the limits for the sequential probability ratio test.

Figure 3.6: Illustration of the Residual and Alarm distributions; the figure also indicates false alarms and missed detections. [3]

Figure 3.6 gives an illustration of the residual distribution obtained from linear regression, which has a zero mean and is Gaussian, and it also shows the alarm distribution, which is also Gaussian. The threshold of 8*σ is used in the present algorithm, and hence by using these parameters we can determine the probability of missed detection and the probability of false alarm, which in turn can be used to determine the limits $A$ and $B$ for the Sequential Probability Ratio Test. On the Ironton Russell Bridge, on which this system of Active Connection is being implemented, there are 66 gages on various members of the bridge. Each gage has different behavior based on its location, exposure to sunlight, and various other factors, and this variation is reflected in the sigma of the residual distribution for each gage. The behavior of the gage is also dependent on temperature, seasonal effects and traffic on the bridge. Hence the standard deviation for each gage is different, and the calculations for the Sequential Probability Ratio Test have to be done separately for each gage. Also, the stress at which each member fails according to the ratings given to the bridge varies depending on the location, and hence there is a different Delta of Concern for each member. The Delta of Concern is the value of the residual at which the member fails. The Delta of Concern for the various members on the Ironton Russell Bridge is listed in Figure 3.7. These values are used to calculate the strength, and hence the strength varies for each gage. The residual distribution has a mean of zero and the alarm distribution has a mean of the Delta of Concern, which is different for each member, but both the alarm and residual distributions have the same standard deviation. Figure 3.7 tabulates the Delta of Concern for the different members on the Ironton-Russell Bridge.
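Under these assumptions (a zero-mean Gaussian residual distribution, an alarm distribution shifted to the Delta of Concern with the same sigma, and an 8*σ threshold), the strength and the resulting limits can be estimated per gage roughly as in the Python sketch below; the function name and the clipping of very small probabilities are illustrative assumptions, not the exact production calculation.

import numpy as np
from scipy.stats import norm

def sprt_limits(sigma, delta_of_concern, k=8.0, eps=1e-12):
    """Approximate strength (alpha, beta) and Wald limits for one gage."""
    threshold = k * sigma
    # alpha: false alarm -- zero-mean residual distribution exceeding the threshold
    alpha = norm.sf(threshold, loc=0.0, scale=sigma)
    # beta: missed detection -- alarm distribution (mean = Delta of Concern) staying below it
    beta = norm.cdf(threshold, loc=delta_of_concern, scale=sigma)
    # Clip to avoid log(0) when the two distributions are far apart.
    alpha, beta = np.clip([alpha, beta], eps, 1.0 - eps)
    log_A = np.log((1.0 - beta) / alpha)   # upper limit: accept H1
    log_B = np.log(beta / (1.0 - alpha))   # lower limit: accept H0
    return alpha, beta, log_A, log_B

Because sigma and the Delta of Concern differ from gage to gage, these limits are gage specific, which is why the calculations must be repeated for each of the 66 gages.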

Figure 3.7 Table showing Delta of Concern for Members on Ironton Russell Bridge [3]

Ideally, since the Delta of Concern is the mean of the failure distribution and zero is the mean of the residual distribution, as shown in Figure 3.6, for the sequential analysis equations above (EQ 3.1, EQ 3.2 and EQ 3.3) we can substitute

$\theta_0 = 0$
$\theta_1 = $ Delta of Concern

By using these two values and the standard deviation of the residuals (σ) obtained from the processing calculations, we can do the sequential analysis calculations. But in general, the Delta of Concern is different for each member, as discussed, and hence we might not get sensitivity for the 8*σ alarm or any alarm. With sequential analysis, we get better sensitivity with $\theta_1 = 16\sigma$.

This can be better illustrated by Figure 3.8 and Figure 3.9. In Figure 3.8, the plot of the residual (of strain) versus the log likelihood ratio is shown for member U6U7_KY for $\theta_0 = 0$ and $\theta_1 = $ Delta of Concern. We can see from this plot that the residual of 8*σ has a log likelihood ratio value of around -900, which is not ideal for sensitivity close to the 8*σ threshold.

Figure 3.8 Plot of Log Likelihood Ratio versus residual in με for U6U7_KY ($\theta_0 = 0$, $\theta_1 = 719$ (Delta of Concern)); the plot marks Log A, Log B, the 8*σ threshold and the Delta of Concern.

This is because, when additional readings above 8*σ are obtained, the log likelihood ratio is still negative, and the value of the sequential probability ratio (which is the cumulative sum of the log likelihood

ratios) is also negative, hence not giving the 8*σ threshold any sensitivity. You can see that the yellow band is not near the 8*σ threshold and hence does not give sensitivity in this region. It is also clear that even for values above the 8*σ threshold, the log likelihood ratio is in the green band, which would indicate that the member is far from failure even at the threshold violation; but for us to be able to assess the threshold violation from the additional data points obtained from active connection, this is not ideal because it does not give much information. If instead we change $\theta_1$ to 16*σ from the Delta of Concern, then sensitivity is given to the region around the 8*σ threshold. From Figure 3.9 it is clear that by making this change we can give more sensitivity to the 8*σ violation.

Figure 3.9 Plot of Log Likelihood Ratio for U6U7_KY ($\theta_0 = 0$, $\theta_1 = 248$ (16*σ))

At a residual of 8*σ, we have a zero log likelihood ratio. If additional readings also have an 8*σ violation, then the log likelihood ratio is positive, which increases the sequential probability ratio; if there is no violation of the threshold for additional readings, then the sequential probability ratio decreases and eventually comes to the green region, indicating that the gage has returned to its normal behavior. Since this is a better indicator for the 8*σ violation scenario, it is used in the Active Connection algorithm. Also, we need to make sure that we are considering the Delta of Concern in the calculations for Active Connection. In order to see if the sequential probability ratio crosses the log likelihood ratio at the Delta of Concern, we add another layer to this system: in case the severity for the gage becomes red, we also check whether the sequential probability ratio crosses the log likelihood ratio at the Delta of Concern, and if so the severity is indicated by black. In this way, we take the Delta of Concern into consideration too. This color scheme for indicating severity is sent in the follow up email, which has the information obtained from active connection; this is further discussed in Chapter 4.
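A rough sketch of how the SPRT outcome described above could be mapped to the severity colors is given below; the function name, argument names, and the exact layering of the Delta of Concern check are illustrative assumptions based on this section and Chapter 4, not the exact production logic.

def severity_color(cum_llr, log_A, log_B, cum_llr_doc=None, log_A_doc=None):
    """Map the cumulative log likelihood ratio to a severity color.

    cum_llr, log_A, log_B: SPRT quantities computed with theta1 = 16*sigma.
    cum_llr_doc, log_A_doc: optional second test computed with theta1 = Delta of Concern.
    """
    if cum_llr_doc is not None and log_A_doc is not None and cum_llr_doc >= log_A_doc:
        return "black"   # the violation persists even against the Delta of Concern hypothesis
    if cum_llr >= log_A:
        return "red"     # H1 accepted: the abnormal behavior persists
    if cum_llr <= log_B:
        return "green"   # H0 accepted: the gage has returned to normal behavior
    return "yellow"      # no decision yet: keep taking additional data points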

Chapter 4 Implementation of Active Connection

4.1 Overview of the Monitoring System on Ironton Russell Bridge: The Ironton Russell Bridge has a monitoring system developed and maintained by UCII. The system consists of various modules: the data acquisition system, the software module, the website, the gages on the bridge, etc. The gages are placed at various locations on the Ironton Russell Bridge, measurements are collected from them, the data is processed, and information regarding the health of the bridge is obtained. Figure 4.1 shows the gage locations on the Ironton Russell Bridge. [3]

Figure 4.1 Gage Locations on the Ironton-Russell Bridge. [3]

The data acquisition system on the Ironton Russell Bridge consisted of CR10X dataloggers and the AW1, but this equipment was replaced with the CR1000 and AVW200, which have enhanced features

that are useful for Active Connection. The details of this hardware upgrade are given in Section 4.2 of this chapter.

4.2 Hardware Upgrade: The data logging equipment used at the Ironton Russell Bridge for data acquisition has been upgraded to take advantage of the improved features present in the new equipment. Overall, the upgraded equipment provides better noise rejection, enhanced networking features, programming flexibility, and improved diagnostic and troubleshooting features. These features in the upgraded equipment make a feature like Active Connection possible. Active Connection needs to be able to send commands from the server to the datalogger to get additional data points when alarms occur. Hence, for Active Connection to be possible, the data logging equipment must be able to communicate with the servers via TCP/IP and web based APIs, so that two way communication between the data logging equipment at the bridge and the servers which process this data is possible, hence making the system active. Also, for active connection, we need to be able to modify the program on the datalogger in order to respond to requests for additional data at unscheduled instants. This programming flexibility is only available in the upgraded datalogger; such a feature was not present in the previous datalogger. The data logging equipment consists of the datalogger, the network device and the digital signal processing equipment. Given below is a detailed discussion of the features of the above mentioned equipment and the programming interface for the datalogger.

4.2.1 Vibrating Wire Gage:

Vibrating wire gages (Geokon Model 4000) are used in the Ironton-Russell Bridge monitoring system. The gages are manufactured by Geokon and are designed to be arc-welded to steel structures such as tunnels and bridges. In this gage a steel wire is mounted between two mounting posts; when the wire is excited with a voltage, its resonant frequency changes with any change in the structure. [3] Figure 4.2 shows a Geokon Model 4000 vibrating wire gage. [27]

Figure 4.2 Vibrating Wire Gage [27]

There are 66 vibrating wire gages installed on various members of the Ironton-Russell Bridge. The gages are connected to multiplexers, which are in turn connected to the CR1000 datalogger.

4.2.2 Multiplexers:

The multiplexers connect the datalogger to the gages. The multiplexer used here (Model AM16/32B) is manufactured by Campbell Scientific and can support up to 32 sensors at a time. Figure 4.3 shows the AM16/32B multiplexer. [28]

Figure 4.3 Multiplexer

4.2.3 Data Logger:

The datalogger is the central part of the data acquisition system: an electronic device that records the measurements from the sensors. In the system on the Ironton-Russell Bridge, the datalogger is connected to six multiplexers, which are connected to the gages; all 66 gages on the bridge reach the datalogger through the multiplexers. The datalogger collects data from all the gages by sending out an excitation signal, repeating the process every 30 minutes so that data are gathered in 30-minute intervals. The datalogger used previously, the CR10X, is now obsolete and lacked both the programming flexibility and the connectivity features needed here; the connectivity features are what allow a connection to be established with the servers and the data to be sent to them. The CR1000 datalogger used now offers enhanced programming features, additional data storage, and diagnostic capabilities [29]. Its programming features allow the device to be programmed to take additional measurements from a particular gage between measurement cycles, which is what makes engineering Active Connection possible, and its diagnostic capabilities help detect noisy gages and assess gage health.

This datalogger also enables connectivity via enhanced network devices and supports sophisticated digital signal processing equipment. Connecting it to an enhanced network device (such as the NL115) provides high-speed connectivity over a TCP/IP connection with the servers at UCII.

Figure 4.4: Data Logger Upgrade (CR10X to CR1000) [29]

Some features of the CR1000 datalogger:
- Supports serial communication with serial devices and sensors.
- Supports various communication protocols such as PakBus, SDI-12, Modbus, and DNP3.
- Supports 4 GB of memory, with options to expand the memory.
- Communicates over several options such as TCP/IP, e-mail, FTP, and a web server.
- Has built-in battery-backed SRAM, so data are retained in case of a power failure.
- Has flexible power control and communication options.
- Programmable with LoggerNet, PC400, or ShortCut.
- Has a built-in ASIC (Application-Specific Integrated Circuit) that expands its control and serial communication capabilities. [29]

In general, the CR1000 datalogger is used in applications such as structural health monitoring, vehicle testing, air quality monitoring, and time-domain reflectometry. [29]

4.2.4 Network Device:

The NL115 network device is an Ethernet module that provides the high-speed connectivity needed to move data from the remote bridge location to the servers at UCII, which is also an important factor in establishing Active Connection [30]. When interfaced with a CR1000, the NL115 allows the system on the bridge to run a web server on the datalogger, which is what makes Active Connection possible. The integrated web server also exposes a web API (Application Program Interface) that is used to send commands to the datalogger through a web URL (Uniform Resource Locator). This makes it possible to establish a handshaking system between the servers at UCII and the data logging system through a web service, and real-time data can be sent and received over the web-based link.

Figure 4.5: Network Device Upgrade (NL100 to NL115)

4.2.5 Digital Signal Processing Equipment:

The AVW200 digital signal processing device is compatible with the CR1000 datalogger and is used to take the measurements from the vibrating wire strain gages. It has several advantages over the older DSP interface: it provides better measurements by suppressing noise sources through signal processing techniques, and it offers diagnostic features that help detect noisy gages and give feedback on gage health [31].

Figure 4.6: Signal Processing Equipment Upgrade (DSP to AVW200)

With these diagnostics it is possible to see the frequency spectrum of the signal received from a gage, which is helpful in the field for identifying noisy gages and troubleshooting problems. Figure 4.7 shows the frequency spectrum of the signal from a gage on the Ironton-Russell Bridge. By analyzing the diagnostic parameters and the frequency spectrum we gain additional information for troubleshooting and for understanding each scenario. For example, if a noise frequency is higher than the signal frequency, it can be assumed that the gage is in a noisy environment and the signal is corrupted; if the peak amplitude is very low, the signal should not be used, and there may be a problem with the gage or the wiring.

The signal-to-noise ratio also needs to be high for a good signal, so by requiring a high signal-to-noise ratio we can avoid noisy measurements.

Figure 4.7: Diagnostic tool showing the frequency response of the signal on the gage.
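The screening logic just described can be summarized in a short helper. This is a sketch only: the field names mirror the diagnostics listed below, and the numeric cut-offs are illustrative placeholders, not values used on the bridge.

```python
def reading_is_usable(diag, min_snr=5.0, min_amplitude=0.05):
    """Screen one vibrating-wire reading using its diagnostic values.

    `diag` is a dict with keys such as 'frequency', 'amplitude', 'snr' and
    'noise_frequency'; the numeric cut-offs here are illustrative placeholders.
    """
    if diag["amplitude"] < min_amplitude:
        return False, "peak amplitude too low: possible gage or wiring fault"
    if diag["snr"] < min_snr:
        return False, "low signal-to-noise ratio: noisy measurement"
    if diag["noise_frequency"] > diag["frequency"]:
        return False, "noise frequency above the signal frequency: corrupted signal"
    return True, "ok"
```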

These diagnostics include [31]:
- Frequency of the measurement
- Peak amplitude
- Signal-to-noise ratio
- Noise frequency
- Decay ratio
- Thermistor resistance

4.3 Programming the Data Acquisition System:

The data acquisition system is built around the CR1000 datalogger, which is the main unit (controller) for the entire system; by programming the datalogger we control the whole data acquisition system. CRBasic, a programming language developed by Campbell Scientific, is used for this purpose.

4.3.1 CRBasic Programming:

The CR1000 datalogger can be programmed with the CRBasic programming language. The previous datalogger, the CR10X, did not support this kind of programming; it was programmed with software called Multilogger, which is not very flexible and hence not useful for Active Connection. With CRBasic we can write not only basic measure-and-store programs but also sophisticated programs [10] that use different clock triggers, make use of the time between measurements, receive commands from the web API, send e-mails, and more. This programming flexibility is what makes Active Connection possible from the software side.

For example, we can modify flags in the program on the CR1000 datalogger by sending commands through the web API. Once a flag is modified, the program takes the new value into account and executes the part of the code corresponding to it. To accomplish Active Connection, the datalogger was programmed in CRBasic so that the time between measurements can be used. CRBasic also makes it easier to manage the multiplexers during the measurements and to send the clock signals. A discussion of how the equipment is connected to the gages via the multiplexers, and of how communication between the datalogger and the gages occurs, follows.

4.4 Layout of the Equipment (Wiring Diagram):

Figure 4.8: The wiring diagram for the equipment upgrade.

The aim was to obtain an improved data acquisition system without changing the existing wiring on the bridge. The gages are mounted on various members of the bridge and are connected to the multiplexers, which connect to the data logging system. Figure 4.8 shows the wiring diagram for the data acquisition system on the Ironton-Russell Bridge, covering the connections between the datalogger, the AVW200, the multiplexers, and the gages. The SDI-12 (Serial Data Interface at 1200 Baud) method was used for this purpose.

4.4.1 SDI-12 (Serial Data Interface at 1200 Baud) Protocol:

SDI-12 is an asynchronous serial communication protocol in which digital communication takes place over a single data line. It is a standard for interfacing battery-powered data recorders with microprocessor-based sensors designed for data acquisition. [32] The devices in such a configuration are assigned SDI-12 addresses, by which they communicate. The SDI-12 protocol is an ideal choice for the data acquisition system on the Ironton-Russell Bridge because it has low current drain and low cost, can handle multiple gages with a single datalogger, and can tolerate large distances between sensor and datalogger. [33] The SDI-12 protocol has several advantages:
- Power to the sensors is supplied through the interface to the datalogger; no separate power source is needed.
- SDI-12 data recorders can interface with a variety of sensors, and SDI-12 sensors can interface with a variety of data recorders, which is very useful when many different sensors are involved in a data acquisition system.

- Version compatibility: sensors built to older versions of the standard will work with future versions.
- Reprogramming is not required when an SDI-12 sensor is replaced. [33]
- It is a simple, easy-to-implement protocol for most data acquisition systems.

The electrical interface for SDI-12:

The SDI-12 electrical interface is very simple. It consists of an SDI-12 bus that connects the sensors to the datalogger and carries the serial data between them. The bus can have around 10 sensors connected to it directly, and the distance between a sensor and the datalogger can be large, up to a few hundred feet. [33] Figure 4.9 gives a simple illustration of the SDI-12 electrical interface, showing the datalogger/data recorder and sensors joined by the serial data, ground, and 12-volt lines.

Figure 4.9: The SDI-12 Bus [33]

As indicated in Figure 4.9, the bus consists of three lines:
- The serial data line carries the information bits at a 1200 baud rate; it is a bidirectional, three-state data transfer line [33].
- The ground line is connected to the circuit ground.
- The 12-volt line: the datalogger or data recorder must supply 12 volts with respect to ground.

With the specifics of the SDI-12 protocol understood, the data acquisition system was configured accordingly. To implement the protocol, the AVW200 (the digital signal processing equipment) is assigned a specific SDI-12 address, which is used to communicate with the CR1000 (the datalogger). The AVW200 in turn connects to the multiplexers, which connect to the sensors. The advantage of this mode is the ability to serve many sensors over a single available data channel [34]; this feature of SDI-12 is exploited here by connecting six multiplexers to just one signal processing device (AVW200).
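For orientation, the shape of an SDI-12 exchange (start a measurement with aM!, wait, then request the data with aD0!) can be sketched as follows. This is illustrative only: on the bridge the CR1000 issues these commands itself, and the port name, timings, and read lengths below are assumptions.

```python
import time
import serial  # pyserial

def sdi12_measure(port="/dev/ttyUSB0", address="0"):
    """Minimal sketch of one SDI-12 measurement transaction.

    The initial break (line marking) required by the standard is omitted
    for brevity; values returned are the raw "+value..." response string.
    """
    # SDI-12 runs at 1200 baud with 7 data bits, even parity, 1 stop bit
    with serial.Serial(port, baudrate=1200, bytesize=serial.SEVENBITS,
                       parity=serial.PARITY_EVEN,
                       stopbits=serial.STOPBITS_ONE, timeout=2) as bus:
        bus.write(f"{address}M!".encode("ascii"))   # start a measurement
        reply = bus.read(7).decode("ascii")          # expected form: "atttn\r\n"
        wait_s = int(reply[1:4]) if len(reply) >= 5 else 1
        time.sleep(wait_s)                           # give the sensor time to measure
        bus.write(f"{address}D0!".encode("ascii"))   # request the measured values
        return bus.read(80).decode("ascii")
```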

4.5 Implementation:

The CR1000 datalogger is connected to the AVW200 over an SDI-12 connection. The communication channels of all the multiplexers are daisy-chained and connected to one channel of the AVW200, while the clock and reset inputs of the multiplexers are connected to control ports on the datalogger. The clock is a common wire that goes to all the multiplexers; the reset is a separate input for each multiplexer, so to communicate with a given multiplexer we select the control port corresponding to its reset line. For example, to read the gages on Multiplexer 3, the reset port for Multiplexer 3 is selected and held high (at 4 V); the clock line is common to all the multiplexers. This lets the signal processing equipment access Multiplexer 3, read the frequency response of each gage, and report a frequency measurement along with the corresponding parameters: signal amplitude, signal-to-noise ratio, noise frequency, decay ratio, and thermistor resistance. From these parameters the strain and temperature for each gage are calculated. This information is collected from all gages every 30 minutes and sent to the servers at UCII, with appropriate delays inserted between measurements to avoid any unintended transmission on other ports. The datalogger runs in sequential mode in order to support the SDI-12 communication protocol, and it was programmed to match the hardware configuration and wiring described above. The data collected every 30 minutes are sent to the servers at UCII, which are scheduled to retrieve them every 30 minutes; the collected data are then cleansed and further processed to detect any abnormal behavior. The CRBasic program also incorporates additional modules that communicate with the UCII servers over a TCP/IP (Transmission Control Protocol/Internet Protocol) connection and a web-based API, which is what makes Active Connection possible. The program checks whether a command (for additional measurements on a gage) has been received via a web URL (Uniform Resource Locator); if a command has been received and the program is not in its measurement cycle, the additional measurements are taken. In this way the time between measurements can be used to obtain additional readings.
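For reference, a minimal sketch of the reduction from raw readings to engineering units mentioned above: strain for a vibrating-wire gage is proportional to the change in the square of the resonant frequency, and temperature follows from the thermistor resistance via the Steinhart-Hart equation. The gage factor and thermistor coefficients are placeholders to be taken from the gage calibration sheets; the exact reduction used on the bridge may differ.

```python
import math

def vw_strain_microstrain(freq_hz, freq_zero_hz, gage_factor):
    """Apparent strain (microstrain) from a vibrating-wire reading.

    `gage_factor` comes from the gage's calibration sheet; its value is not
    specified here and is an assumption.
    """
    return gage_factor * (freq_hz**2 - freq_zero_hz**2) / 1000.0

def thermistor_temperature_c(resistance_ohm, a, b, c):
    """Steinhart-Hart conversion of thermistor resistance to temperature (C).

    The coefficients a, b, c are taken from the thermistor data sheet.
    """
    ln_r = math.log(resistance_ohm)
    kelvin = 1.0 / (a + b * ln_r + c * ln_r**3)
    return kelvin - 273.15
```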

It is also worth noting that the main measurement cycle, which collects data every 30 minutes, is not interrupted when such a command is received; the main measurement cycle has a higher priority.

4.6 Timeline for Processing:

The main idea of Active Connection is to take advantage of the time between measurements when additional data are needed. To see how much time is available between the 30-minute measurements on the datalogger, we examine the processing timeline for each 30-minute cycle. Active Connection is triggered only by an event such as an alarm (caused by a threshold violation), so we look at the scenario in which an alarm occurs. With Active Connection, the time remaining within the 30-minute measurement cycle is used to obtain additional data points when an alarm occurs.

Figure 4.10 Timeline for Data Processing on the Servers at UCII and available time for Active Connection

Figure 4.10 shows the timeline for data processing on the servers at UCII. The figure shows processing starting at 11:00:00 AM and ending at 11:30:00 AM, but the same timeline applies to any 30-minute slot.

Every 30 minutes, data are collected from the gages on the Ironton-Russell Bridge, and the data collected by the datalogger are sent to the servers at UCII, which store and process them. The linear regression algorithm is applied to the collected data and checked for any abnormal behavior. If abnormal behavior is detected, an alarm is sent and the Active Connection program is triggered, which leaves around 22 minutes for Active Connection. These 22 minutes can be used to obtain additional readings from the gages for which the alarm was issued and to analyze those measurements.

A detailed discussion of the timeline:

Data Collection on the CR1000: Data are collected from all the gages every 30 minutes and a new record is made. This involves sending excitation signals to all 66 gages on the bridge and collecting readings from each of them; the program on the CR1000 datalogger scans one multiplexer after another until all 66 gages have been read, producing a new record every 30 minutes. After the record is made, the DAT file is updated. [Total time: 3 minutes, 30 seconds]

Processing on the Server: A program on the server is scheduled to check for an updated file, which is obtained by connecting to the datalogger over a TCP/IP connection.

Once an updated file is obtained, the data are entered into a MySQL database, and the new records are processed by running them through the linear regression test; every gage is checked for a threshold violation against the previous month's data. All of the calculations from the linear regression are also updated in the MySQL database. If a threshold violation is present, a flag is set for that particular gage and a warning e-mail is sent to the ODOT officials and the members of UCII informing them of the threshold violation (alarm) for that gage. [Total time: 2 minutes]

Trigger for Active Connection: If a threshold violation (alarm) occurs on any gage, a Python script is triggered. The script first gets the names of the gages that have alarms and then collects the gage information from the database, for example which multiplexer and which channel the gage is wired to. Once this information is obtained, a command is sent to the datalogger. [Total time: 1 minute]

Guard: To avoid collisions (with respect to measurements and data collection) and to account for any other potential delays, a guard band of one and a half minutes is allocated. [Total time: 1 minute, 30 seconds]
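Adding up the stages above gives the time left for Active Connection in each cycle:

```python
# Time budget for one 30-minute cycle, using the stage durations listed above
cycle_minutes = 30.0
stages = {
    "data collection on the CR1000": 3.5,
    "processing on the server": 2.0,
    "trigger for Active Connection": 1.0,
    "guard band": 1.5,
}
remaining = cycle_minutes - sum(stages.values())
print(remaining)  # 22.0 minutes available for Active Connection
```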

From this breakdown it is clear that 22 minutes are left in each measurement cycle that can be utilized for Active Connection.

4.7 Active Connection and the Equipment Upgrade:

The upgrade to the data acquisition system makes it possible to engineer Active Connection so that we can communicate with the acquisition equipment at the bridge at unscheduled times to obtain additional measurements without interrupting the normal 30-minute collection cycle. The network connectivity, programming flexibility, serial communication protocol, and memory storage present in the new system make establishing Active Connection possible; it would not have been possible without the equipment upgrade. To establish a full handshake between the data acquisition system at the bridge and the servers at UCII, several scripts were written in the Python programming language [7], which automate the Active Connection process. The Python scripts on the UCII servers are triggered by particular events (such as alarms for threshold violations); they send the appropriate commands to the data acquisition system on the bridge, receive the data obtained by those commands, process the data, and send the resulting information via e-mail.

4.8 Processes Involved in the Implementation of Active Connection:

Implementing Active Connection on the bridge monitoring system for the Ironton-Russell Bridge involves a number of software and hardware tools. The processes involved are shown in Figure 4.11. The bridge monitoring system uses the linear regression method to find abnormal behavior: temperature and strain readings are collected from the gages on the bridge every 30 minutes, and the linear regression algorithm looks for abnormal behavior with respect to a 30-day window. If the algorithm finds abnormal behavior, an alarm is sent out to the ODOT officials and the members of the UCII lab. Because more readings from the gage give more information about its behavior, additional measurements are then taken using Active Connection. The alarm from the main bridge monitoring system triggers a Python program on the server at the UCII lab. To illustrate the steps involved in Active Connection, the events corresponding to the blocks in Figure 4.11 are labeled below.

Event A: Active Connection focuses on getting more data for one or more gages, so the information about which gage(s) have alarmed is used as the trigger for a Python script that takes this information and sends commands to obtain more data. Active Connection begins after the main processing program has processed the data collected in the 30-minute cycle; any gages that cause alarms (threshold violations) are marked for additional data collection.
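A minimal sketch of this per-gage check, assuming the strain reading is regressed on the concurrent temperature over the trailing 30-day window (the thesis's actual regressors and implementation may differ):

```python
import numpy as np

def check_gage(strain_window, temp_window, new_strain, new_temp, k=8.0):
    """Per-gage alarm check (sketch).

    strain_window, temp_window : numpy arrays covering the trailing 30-day window
    new_strain, new_temp       : the reading just collected
    k                          : threshold multiple of sigma (8 for the main
                                 alarm, 4 for the demonstration in Section 4.9)
    """
    slope, intercept = np.polyfit(temp_window, strain_window, 1)
    residuals = strain_window - (slope * temp_window + intercept)
    sigma = residuals.std()
    residual = new_strain - (slope * new_temp + intercept)
    return abs(residual) > k * sigma, residual, sigma
```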

Fig 4.11 Flow Chart of the Active Connection process: (A) alarm (threshold violation) on gage(s); (B) select gage based on priority; (C) send web commands to the CR1000; (D) CR1000 collects additional data; (E) LoggerNet collects the data every minute; (F) push data to the MySQL database; (G) sequential analysis calculations; (H) more data needed or time out? (loops back to C while more data are needed and time remains); (I) send e-mail.

The information corresponding to those gages (for example, multiplexer number and channel number) becomes the input for the next step.

Event B: If more than one gage generated an alarm in the previous step, we need to decide which gage to handle first, that is, for which gage the additional data should be collected first. To make this decision, we select the gage with the largest deviation from the threshold; data are collected for that gage first. Once a severity level has been assigned to it, the gage with the largest deviation among the remaining gages is selected for additional data, and so on. In this way, additional data are always obtained first for the gages with the greatest deviation from the predicted value.
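A minimal sketch of this prioritization, assuming each alarmed gage is described by a small record holding its residual, sigma, and wiring information (the field names are illustrative):

```python
def order_alarmed_gages(alarmed):
    """Order alarmed gages so the largest deviation is handled first.

    Each entry is assumed to look like
    {"gage": "L14U14S_KY", "residual": 96.0, "sigma": 15.5, "mux": 3, "channel": 7}.
    """
    return sorted(alarmed, key=lambda g: abs(g["residual"]) / g["sigma"],
                  reverse=True)
```

Each time a severity level is assigned, the next gage in this ordering is taken up, matching the behavior described above.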

Web Commands and Event C: The Python program takes the gage information (after considering its priority) as input and sends web-based commands to the web server running on the datalogger (CR1000). The enhanced networking features of the CR1000, together with peripherals such as the NL115, make this possible. An IP (Internet Protocol) address is assigned to the datalogger through the NL115 and becomes the base URL (Uniform Resource Locator) for the datalogger. The web server also provides security features that allow a username and password to be used for authentication; only if these credentials are authenticated can commands be sent to the datalogger. The credentials (username and password) are stored on the datalogger. The command specifies the gage and the number of measurements to be taken, and it is sent through a web URL. The URL consists of [35]:
- the IP address and port number of the datalogger,
- the type of command,
- the value for the parameter (gage information such as the multiplexer number), and
- the format of the response (generally HTML). [32]

The Python program builds this URL from the gage information and the number of additional points to be taken. Because the datalogger runs a web server and has a TCP/IP connection, actively sending commands to the datalogger in this way is possible.
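A sketch of sending such a command from the Python side is shown below. The command name "SetValueEx" and the public variable "MuxRequest" are placeholders standing in for whatever the CRBasic program actually exposes; only the overall pattern, an authenticated HTTP request to the datalogger's base URL carrying a command, a target, and a value, follows the description above.

```python
import requests

def request_extra_readings(logger_ip, user, password, mux, channel, n_points):
    """Send a hypothetical 'take extra readings' command to the CR1000 web server."""
    params = {
        "command": "SetValueEx",        # assumed command name
        "uri": "dl:Public.MuxRequest",  # hypothetical public variable in the CRBasic program
        "value": f"{mux},{channel},{n_points}",
        "format": "json",
    }
    resp = requests.get(f"http://{logger_ip}/", params=params,
                        auth=(user, password), timeout=10)
    resp.raise_for_status()
    return resp.json()
```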

Event D: Once the credentials are authenticated, the commands are executed and the additional readings for the particular gage are taken, but only if the datalogger is not in its main measurement cycle. If it is, the information received from the web commands is stored in the datalogger's memory and the additional measurements are taken after the measurement cycle completes. A measurement is taken every 10 seconds. Once these measurements are taken, they are stored in a DAT-format file and sent to the servers at the UCII lab over a TCP/IP (Transmission Control Protocol/Internet Protocol) connection.

Event E: The LoggerNet software monitors the files received from the datalogger via TCP/IP. Once LoggerNet detects an update to the DAT file, it triggers another Python script called Push to Database.

Event F: The Push to Database script pushes the data from the DAT file into the MySQL database on the UCII server, where it is later used for sequential analysis.
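A sketch of the Push to Database step, assuming the DAT file is a TOA5-style CSV and using mysql-connector-python; the column positions and the table and column names are illustrative only:

```python
import csv
import mysql.connector  # assumed available on the UCII server

def push_to_database(dat_path, conn):
    """Read the records from the .DAT file and insert them into MySQL.

    A TOA5-style layout with four header lines is assumed; adjust the
    column indices to match the actual file.
    """
    with open(dat_path, newline="") as f:
        reader = csv.reader(f)
        for _ in range(4):              # skip the assumed TOA5 header block
            next(reader)
        rows = [(r[0], r[2], r[3]) for r in reader]   # timestamp, gage, strain (illustrative)
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO active_connection (ts, gage, strain) VALUES (%s, %s, %s)",
        rows)
    conn.commit()
```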

Event G: The data stored in the MySQL database are retrieved and used as the input for sequential analysis. The theory of sequential analysis is discussed in Chapter 3: the readings are treated as sequential, independent measurements, and a sequential probability ratio test is used for hypothesis testing. Using the sequential probability ratio test, we extract information from the additional measurements and decide whether the alarm is of concern. After the measurements are processed, a severity level (a color) is assigned based on the value obtained from the sequential probability ratio test (SPRT); the thresholds obtained from the sequential analysis discussed in Chapter 3 are used for this purpose. All of the calculations and the severity level corresponding to each measurement are also updated in the MySQL database so that complete information about the measurement is available.

Event H: Once the sequence of measurements has been analyzed by sequential analysis, we check whether the gage has returned to its normal behavior. The color scheme for the severity levels is as follows:

Green: the gage has returned to its normal behavior; the sequential probability ratio is below the lower limit B and above the minimum value possible, i.e., the SPRT value at a residual of zero.

Yellow: more data are needed to make a decision; the sequential probability ratio lies between the thresholds B and A, and the whole process is repeated until a decision can be made.

Red: the sequential probability ratio has crossed the upper threshold A, indicating that the gage is not behaving normally.

Black: the sequential probability ratio has not only exceeded the upper threshold but has also exceeded the sequential probability ratio at the Delta of Concern; since the Delta of Concern marks a region associated with failure of that particular member, this indicates that the gage is definitely not in its normal behavior.

Figure 4.12 Severity levels (colors) for the Sequential Probability Ratio Test (SPRT)

If a gage's severity is concluded to be Green, the gage has returned to its normal behavior and there is no need to take additional measurements for it. But if other gages also have alarms at that point, the next gage should be selected for Active Connection. The program therefore checks whether there are gages still needing additional data and, if there are, whether there is time remaining for Active Connection; because the window for Active Connection is only 22 minutes, it is important to confirm there is enough time left for the processing. If there is time left and gages are still marked for additional data, the program calls the function that sends the commands to the datalogger (Event C), and the process repeats.

Event I: If, in the previous step, there are no additional data to be collected or the time for Active Connection is exhausted, an e-mail is sent out indicating the severity level for each gage that was processed using sequential analysis. The e-mail format is discussed below.

E-mail Format: Because the time available for Active Connection is limited and the next measurement cycle starts after that window, all the data collected within the window are processed and the information extracted from them is sent as a follow-up e-mail to the ODOT officials and the members of UCII. This e-mail contains:
- The name(s) of the gage(s) that caused the alarm
- The number of additional measurements
- The severity level (Black, Red, Yellow, or Green)
- The percentage of additional measurements below the threshold (according to the linear regression algorithm)
- A conclusion as to whether the gage has returned to normal behavior
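A sketch of composing and sending this follow-up e-mail with Python's standard library; the addresses, SMTP host, and dictionary keys are placeholders:

```python
import smtplib
from email.message import EmailMessage

def send_followup(summary, recipients, sender="monitor@example.edu",
                  smtp_host="localhost"):
    """Compose and send the follow-up e-mail for one processed gage.

    `summary` holds the fields listed above; all names here are illustrative.
    """
    msg = EmailMessage()
    msg["Subject"] = f"Active Connection follow-up: {summary['gage']} ({summary['severity']})"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(
        f"Gage: {summary['gage']}\n"
        f"Additional measurements: {summary['n_points']}\n"
        f"Severity level: {summary['severity']}\n"
        f"Measurements below threshold: {summary['pct_below']}%\n"
        f"Returned to normal behavior: {summary['back_to_normal']}\n")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```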

4.9 Results:

The Active Connection feature was implemented on the Ironton-Russell Bridge monitoring system. According to the algorithm currently used on this system, a violation of the 8*σ threshold causes an alarm, but such instances are not common: across all 66 gages on the Ironton-Russell Bridge there were only 38 such instances in 2012 and only 29 in 2011. To illustrate the applicability of sequential analysis to Active Connection, the trigger for Active Connection was therefore lowered to 4*σ; with the lower threshold, more instances for Active Connection were obtained. Figure 4.13 shows the log likelihood ratio plot (against residuals) for the 4*σ violation case.

Figure 4.13 Plot of the Log Likelihood Ratio for U6U7_KY (shift from 0 to 124, corresponding to 8*σ)

The calculations for the sequential analysis were adjusted accordingly; in this way the system was scaled down to the lower threshold. Some of the resulting Active Connection cases are discussed in the remainder of this chapter.

There was a 4*σ threshold violation on gage L14U14S_KY on 26th January 2013 at 10:30 AM. The raw strain graph is shown in Figure 4.14 [23]; the violation is indicated by point A.

Figure 4.14 The 4*σ Threshold Violation on L14U14S_KY on 26th January 2013 at 10:30 AM

From this graph it is clear that there is a spike in the strain value for this gage at 10:30 AM. Since the trigger for Active Connection was set at the lower threshold of 4*σ, additional measurements were taken for this violation and sequential analysis was applied to them. Figure 4.15 illustrates the results obtained from Active Connection for this alarm.

As mentioned in Chapter 3, the sequential probability ratio test is applied to every measurement obtained using Active Connection. In Figure 4.15 we can clearly see the spike at 10:30 AM (indicated by a black dot and the letter A). After that, Active Connection is triggered and more data are obtained; from 10:35:40 AM onwards there is a data point every 10 seconds. As the additional data points are collected, the sequential probability ratio is updated as well. As discussed in Chapter 3, the SPRT statistic accumulates over consecutive data points, so its trend is not exactly the same as that of the strain measurements themselves. The limits set for the SPRT in Chapter 3 are used here, and a color is assigned after each measurement depending on those limits.

Figure 4.15 Active Connection Data for Gage L14U14S_KY for the 4σ Violation on 26th January 2013 at 10:30:00 AM

In Figure 4.15 the color given to each data point represents the region in which its corresponding SPRT value lies. From the figure it is also clear that as the strain decreases, the SPRT takes on a decreasing slope, and the measurements receive a Green severity level, indicating that the gage is back in its normal behavior. The same can be observed in Figure 4.14: the gage had a spike but returned to its normal behavior before 11 AM. Active Connection was able to use the time between measurement cycles to reach the same conclusion before the next measurement cycle began. A gage will not always return to its normal behavior before the next measurement cycle, but when it does, Active Connection can report that information.

A similar case occurred on gage M5L6B_OH on 24th January 2013. There is clearly a spike in the strain values at 13:00:00 (represented by A) in Figure 4.16 [23]; it was a 4*σ violation.

Figure 4.16 Strain Graph for Gage M5L6B_OH on 24th January 2013 [4σ Violation at 13:00:00]

Since this strain value was a 4*σ violation, Active Connection was triggered and additional data points were obtained and analyzed. Figure 4.17 shows the data obtained from Active Connection for this violation. In Figure 4.17 the data point at 13:00:00 is marked in black (represented by A), indicating the 4*σ violation. From the plot it is clear that the raw strain is not very stable, but as the strain decreases consistently, the SPRT value decreases too; once the SPRT drops below the lower threshold it falls into the green region, the data points receive a Green level, and we can say that the gage has returned to its normal behavior.

Figure 4.17 Active Connection Data for Gage M5L6B_OH for the 4σ Violation on 24th January 2013 at 13:00:00


More information

Christos Papadopoulos

Christos Papadopoulos CS557: Measurements Christos Papadopoulos Adapted by Lorenzo De Carli Outline End-to-End Packet Dynamics - Paxon99b Wireless measurements - Aguayo04a Note: both these studies are old, so the results have

More information

An Introduction to Markov Chain Monte Carlo

An Introduction to Markov Chain Monte Carlo An Introduction to Markov Chain Monte Carlo Markov Chain Monte Carlo (MCMC) refers to a suite of processes for simulating a posterior distribution based on a random (ie. monte carlo) process. In other

More information

packet-switched networks. For example, multimedia applications which process

packet-switched networks. For example, multimedia applications which process Chapter 1 Introduction There are applications which require distributed clock synchronization over packet-switched networks. For example, multimedia applications which process time-sensitive information

More information

Transducers and Transducer Calibration GENERAL MEASUREMENT SYSTEM

Transducers and Transducer Calibration GENERAL MEASUREMENT SYSTEM Transducers and Transducer Calibration Abstracted from: Figliola, R.S. and Beasley, D. S., 1991, Theory and Design for Mechanical Measurements GENERAL MEASUREMENT SYSTEM Assigning a specific value to a

More information

Name: Date: Period: Chapter 2. Section 1: Describing Location in a Distribution

Name: Date: Period: Chapter 2. Section 1: Describing Location in a Distribution Name: Date: Period: Chapter 2 Section 1: Describing Location in a Distribution Suppose you earned an 86 on a statistics quiz. The question is: should you be satisfied with this score? What if it is the

More information

Detector Data Extractor V3.5 User Manual

Detector Data Extractor V3.5 User Manual TDRL, DetectorExtractor V3.5 Manual, Page 1 Detector Data Extractor V3.5 User Manual Provided by Transportation Data Research Laboratory A Division of Northland Advanced Transportation Systems Research

More information

Part I, Chapters 4 & 5. Data Tables and Data Analysis Statistics and Figures

Part I, Chapters 4 & 5. Data Tables and Data Analysis Statistics and Figures Part I, Chapters 4 & 5 Data Tables and Data Analysis Statistics and Figures Descriptive Statistics 1 Are data points clumped? (order variable / exp. variable) Concentrated around one value? Concentrated

More information

Effects of PROC EXPAND Data Interpolation on Time Series Modeling When the Data are Volatile or Complex

Effects of PROC EXPAND Data Interpolation on Time Series Modeling When the Data are Volatile or Complex Effects of PROC EXPAND Data Interpolation on Time Series Modeling When the Data are Volatile or Complex Keiko I. Powers, Ph.D., J. D. Power and Associates, Westlake Village, CA ABSTRACT Discrete time series

More information

Wireless Vehicular Blind-Spot Monitoring Method and System Progress Report. Department of Electrical and Computer Engineering University of Manitoba

Wireless Vehicular Blind-Spot Monitoring Method and System Progress Report. Department of Electrical and Computer Engineering University of Manitoba Wireless Vehicular Blind-Spot Monitoring Method and System Progress Report Department of Electrical and Computer Engineering University of Manitoba Prepared by: Chen Liu Xiaodong Xu Faculty Supervisor:

More information

Basic Concepts of Reliability

Basic Concepts of Reliability Basic Concepts of Reliability Reliability is a broad concept. It is applied whenever we expect something to behave in a certain way. Reliability is one of the metrics that are used to measure quality.

More information

1 Methods for Posterior Simulation

1 Methods for Posterior Simulation 1 Methods for Posterior Simulation Let p(θ y) be the posterior. simulation. Koop presents four methods for (posterior) 1. Monte Carlo integration: draw from p(θ y). 2. Gibbs sampler: sequentially drawing

More information

Statistical Techniques in Robotics (16-831, F10) Lecture #02 (Thursday, August 28) Bayes Filtering

Statistical Techniques in Robotics (16-831, F10) Lecture #02 (Thursday, August 28) Bayes Filtering Statistical Techniques in Robotics (16-831, F10) Lecture #02 (Thursday, August 28) Bayes Filtering Lecturer: Drew Bagnell Scribes: Pranay Agrawal, Trevor Decker, and Humphrey Hu 1 1 A Brief Example Let

More information

IMAGE DE-NOISING IN WAVELET DOMAIN

IMAGE DE-NOISING IN WAVELET DOMAIN IMAGE DE-NOISING IN WAVELET DOMAIN Aaditya Verma a, Shrey Agarwal a a Department of Civil Engineering, Indian Institute of Technology, Kanpur, India - (aaditya, ashrey)@iitk.ac.in KEY WORDS: Wavelets,

More information

Descriptive Statistics, Standard Deviation and Standard Error

Descriptive Statistics, Standard Deviation and Standard Error AP Biology Calculations: Descriptive Statistics, Standard Deviation and Standard Error SBI4UP The Scientific Method & Experimental Design Scientific method is used to explore observations and answer questions.

More information

Time Series Analysis by State Space Methods

Time Series Analysis by State Space Methods Time Series Analysis by State Space Methods Second Edition J. Durbin London School of Economics and Political Science and University College London S. J. Koopman Vrije Universiteit Amsterdam OXFORD UNIVERSITY

More information

Response to API 1163 and Its Impact on Pipeline Integrity Management

Response to API 1163 and Its Impact on Pipeline Integrity Management ECNDT 2 - Tu.2.7.1 Response to API 3 and Its Impact on Pipeline Integrity Management Munendra S TOMAR, Martin FINGERHUT; RTD Quality Services, USA Abstract. Knowing the accuracy and reliability of ILI

More information

Quality Control-2 The ASTA team

Quality Control-2 The ASTA team Quality Control-2 The ASTA team Contents 0.1 OC curves............................................... 1 0.2 OC curves - example......................................... 2 0.3 CUSUM chart.............................................

More information

Analytical Techniques for Anomaly Detection Through Features, Signal-Noise Separation and Partial-Value Association

Analytical Techniques for Anomaly Detection Through Features, Signal-Noise Separation and Partial-Value Association Proceedings of Machine Learning Research 77:20 32, 2017 KDD 2017: Workshop on Anomaly Detection in Finance Analytical Techniques for Anomaly Detection Through Features, Signal-Noise Separation and Partial-Value

More information

CHAPTER 2 Modeling Distributions of Data

CHAPTER 2 Modeling Distributions of Data CHAPTER 2 Modeling Distributions of Data 2.2 Density Curves and Normal Distributions The Practice of Statistics, 5th Edition Starnes, Tabor, Yates, Moore Bedford Freeman Worth Publishers Density Curves

More information

Lecture: Simulation. of Manufacturing Systems. Sivakumar AI. Simulation. SMA6304 M2 ---Factory Planning and scheduling. Simulation - A Predictive Tool

Lecture: Simulation. of Manufacturing Systems. Sivakumar AI. Simulation. SMA6304 M2 ---Factory Planning and scheduling. Simulation - A Predictive Tool SMA6304 M2 ---Factory Planning and scheduling Lecture Discrete Event of Manufacturing Systems Simulation Sivakumar AI Lecture: 12 copyright 2002 Sivakumar 1 Simulation Simulation - A Predictive Tool Next

More information

Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension

Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension Estimating Noise and Dimensionality in BCI Data Sets: Towards Illiteracy Comprehension Claudia Sannelli, Mikio Braun, Michael Tangermann, Klaus-Robert Müller, Machine Learning Laboratory, Dept. Computer

More information

DESCRIPTION OF THE LABORATORY RESEARCH FACILITY:

DESCRIPTION OF THE LABORATORY RESEARCH FACILITY: DESCRIPTION OF THE LAORATORY RESEARCH FACILITY: Introduction Drexel researchers have designed and built a laboratory (Figure 1) in order to support several ongoing field research projects and to facilitate

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information