Test Program Debug using Data Analysis


Introduction

Test program development and debug can be a difficult, time-intensive process. Several variables must be accounted for and eliminated in the quest for a stable, repeatable, and effective test. Elements like design marginality, loadboard performance, tester accuracy, multi-site interaction, and environmental noise must be isolated and quantified so that the test software's true capabilities and performance can be understood. Fundamental to the process is the collection and analysis of test data. A good data analysis package and sound data collection techniques are a fundamental part of the test engineer's toolbox.

Measurements

Semiconductor device test programs take many measurements. The measurements are typically physical quantities (voltage, current, time, etc.) that are related to design constraints and are tested against targeted design specifications. One goal is to ensure the measured value is accurate and that the measurement meets the design specifications (is within the specification limits). The tester and test program evaluate the measured value relative to the specification limits and categorize the device as either passing or failing. While quality is paramount, the primary goal of any business is to make a profit. Cutting-edge specifications, difficult-to-test parameters, and program issues can result in significant yield loss, which lowers profit margins. Understanding and balancing the competing goals of quality and yield is an everyday part of the test engineer's duties.

Figure 1: Trend Chart - test marginal to upper spec limit

Passing Bad Parts/Failing Good Parts

Error in the measurement process, whether due to marginal product performance or problems with the measurement system, can result in two possible negative outcomes.
The first is categorizing a failing device as good (passing) because the error is sufficient to cause a failing measurement to register within the specification limits. The second is a passing value registering as a failing measurement, resulting in good product being discarded because of error. A test program may contain tens of thousands of tests; the potential for these types of errors exists for each and every one. Figure 2 shows a histogram of a test with a marginal value. A significant amount of variation in the test measurement can result in the value testing on either side of the lower limit (noted with the green LLimit line).

A single run of this device may pass one time and fail the next. With an understanding of the amount of error in the system and an analysis of the typical process performance, limits can be generated to ensure this problem doesn't halt the manufacturing process (QA gate failure).

Figure 2: Marginal Histogram - single value close to lower limit

Measurement Variance

The semiconductor test industry exists because of device variation. If there were a guaranteed method for ensuring every device shipped met every design spec, test engineers would be out of work! Measure a parameter for several different devices and you'll likely get several different values. Measure a parameter many times for a single device and you're also likely to get several different values. Figure 3 illustrates this concept. The variation in both scenarios comes from several sources, and it is helpful to understand the types of error. Variations and some of their sources include:

Device Variation - differences in the measurements between typical, representative devices
- Manufacturing issues or differences - process problems, contamination, production at multiple fabs
- Stringent or cutting-edge designs - designs that push the technological boundaries of the fab process
- Testability problems - non-deterministic operations that work fine in the application but are difficult to test

Measurement Variation - differences in measurements due to conditions in the measurement system
- Device Interface Board (DIB, Loadboard) - integrity of board layout, proper use of components
- Test equipment (ATE) - instrument capability, correct usage of instrument
- Test Program - programmable ranges, operating conditions, settling time, bugs
- Environmental noise - electromagnetic interference, multi-site crosstalk, unaccounted inductance/capacitance

During development and debug, it is useful to quantify and identify the different sources of variation.
With the exception of trimming, test engineers can do little more than relay significant sources of device variance back to the design team for future improvements. Measurement variation, once understood, can be managed by improvements to the test program, equipment, or environment. Understanding the source of error can reduce development cycle time by preventing the test engineer from wasting time trying to "test away" a design or equipment issue.
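To separate measurement variation from device variation, a common first step is to measure each correlation unit repeatedly and compare the spread within a device (repeated runs of the same part) to the spread between devices. A minimal sketch in Python; the serial numbers and values are illustrative, not from a real datalog:

```python
import statistics

# Hypothetical repeated measurements: 3 correlation units, 4 runs each.
runs = {
    "SN001": [1.201, 1.203, 1.202, 1.204],
    "SN002": [1.251, 1.249, 1.252, 1.250],
    "SN003": [1.180, 1.182, 1.181, 1.183],
}

# Within-device spread approximates measurement variation (same part, re-run).
within_sigma = statistics.mean(statistics.stdev(v) for v in runs.values())

# Spread of the per-device means approximates device-to-device variation.
between_sigma = statistics.stdev(statistics.mean(v) for v in runs.values())

print(f"measurement sigma ~ {within_sigma:.4f}")
print(f"device sigma      ~ {between_sigma:.4f}")
```

Here the device-to-device spread dominates the run-to-run spread, suggesting the measurement system, not the devices, is stable.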

Figure 3: Histogram - variation of two devices relative to population

Correlation Units

A good selection of correlation units is necessary for many data analysis and characterization activities. Early in the development and debug phase, a smaller set of units is typically used for developing new tests and ensuring basic repeatability. The proverbial "bin 1" development phase is commonly achieved using a handful of samples from the first batch of silicon. The initial debug samples may not represent the true range of operation or manufacturing variance, which can be useful for removing at least one variable from the debug equation. Many analysis techniques require the correlation units to be ordered, using a physical serial number marked on the exterior (where possible) or, best case, some mechanism for identifying them via the test program by reading and logging a unique serial number programmed in an on-chip EEPROM, ECID, fuses, etc. Analysis of wafer probe data has the benefit of being able to reliably locate a device by the wafer ID and X/Y coordinates, which are typically logged in the data files. Recognizing and quantifying product variability, even for small, single-lot groups, relies on being able to identify and distinguish between the serial numbers of your correlation units during data analysis. Having an ECID or EEPROM on-chip is ideal but not necessary for proper data collection and analysis. Most ATE systems allow you to set the device-id parameter for each run of the test program, and you can use this mechanism to properly identify each device serial number while collecting data.

Data Collection and Experiments

Most data analysis activities require some amount of up-front consideration before collecting data. The proper construction and setup of an experiment is critical to the quality and usefulness of the data. Data collection must be done in a way that allows the different variables of interest to be isolated so that they may be extracted and understood. Typical parameters of interest when collecting data include:

- Device Interface Board (DIB) or Loadboard - The loadboard can be a significant source of error, especially when additional hardware is required on the DIB, when testing high-speed digital or high power, or when making very precise measurements.
- Multi-Site Testing - If more than one site is tested at a time, it is common to quantify the differences in performance between sites.
- Tester (ATE) - Final manufacturing is almost always done with a fleet of testers. Understanding and quantifying the effects of the tester is frequently required to ensure consistent operation from one tester to another.
- Voltage Effects - Many tests are performed at different operating voltages. Looking for differences across voltages may require special consideration when collecting data, such as test name and number conventions, continuing on failures, etc.
- Temperature Testing - Studying the effects of temperature likely requires serializing devices and, like voltage testing, may require continuing when failures occur to get a complete set of data.

The order and method used in the collection of the analysis data is critical and can easily mislead if done incorrectly. Generally, data should be taken so that each variable of interest can be isolated and evaluated. This is done by developing experiments that allow for dissecting the different parameters of interest. As an example, take the histogram shown in Figure 4.
Does the separation shown between the four different sites indicate a site problem, loadboard problem, tester problem, voltage difference, or significant device variation? Without an understanding of the methodology used to take the data, the question cannot be answered.

Figure 4: Histogram, quad-site program - Is the variance related to loadboard, tester, sites, or devices?

This specific example consists of four devices, tested 10 times each in four different sites, using a single loadboard and tester. Since each device was tested multiple times in each site, some of the possible theories can be eliminated. An additional experiment using multiple loadboards eliminated the site theory and exposed the problem as dirty contactors (sockets), revealing the issue to be related to the loadboard. Data collection is critical for isolating sources of error. Generally, it is helpful to ensure the device variation is distributed across any parameters that are suspected sources of variance or error. In the example above, four devices were tested multiple times in each site, for each loadboard. Variations between the four devices were distributed across the parameters of interest, namely site and loadboard, by moving these devices through every possible combination of site and loadboard. Under these conditions, we can quantify the effects of the variables of interest (site and loadboard) and not get side-tracked chasing device variation.

Analysis Techniques

There is no single analysis technique for identifying all potential test issues. Typically, a suite or hierarchy of techniques provides the best coverage and can be used to pare down large amounts of test data. Many times, one technique will identify a host of similar issues that can be adjusted or accounted for by changes to the test program, loadboard, or tester hardware. In other situations, it may not be possible to correct an issue entirely, and the suite of analysis techniques can be used to justify modifications to the product's design.

GROUPING

Before doing any type of analysis, it is necessary to group the device data records based on the parameters of interest. As an example, in a multi-site program, each test record from Site 1 should be logically grouped together, and likewise groups should be created for all other sites. Imagine the trend chart in Figure 5 without the color-coded groups. It is obvious an issue exists, but it might take some time to figure out that the problem is related to sites.

Figure 5: Trend Chart - What happened to sites 1 and 2?
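Grouping in most analysis packages amounts to partitioning datalog records by a parameter of interest and computing statistics per group. A minimal Python sketch, using made-up (site, value) records:

```python
from collections import defaultdict
import statistics

# Hypothetical datalog records: (site, measured_value). Values are illustrative.
records = [
    (1, 0.502), (1, 0.498), (1, 0.501),
    (2, 0.499), (2, 0.503), (2, 0.500),
    (3, 0.651), (3, 0.648), (3, 0.653),  # sites 3 and 4 look shifted,
    (4, 0.649), (4, 0.652), (4, 0.650),  # like sites 1 and 2 in Figure 5
]

# Partition records by site.
groups = defaultdict(list)
for site, value in records:
    groups[site].append(value)

# Per-group statistics make a site-to-site shift obvious at a glance.
for site in sorted(groups):
    vals = groups[site]
    print(f"site {site}: mean={statistics.mean(vals):.4f} "
          f"sigma={statistics.stdev(vals):.4f}")
```

The same partitioning works for any parameter of interest (loadboard, tester, device, temperature); real tools simply automate this bookkeeping.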
Grouping allows the statistics, graphs, and other parameters of interest to be calculated for the subset of device records in each group. In the site example, this allows the effects of each site to be measured and compared to the other sites, quickly identifying differences between similar groups. Data analysis programs differ in how grouping is done, but it is a common feature and required for most types of analysis. Figure 6 shows a typical set of groups created for isolating 4 sites, 2 loadboards, 2 testers, and 8 individual devices. The data presented in this figure was taken for a Gage R&R exercise, which commonly requires associating like parameters in groups. This type of grouping allows for analysis and isolation of any of the grouped parameters.

Figure 6: Data Grouping - Create groups for each parameter of interest (site, loadboard, tester, & device)

OUTLIERS

Statistical outliers are "observation points that are distant from other observations" (Wikipedia, 2015). Depending on your analysis goals, there is a high likelihood that you will want to identify and potentially isolate statistical outliers. In a controlled experiment like a repeatability and reproducibility study, the very presence of outliers may be an easy-to-identify, direct indication of a problem that needs to be addressed. Under these conditions, outliers are likely unexpected, and their associated context (data group or device group) can be quickly identified. The conditions that resulted in the outliers can be recreated (get the parts, sites, boards, testers, etc. that produced them) and more experiments performed to flush out potential issues. In the suite or hierarchy of analysis techniques, outlier detection resides at the top and is often one of the first tools brought to bear. In other situations, like characterization, where parameters such as process, temperature, and voltage (PVT) are varied, outliers are commonplace and somewhat expected. The corners in a PVT characterization process often

result in operational regions that can cause unusual conditions. The resulting outliers may again be used to identify and isolate these conditions, as in the previous discussion, resulting in useful knowledge about the operating range of the device. However, these distant observations may pollute the data and make any further analysis difficult or meaningless. Under these conditions, outlier detection and removal is likely needed to ensure the integrity of subsequent analysis techniques.

Figure 7: Histograms - before and after outlier removal, using Tukey limits

The presence of outliers can also be a direct consequence of choices made during test program development. Say a test is developed to measure a reference voltage. The test requires the device to be set up and configured properly ("booted up") to make the measurement. If the device does not boot properly, then the measurement is meaningless because the chip is not in the correct state. For this example, when the chip boots properly and gets to the known state, the resulting reference voltage is about 1.2 V. If the chip does not boot properly, the measured voltage could be any value. The test programmer may choose to verify the chip booted correctly, and therefore have some confidence in the measurement value. Perhaps the test engineer chooses a default, failing value to ensure the part is binned as a failure if the boot-up fails, something very far outside the limits. Alternatively, the engineer may make the measurement anyway, regardless of the boot-up performance, and test and log the result as-is (likely in a characterization scenario). These programming cases are all valid, acceptable practices, but the resulting data and its analysis differ significantly, as follows:

- Data is not logged on boot failure - If the program aborts the measurement when the boot fails, then the measurement is never made and is never logged.
Outliers are never introduced into the population because the program was configured to skip the measurement. This may complicate some analysis techniques because the measurement counts may differ from test to test and device to device.

- Default value is logged on boot failure - The program is configured to datalog a default, typically failing, value when the boot-up fails. This technique may be used when an unknown error, alarm, or failure occurs and the programmer wants to ensure the test fails. This introduces a consistent, known outlier value that is easy to identify and handle.

- Force the measurement, regardless of boot failure - This technique is typical in characterization tests where a measurement is almost always required. The situation can be difficult to analyze because there may be no way of knowing anything about the integrity of the measurements. In the example, the out-of-state measurement could be anything, even a value that is coincidentally aligned with or near the expected measurement. This condition can result in a data analysis mystery that may take months to investigate. It may potentially mask a device configuration problem (setup pattern) as a marginal, high-variance measurement problem.
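Outlier removal with Tukey limits, as in Figure 7, fences the data at 1.5 times the interquartile range beyond the first and third quartiles. A minimal sketch with illustrative values, including one boot-failure default value (9.99) polluting an otherwise tight reference-voltage population:

```python
import statistics

# Hypothetical reference-voltage measurements; 9.99 is a logged default value.
data = [1.19, 1.20, 1.21, 1.20, 1.22, 1.18, 1.21, 1.20, 9.99]

# Tukey fences: 1.5 * IQR beyond the first and third quartiles.
q1, _, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [x for x in data if not lo_fence <= x <= hi_fence]
cleaned = [x for x in data if lo_fence <= x <= hi_fence]

print("outliers:", outliers)
print(f"sigma before={statistics.stdev(data):.3f} "
      f"after={statistics.stdev(cleaned):.3f}")
```

A single logged default value can inflate sigma by orders of magnitude, which is why removal (or at least identification) usually precedes the statistics discussed next.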

STANDARD DEVIATION, SIGMA

Many of the common statistics and manufacturing quality metrics have at their base a reference to standard deviation. Standard deviation "is a measure that is used to quantify the amount of variation or dispersion of a set of data values" (Wikipedia S. D., 2015). Several common decisions within test program development can make this very useful statistic misleading. In the case of the reference voltage example, choosing to log a large default value will result in a high sigma unless the outliers are removed. A very easy and obvious analysis technique is to sort tests by sigma, then look for exceptionally high values. A slightly less intuitive technique is to look for tests with zero or very small standard deviation. There are several valid situations that might result in a sigma of zero, such as a measurement that is rounded to an integer, trimming values, or functional (go/no-go) tests logged as parametric (1/0) values. There are other cases, however, where a standard deviation of zero indicates a problem with a test.

PROCESS CAPABILITY INDEX, CPK

One of the mainstays of manufacturing quality metrics, Cpk is a measurement of the amount of variation (standard deviation) and of how centered the measurement process is, relative to the design specification limits:

    Cpk = min( (USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma) )

While the calculation is a quick and effective way of reducing the evaluation process down to a single number, the values can be misleading. In addition to the large standard deviation situations described earlier, Cpk relies on the test limits being accurate and representative of the actual test process. Arbitrarily set limits, as opposed to statistically significant limits, can result in extremely large Cpk values. A change in the fabrication or test process may result in a shift in the data, an event that should trigger an examination of the process to ensure quality hasn't been sacrificed.
This shift may cause an extremely large Cpk value to be reduced to what would still be considered a reasonable level. Depending on Cpk alone may mask this situation and result in a missed opportunity to identify a quality problem.

GROUP AND TEST COMPARISONS

A common and easy-to-perform technique for identifying issues is to look for shifts between similar-type groups. As an example, suppose you want to identify the largest shift related to temperature testing. An experiment for this analysis may be as simple as taking a batch of correlation units and testing them at three different temperatures. After grouping the device records according to their test temperature, several analysis techniques can be performed to find the tests that are most susceptible to temperature changes. In Figure 8, you can see a simple analysis for finding the largest shift between the means of the three different temperatures: hot, room, and cold. The difference between means is expressed as a percentage of the limits, using the following equation:

    %Shift = ( |mean_A - mean_B| / (USL - LSL) ) * 100
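Both the Cpk calculation and the group mean-shift comparison reduce to a few lines of code. A minimal sketch; the limits, temperatures, and values below are illustrative, not from a real program:

```python
import statistics

def cpk(values, lsl, usl):
    """Process capability: distance from mean to nearest limit, in 3-sigma units."""
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

def shift_pct_of_limits(group_a, group_b, lsl, usl):
    """Difference between group means, expressed as a % of the limit window."""
    return abs(statistics.mean(group_a) - statistics.mean(group_b)) \
        / (usl - lsl) * 100

# Hypothetical data: the same test measured hot and cold, limits 1.0 V..1.4 V.
hot = [1.19, 1.20, 1.21, 1.20, 1.19]
cold = [1.25, 1.26, 1.24, 1.25, 1.26]

print(f"Cpk (hot) = {cpk(hot, 1.0, 1.4):.2f}")
print(f"hot->cold shift = {shift_pct_of_limits(hot, cold, 1.0, 1.4):.1f}% of limits")
```

Sorting all tests by either metric (lowest Cpk first, largest shift first) is a quick way to build the kind of suspect list shown in Figure 8.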

Figure 8: Group Comparison - Mean shift due to temperature, expressed as % of limits

This comparison technique helps reduce the number of suspect tests and directs the analysis and debug toward potential issues. With the data set reduced, more advanced techniques can be used to drill deeper into quality and yield questions. As an example, Figure 9 contains the top four tests from the group comparison exercise of Figure 8. We can then use the trend chart for further dissection and analysis.

Figure 9: Trend Charts - Tests with significant mean shifts across temperature

Similarly, these comparison techniques can be used to find differences between tests. For example, if a group of tests is performed at minimum and maximum operating voltage levels (Min/Max Vdd), the sets of tests can be checked for large shifts between the different voltage levels. In Figure 10, you can see two sets of tests performed at Min Vdd and Max Vdd. The largest shift between means was less than 4% of the limit spread.

Figure 10: Test Comparison - Mean shift due to voltage, expressed as a % of limits

Like Cpk, these techniques rely on valid, meaningful limits for identifying significant shifts between groups and tests. The same process, however, can be used with different statistics and analysis techniques. Something as simple as taking the difference between the means, standard deviations, number of passing/failing devices, etc. may shed light on a problem and lead the way to more advanced analysis techniques. Likewise, expressing the difference between the statistics of interest as a percentage of a baseline measurement may be useful, using the equation below:

    %Difference = ( (stat_A - stat_B) / stat_baseline ) * 100

STUDENT T-TEST, INDEPENDENT AND PAIRED

Another effective indicator for isolating differences between groups of devices and tests is the Student t-test. The t-test works by first creating a hypothesis about the means of the two groups of measurements, called the null hypothesis. Typically, the null hypothesis is that the means are equivalent. The result of the t-test provides a probability (p-value) used to judge whether the means of the two sets of data are equivalent. As an example, if you're trying to determine whether the test results for two different sites are the same (a null hypothesis that the site means are equal), a p-value of 0.3 indicates that a difference at least this large would arise by chance about 30% of the time if the means really were equal, so there is little evidence of a difference. On the other hand, a p-value of 0.05 or below might be a strong indicator that there is a difference. In either case, the probability is only an indicator and should not be misconstrued as proof of equivalence or difference. Two commonly used types of t-tests are independent and paired. An independent t-test is used when there is no direct relationship between the sets of data. An example of independent data would be the production test data from two different lots of devices. Perhaps you're investigating a performance difference between lots manufactured at two different sites.
An independent t-test can be used to see if there is a similarity (or difference) between the production data for each of the tests. The results can then be sorted based on the probabilities, and further analysis can be used to dive deeper into the differences. An independent t-test does not require the two groups of parts to have the same number of samples. The paired t-test technique can be used when the data is ordered (serialized) and there is a value in each set for each device. Looking for similarities or differences between like tests performed at different voltages is a good example of paired data. As an example, a test program may contain Idd tests at minimum and maximum voltage levels. A paired t-test could be used to assign a probability to a suspected difference between the operating currents at the two different voltages. Another example might be a controlled experiment where a set of correlation units is serialized and tested on two different loadboards. The null hypothesis might be that there is no performance difference between the two boards (equivalent means). To use the paired t-test technique in this example, the data taken on each board must be linked on a device-by-device basis, typically with a datalogged serial number. This is done by either encoding and reading back the serial numbers from the device or ensuring the serial numbers are correct and identifiable in the data (see the Data Collection section). Both t-test methods begin with the calculation of the test statistic, although the calculations differ slightly. The test statistic is compared to a reference distribution (the "t-distribution"). If the test statistic is more extreme than the critical value obtained from the t-distribution, then the null hypothesis is rejected. The t-distribution is nothing more than a family of curves, each resembling a normal distribution, with the curve shape related to the number of samples used in the analysis. The larger the sample size, the closer the t-distribution comes to a normal distribution in shape. The relationship between the number of samples and the t-distribution is expressed as the degrees of freedom. The comparison is made at a chosen significance level, based on the desired statistical confidence (usually 0.1, 0.05, or 0.01). The test statistic and the degrees of freedom yield the p-value; if the p-value is below the chosen significance level, then the null hypothesis should be rejected (Wikipedia, Student's t-test, 2015). Figure 11 below shows a paired t-test performed on an experiment designed to identify possible test damage by checking leakage currents at the beginning and end of the test program. The null hypothesis is that there is no difference between the pre and post leakage data (equivalent means); a significance level of 0.01 was chosen. Notice all tests indicate the null hypothesis should be rejected (all p-values are less than the significance level). The results led to an investigation of the test program, revealing an improper range setting on the initial set of leakage tests.
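The paired t statistic itself is simple to compute: the mean of the per-device differences divided by its standard error. A minimal sketch that compares the statistic against a tabulated critical value rather than computing a full p-value; the pre/post leakage values are illustrative, not from the Figure 11 experiment:

```python
import math
import statistics

# Hypothetical paired leakage data: the same serialized devices measured at
# the start (pre) and end (post) of the test program.
pre = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 1.04]
post = [1.12, 1.09, 1.16, 1.11, 1.08, 1.14, 1.10, 1.15]

# Paired t statistic: mean of per-device differences over its standard error.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# With n-1 = 7 degrees of freedom, |t| beyond ~3.50 (tabulated two-tailed
# critical value) rejects the null hypothesis at the 0.01 significance level.
print(f"t = {t_stat:.2f}, reject null at 0.01: {abs(t_stat) > 3.50}")
```

In practice a statistics library (e.g. a paired t-test routine) would return the p-value directly; the hand calculation above just makes the mechanics visible.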
Figure 11: T-Test Results & Histograms - pre/post leakage measurements

It is important to note that the t-test relies on the data being normally (or nearly normally) distributed. It can also be used for very large sample sizes thanks to the Central Limit Theorem (Wikipedia, Central Limit theorem, 2015), which states that the distribution of sample means approaches a normal distribution as the sample size grows. Non-normal data and experimental error, such as the range error found in the example of Figure 11, can make the results hard to interpret.

CORRELATION

Correlation activities are common in many test engineering tasks. Correlation is a statistical technique for describing the direction and magnitude of a relationship between a set of variables. Describing relationships between different setups, such as bench-measured versus tester-measured devices, occurs frequently, and a common method for expressing the relationships is with scatter plots. Like a paired Student t-test, scatter plots require paired data. The pair of data values are used as the X and Y coordinates and plotted for each device tested, resulting in a single point for each pair of measurements. Since the results of each pair are plotted as an X/Y coordinate, a perfect correlation, where every X value equals its corresponding Y value, would result in a 45-degree line that passes through the origin (0, 0).

Figure 12: Scatter Plot - One test compared to three other tests, positive correlation

Figure 12 shows an example of a scatter plot for a single independent variable versus three different dependent variables. This example indicates a strong, positive correlation between the independent variable (test 100) and the three dependent variables (tests 104, 105, and 106). As the values increase for test 100, the dependent tests follow. This example also shows a small amount of offset (positive offset for tests 104 & 105; negative offset for test 106) between the dependent and independent tests.
The graph has the X and Y axes locked to the same scale, and the dashed line represents a perfect correlation line (Y = X). This scatter plot also illustrates another common correlation activity: representing the data points with a best-fit line equation. The line equation quickly identifies the direction of the correlation via the slope, as well as any offset between the variables of interest. Figure 13 shows the same independent variable, test 100, relative to two other tests of interest. This example shows a possibly strong, negative correlation.
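The Pearson correlation coefficient and the least-squares best-fit line shown in the scatter plots reduce to a few sums. A sketch with illustrative paired results for a hypothetical test 100 (x) and one dependent test (y) with a small positive offset:

```python
import math

# Hypothetical paired measurements from the same serialized devices.
x = [0.50, 0.52, 0.55, 0.58, 0.60, 0.63, 0.65]  # test 100
y = [0.53, 0.55, 0.57, 0.61, 0.63, 0.65, 0.68]  # dependent test

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))

r = sxy / math.sqrt(sxx * syy)  # Pearson correlation coefficient
slope = sxy / sxx               # least-squares best-fit line: y = slope*x + icept
icept = my - slope * mx

print(f"r={r:.3f}  fit: y = {slope:.2f}x + {icept:.3f}")
```

A slope near 1 with a positive intercept matches the "strong positive correlation with a small offset" picture of Figure 12; a negative slope would correspond to Figure 13.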

Figure 13: Scatter Plot - negative correlation example

Scatter plots and correlation activities are helpful for investigating suspected links between sets of data. When trying to validate a newly developed test, it is often helpful to correlate the test with another method. In the case of test time optimization or platform conversion, correlating results between the original solution and the new solution can be a strong indicator that the work has been done properly. Likewise, correlation can be used as a predictive mechanism. In the above examples, we can use the results from one test to predict the likely outcome of other tests. This may be a useful tool when lobbying for the elimination of tests to reduce test time (optimization).

GAGE REPEATABILITY & REPRODUCIBILITY

A gage repeatability and reproducibility study (Gage R&R) consists of a data collection process as well as an analysis procedure. It is often used as a gating mechanism for releasing a test program to manufacturing and is used to ensure a test solution is stable and reliable across the testing process. Many times, it is one of the last hurdles a test engineer will encounter before releasing a test program, and it can be tedious, time consuming, and difficult. It can also be an effective tool for flushing out potential problems, especially when utilized earlier in the development process. Incremental or "mini-gage" procedures can be used effectively to identify problems between sites, boards, and testers, as well as unstable tests and/or design marginalities. Gage R&R studies commonly target variables such as sites, loadboards, and testers, but the same process can be used to analyze the effects of temperature, voltage, handlers/probers, and others. Most analysis packages that support Gage R&R analysis use either the Range & Average or ANOVA (Analysis of Variance) approach to analyzing the data. The example shown in Figure 14 is a section of a Gage R&R used to study the effects of loadboards and testers.
This example uses the ANOVA analysis method and represents each of the gage parameters as a percentage of the total system variation. Two testers and two loadboards were used in this study with the goal of isolating any differences between these variables.

Figure 14: Gage R&R Results - expressed as a percentage of the total system variation

Interpreting the results of a Gage R&R report requires an understanding of a few underlying principles. First, the Gage study is designed to measure the effectiveness of the measurement system. The calculation quantifies measurement error in relation to product (device) variation. Smaller values for Repeatability, Reproducibility, Part Interaction, and R&R are considered better, as these values represent the percentage of error attributed to these parameters. Repeatability is related to the spread or width of the probability density of the measurements of each device, as measured by each gage parameter. A test that is more repeatable has a tighter distribution. Figure 15 shows two tests; the upper test would be considered less repeatable than the lower.

Figure 15: Repeatability - Upper test is less repeatable than the lower test

Reproducibility is related to the ability of a measurement to be made consistently (repeatably) across the measurement equipment. As an example, say you're trying to determine the reproducibility of a set of measurements between two testers. If the measurements on one tester form a tight, narrow distribution and the measurements on the second tester form a wide, high-standard-deviation distribution, then the reproducibility of the measurement is low. See Figure 16 below for an example of lower reproducibility.
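Even without a full ANOVA, repeatability and reproducibility can be roughly gauged by comparing within-equipment spread and between-equipment behavior. A simplified sketch (not the full Gage R&R calculation), with illustrative tester names and values:

```python
import statistics

# Hypothetical Gage-style data: one device measured 5 times on each of
# two testers.
runs = {
    "tester_A": [1.201, 1.203, 1.202, 1.204, 1.202],
    "tester_B": [1.190, 1.215, 1.178, 1.222, 1.195],  # much wider spread
}

# Repeatability: spread of repeated measurements within each tester.
for name, vals in runs.items():
    print(f"{name}: sigma={statistics.stdev(vals):.4f}")

# Reproducibility: consistency across testers. A large shift between the
# per-tester means, or a big difference in per-tester sigma (as here),
# indicates poor reproducibility.
mean_shift = abs(statistics.mean(runs["tester_A"]) -
                 statistics.mean(runs["tester_B"]))
print(f"tester-to-tester mean shift: {mean_shift:.4f}")
```

In this made-up data the means agree but tester_B's spread is an order of magnitude wider, which is exactly the "lower reproducibility" picture of Figure 16; a real study would decompose these effects formally via ANOVA.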

Figure 16: Reproducibility - Tester-to-tester differences for a single device

One quick indicator of the validity of the results is the Number of Distinct Categories (NDC). The NDC is basically a representation of the effectiveness of the measurement system at distinguishing one device from another: the higher the number, the better the resolution of the measurement system. As a rule of thumb, an NDC of less than 5 indicates the measurement system does not have the necessary resolution. This can be due to several factors, such as the selection of devices used having very little variation. Additionally, an R&R% greater than 10% should be inspected, a recommendation based on the Automotive Industry Action Group (AIAG) guidelines for the gage process (Wheeler).

Testing your Test Program, Regression Testing

Regression testing is a common practice in software development but is infrequently used in test program development. Regression testing is used to verify that changes and bug fixes have not altered the software's operation or introduced new, unintended features (bugs). In the software development world, when bugs or problems are uncovered, the input, variables, or conditions that expose the problematic code are typically saved as a new test case. The set of test conditions used for verifying the code grows over time, and generally the overall quality of the software improves. The test cases may also be used as a proof of quality, and in some software development models regression test cases are used in lieu of architectural and design documentation (Wikipedia R. T., 2015). Regression testing is done by recording output from the software's operation and using this output to verify the operation. The initial set of output has to be fully verified for correctness and completeness, a process that can be tedious and time consuming.
Once the initial output has been deemed sufficient, automated tools can be used to verify that subsequent runs of the software match the original baseline output. Later, when a change is made to the test program, a failing regression test can actually validate that the change was effective, since an intentional change should produce a difference between the current run and the baseline run. If the regression failure has been investigated and found to be a result of the modifications, the new output is stored as the baseline for future regression testing. Additionally, the absence of other failing regression tests provides some assurance that the changes implemented for a specific bug fix have not had unintended consequences in other areas of the software's operation.
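As a rough illustration of such an automated baseline check, the sketch below reduces each run's output to comparable records and reports any record that differs from the baseline. The datalog format, test names, and limits are invented for the example, not taken from any particular tester:

```python
# Minimal sketch of an automated baseline comparison for regression testing.
# Each run's output is reduced to (test name, low limit, high limit, bin)
# records; the comma-separated format here is a hypothetical simplification.

def load_records(lines):
    """Parse 'name,low,high,bin' datalog lines into comparable tuples."""
    return [tuple(line.strip().split(",")) for line in lines if line.strip()]

baseline_run = load_records([
    "vdd_leakage,0.0,1.5,1",
    "gain,0.045,0.055,1",
])
current_run = load_records([
    "vdd_leakage,0.0,1.5,1",
    "gain,0.045,0.057,1",   # high limit changed by a program modification
])

# Report every record that differs from the baseline for investigation.
mismatches = [(b, c) for b, c in zip(baseline_run, current_run) if b != c]
for b, c in mismatches:
    print(f"baseline: {b}  current: {c}")
```

If a reported mismatch turns out to be the intended change, the current run's records would be stored as the new baseline, as described above.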

Regression testing requires capturing and saving the input and output conditions that allow the software's operation to be repeated and verified. For test programs, the input conditions and output resources that may be required are described below.

REGRESSION TESTING INPUT CONDITIONS

Correlation Units

Unlike a pure software application, test programs are designed to interact with and test devices. The operation of a particular device may cause a potential problem or interesting condition within the test program. These devices-of-interest can be used to develop a suite of regression tests to ensure the problem or condition has been corrected or accounted for. The correlation units used for regression tests don't necessarily need to be passing devices; they should be devices that trigger the test program's unusual or problematic behavior.

Loadboards

Many times the device interface board may contribute to a performance issue that needs to be identified and corrected. Regression testing for test programs may require a specific loadboard or set of boards to cause a condition of interest. In this case, a regression test may specify that a particular loadboard is to be used to ensure that a test condition is operating as expected. Likewise, regression testing can serve as a quick verification for a newly developed board: a small set of correlation units (possibly only one unit) can quickly verify that the operation of the new board is similar to an existing (baseline) board.

Testers

The performance or operation of a tester may also contribute to a specific condition of interest. The use of a specific tester may be necessary to replicate a condition and may therefore need to be called out as a requirement for a set of regression tests. Similar to loadboards, regression testing can be used as a first-line mechanism to determine that a tester is operating as expected.
Data from a few units can be compared to the baseline operation of another tester, where the operation has been verified, to provide a quick sanity check. This verification isn't meant to replace a full-blown correlation exercise, but it may save time and resources by identifying a problem before an extensive correlation exercise is done.

Operator Interface and Flow Conditions

The flow and operation of most test programs can be modified by selections made for a particular test insertion or point in the testing flow. These input conditions affect the operation of the test program and should be configured and accounted for during regression testing. The selection of these conditions may be automated using additional testing code (code that does not relate to the verification of the DUT but is used for testing the test program software).

REGRESSION TESTING OUTPUT RESOURCES

Datalogs

The primary output from a test program is the data generated by the testing process. This data can be used to verify the operation of the test program after changes have been made. Baseline datalogs should be categorized and stored relative to the specific input conditions used for their generation (correlation units, tester, loadboard, flow conditions, temperature, etc).

Test Time Profiles

Another output that may be useful for identifying problematic situations is a test time profile: a log of the time associated with each step of the test program. Depending on the tester being used, this information can be very granular and can be useful for identifying a host of conditions that affect quality. Changes in the tester's OS, program changes related to settling or stability, and hardware differences can all cause variations in the operation and performance of the test program. With a baseline time profile, time-related changes can be quickly identified and investigated.
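A baseline time-profile comparison along these lines can be sketched as follows; the step names, times, and the 20% threshold are hypothetical choices for illustration, not values from the text:

```python
# Hypothetical per-step test times (in seconds) from a baseline profile and a
# current run; steps whose time shifted more than a threshold get flagged.
baseline_profile = {"contact": 0.012, "leakage": 0.250, "functional": 1.100}
current_profile  = {"contact": 0.012, "leakage": 0.410, "functional": 1.095}

def time_shifts(baseline, current, threshold=0.20):
    """Return steps whose run time changed by more than `threshold` (fractional)."""
    flagged = {}
    for step, base_time in baseline.items():
        change = (current[step] - base_time) / base_time
        if abs(change) > threshold:
            flagged[step] = change
    return flagged

shifts = time_shifts(baseline_profile, current_profile)
for step, change in shifts.items():
    print(f"{step}: {change:+.0%} vs. baseline")
```

A flagged step (here, a much slower leakage test) would prompt investigation into settling changes, OS differences, or hardware variation, as described above.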
Binning Summary

Verification using high-level binning information is actually somewhat common in the semiconductor test industry. Program optimizations such as multi-site conversions or test time reduction activities are often verified by running a set of previously tested devices and then comparing the results to confirm that all the parts bin the same as in the baseline operation. Parts that passed the baseline and fail the new operation

are scrutinized for potential yield impacts. Parts that failed the baseline and pass the current run are analyzed for potential test escapes and quality issues. This type of testing usually requires a large number of devices, an automated handler, and a significant amount of test time. A smaller-scale regression test can be done using the same technique, only with a much smaller quantity of devices. While this isn't meant to replace a full-scale binning correlation activity, it can be used more frequently to ensure program changes aren't affecting the binning operation.

Debug or Status Output

Another common practice in software development is the inclusion of a Debug mode that provides more information about a program's operation. Debug statements that would normally be excluded or hidden may be enabled and logged to a file. Programmers will often include debug output such as function entry and exit tags, program meta-data, or intermediate results that normally would not be part of the output. This information can be used to track the operation of a test program and can also be useful for tracking quality issues.

HOW-TO

Regression testing a test program is more difficult than regression testing pure software because of the other system variables, as well as the fact that the output can vary. As an example, the reading for a particular test of a correlation unit may be roughly 1.2v. One time the value may be 1.201v and another time the test may measure 1.199v. While the variance may be well within the expected tolerance of the measurement, doing a diff of the two values will fail. Verifying differences such as these requires some amount of guard-banding on the amount of variance considered acceptable. Due to the size and nature of test programs, manually verifying these subtle differences is labor intensive and not practical.
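One way to automate that guard-banded comparison is with a relative tolerance on each value pair, so the 1.2 V example above compares as a match. The 1% guard-band here is an illustrative assumption, not a value from the text:

```python
import math

# Two reads of the same test differ slightly (the 1.2 V example above), so a
# plain equality diff would flag them. A guard-banded comparison treats values
# within an acceptable tolerance as matching.
baseline_value = 1.201
current_value  = 1.199

def values_match(baseline, current, rel_tol=0.01):
    """True when the two readings agree within the guard-band (1% by default)."""
    return math.isclose(baseline, current, rel_tol=rel_tol)

print(values_match(baseline_value, current_value))          # within guard-band
print(values_match(baseline_value, current_value, 0.0005))  # guard-band too tight
```

Choosing the guard-band is the engineering judgment: too wide and real shifts are hidden, too tight and normal measurement variance is reported as a failure.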
Commercial software packages are available that allow for this type of guard-banding and will report or highlight differences between parametric values when they exceed the guard-bands. Since data like units, limits, test numbers, and test names are included in most datalogs, this information can and should also be verified for every regression test.

The data collection and analysis required for regression testing may seem excessive but, in its simplest form, requires very little planning or forethought. As an example, when a program modification is made, a single correlation unit can be tested once and logged. The original, or baseline, test program can then be loaded and the same device tested and logged. A quick comparison between the two datalogs can identify changes and can highlight inadvertent consequences of the change.

Figure 17 shows an example of a quick regression test done using a single correlation unit, tested twice. Test record 0 was tested using the baseline test program; record 1 was tested with the newly modified program. The comparison done for this regression test is percent difference, defined as

    % difference = ((record 1 value - record 0 value) / record 0 value) x 100

For this example, the equation above was used to compare the values for each test in the test program. Likewise, a comparison of the test names, numbers, flow positions, and limits was done. The results have been sorted to show the largest shift between the two records, and values over 5% are highlighted in red. Also, the limits for flow item 27 indicate the high limit was changed from 0.055v to 0.057v, as indicated in red. In this example, changes were made to the leakage and Idd tests, as well as the limits for the Gain test (flow item 27). These changes were expected, and therefore the results make sense. The datalog used for record 1 would therefore be saved as the baseline, and the test program can be released with the assurance that nothing else has changed.
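The percent-difference comparison described above can be sketched in a few lines. The test names, values, and the 5% guard-band below only loosely mirror the Figure 17 example and are otherwise hypothetical:

```python
# Percent-difference comparison of two datalog records for one device.
record0 = {"leakage": 1.00e-6, "idd": 0.0125, "gain": 0.052}  # baseline program
record1 = {"leakage": 1.12e-6, "idd": 0.0126, "gain": 0.055}  # modified program

def percent_difference(baseline, new):
    """Shift of the new reading relative to the baseline, as a percentage."""
    return (new - baseline) / baseline * 100.0

# Sort the tests by magnitude of shift, largest first, and flag any over 5%.
shifts = sorted(((name, percent_difference(record0[name], record1[name]))
                 for name in record0), key=lambda item: abs(item[1]), reverse=True)
for name, pct in shifts:
    flag = "  exceeds 5% guard-band" if abs(pct) > 5.0 else ""
    print(f"{name:8s}{pct:+7.2f}%{flag}")
```

Flagged shifts that correspond to intentional program changes are accepted and the new record becomes the baseline; unexplained shifts are investigated before release.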

Figure 17: Regression Test. Single unit, tested with baseline and modified test program

Conclusion

Data analysis can be effective for sorting through the mounds of data associated with most test engineering tasks. The techniques presented here are not meant to be comprehensive, but are merely a sampling of some of the easy-to-use, effective tools that help the test engineer get the work done. The use of several different techniques is often required, depending on the problem at hand. No one procedure works for every situation, and many times a suite of techniques is required to cross-check and augment the results of other methods. When used effectively, analyzing test data early in the process results in faster debug/development, better quality, and potentially more efficient testing.

Works Cited

Wheeler, D. J. (n.d.). Problems With Gauge R&R Studies.
Wikipedia. (2015). Central Limit Theorem.
Wikipedia. (2015). Outlier.
Wikipedia. (2015). Student's t-test.
Wikipedia, R. T. (2015). Regression Testing.
Wikipedia, S. D. (2015).


More information

Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University

Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Exploratory data analysis tasks Examine the data, in search of structures

More information

3.2 What Do We Gain from the mr Chart?

3.2 What Do We Gain from the mr Chart? Guide to Data Analysis While it is true that the X Chart tells the story, and that we will undoubtedly find the X Chart to be the most interesting part of the XmR Chart, there are times when the mr Chart

More information

Data Mining: Exploring Data. Lecture Notes for Chapter 3

Data Mining: Exploring Data. Lecture Notes for Chapter 3 Data Mining: Exploring Data Lecture Notes for Chapter 3 1 What is data exploration? A preliminary exploration of the data to better understand its characteristics. Key motivations of data exploration include

More information

CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12

CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12 Tool 1: Standards for Mathematical ent: Interpreting Functions CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12 Name of Reviewer School/District Date Name of Curriculum Materials:

More information

Data Mining: Exploring Data. Lecture Notes for Chapter 3

Data Mining: Exploring Data. Lecture Notes for Chapter 3 Data Mining: Exploring Data Lecture Notes for Chapter 3 Slides by Tan, Steinbach, Kumar adapted by Michael Hahsler Look for accompanying R code on the course web site. Topics Exploratory Data Analysis

More information

Selecting PLLs for ASIC Applications Requires Tradeoffs

Selecting PLLs for ASIC Applications Requires Tradeoffs Selecting PLLs for ASIC Applications Requires Tradeoffs John G. Maneatis, Ph.., President, True Circuits, Inc. Los Altos, California October 7, 2004 Phase-Locked Loops (PLLs) are commonly used to perform

More information

Supplementary Figure 1. Decoding results broken down for different ROIs

Supplementary Figure 1. Decoding results broken down for different ROIs Supplementary Figure 1 Decoding results broken down for different ROIs Decoding results for areas V1, V2, V3, and V1 V3 combined. (a) Decoded and presented orientations are strongly correlated in areas

More information

Mean Tests & X 2 Parametric vs Nonparametric Errors Selection of a Statistical Test SW242

Mean Tests & X 2 Parametric vs Nonparametric Errors Selection of a Statistical Test SW242 Mean Tests & X 2 Parametric vs Nonparametric Errors Selection of a Statistical Test SW242 Creation & Description of a Data Set * 4 Levels of Measurement * Nominal, ordinal, interval, ratio * Variable Types

More information

Integrated Math I High School Math Solution West Virginia Correlation

Integrated Math I High School Math Solution West Virginia Correlation M.1.HS.1 M.1.HS.2 M.1.HS.3 Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale

More information

Acquisition Description Exploration Examination Understanding what data is collected. Characterizing properties of data.

Acquisition Description Exploration Examination Understanding what data is collected. Characterizing properties of data. Summary Statistics Acquisition Description Exploration Examination what data is collected Characterizing properties of data. Exploring the data distribution(s). Identifying data quality problems. Selecting

More information

This paper was presented at DVCon-Europe in November It received the conference Best Paper award based on audience voting.

This paper was presented at DVCon-Europe in November It received the conference Best Paper award based on audience voting. This paper was presented at DVCon-Europe in November 2015. It received the conference Best Paper award based on audience voting. It is a very slightly updated version of a paper that was presented at SNUG

More information

Finding Firmware Defects Class T-18 Sean M. Beatty

Finding Firmware Defects Class T-18 Sean M. Beatty Sean Beatty Sean Beatty is a Principal with High Impact Services in Indianapolis. He holds a BSEE from the University of Wisconsin - Milwaukee. Sean has worked in the embedded systems field since 1986,

More information

Building Better Parametric Cost Models

Building Better Parametric Cost Models Building Better Parametric Cost Models Based on the PMI PMBOK Guide Fourth Edition 37 IPDI has been reviewed and approved as a provider of project management training by the Project Management Institute

More information

1. Assumptions. 1. Introduction. 2. Terminology

1. Assumptions. 1. Introduction. 2. Terminology 4. Process Modeling 4. Process Modeling The goal for this chapter is to present the background and specific analysis techniques needed to construct a statistical model that describes a particular scientific

More information

Black-box Testing Techniques

Black-box Testing Techniques T-76.5613 Software Testing and Quality Assurance Lecture 4, 20.9.2006 Black-box Testing Techniques SoberIT Black-box test case design techniques Basic techniques Equivalence partitioning Boundary value

More information

To make sense of data, you can start by answering the following questions:

To make sense of data, you can start by answering the following questions: Taken from the Introductory Biology 1, 181 lab manual, Biological Sciences, Copyright NCSU (with appreciation to Dr. Miriam Ferzli--author of this appendix of the lab manual). Appendix : Understanding

More information

CHAPTER-13. Mining Class Comparisons: Discrimination between DifferentClasses: 13.4 Class Description: Presentation of Both Characterization and

CHAPTER-13. Mining Class Comparisons: Discrimination between DifferentClasses: 13.4 Class Description: Presentation of Both Characterization and CHAPTER-13 Mining Class Comparisons: Discrimination between DifferentClasses: 13.1 Introduction 13.2 Class Comparison Methods and Implementation 13.3 Presentation of Class Comparison Descriptions 13.4

More information

CITS4009 Introduc0on to Data Science

CITS4009 Introduc0on to Data Science School of Computer Science and Software Engineering CITS4009 Introduc0on to Data Science SEMESTER 2, 2017: CHAPTER 3 EXPLORING DATA 1 Chapter Objec0ves Using summary sta.s.cs to explore data Exploring

More information

CS/ECE 5780/6780: Embedded System Design

CS/ECE 5780/6780: Embedded System Design CS/ECE 5780/6780: Embedded System Design John Regehr Lecture 18: Introduction to Verification What is verification? Verification: A process that determines if the design conforms to the specification.

More information

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining Data Mining: Exploring Data Lecture Notes for Chapter 3 Introduction to Data Mining by Tan, Steinbach, Kumar What is data exploration? A preliminary exploration of the data to better understand its characteristics.

More information

Linear and Quadratic Least Squares

Linear and Quadratic Least Squares Linear and Quadratic Least Squares Prepared by Stephanie Quintal, graduate student Dept. of Mathematical Sciences, UMass Lowell in collaboration with Marvin Stick Dept. of Mathematical Sciences, UMass

More information

If the active datasheet is empty when the StatWizard appears, a dialog box is displayed to assist in entering data.

If the active datasheet is empty when the StatWizard appears, a dialog box is displayed to assist in entering data. StatWizard Summary The StatWizard is designed to serve several functions: 1. It assists new users in entering data to be analyzed. 2. It provides a search facility to help locate desired statistical procedures.

More information

Analytical model A structure and process for analyzing a dataset. For example, a decision tree is a model for the classification of a dataset.

Analytical model A structure and process for analyzing a dataset. For example, a decision tree is a model for the classification of a dataset. Glossary of data mining terms: Accuracy Accuracy is an important factor in assessing the success of data mining. When applied to data, accuracy refers to the rate of correct values in the data. When applied

More information

UNIT OBJECTIVE. Understand what system testing entails Learn techniques for measuring system quality

UNIT OBJECTIVE. Understand what system testing entails Learn techniques for measuring system quality SYSTEM TEST UNIT OBJECTIVE Understand what system testing entails Learn techniques for measuring system quality SYSTEM TEST 1. Focus is on integrating components and sub-systems to create the system 2.

More information

An Automated System for Data Attribute Anomaly Detection

An Automated System for Data Attribute Anomaly Detection Proceedings of Machine Learning Research 77:95 101, 2017 KDD 2017: Workshop on Anomaly Detection in Finance An Automated System for Data Attribute Anomaly Detection David Love Nalin Aggarwal Alexander

More information

Getting Started with Minitab 17

Getting Started with Minitab 17 2014, 2016 by Minitab Inc. All rights reserved. Minitab, Quality. Analysis. Results. and the Minitab logo are all registered trademarks of Minitab, Inc., in the United States and other countries. See minitab.com/legal/trademarks

More information

Post Silicon Electrical Validation

Post Silicon Electrical Validation Post Silicon Electrical Validation Tony Muilenburg 1 1/21/2014 Homework 4 Review 2 1/21/2014 Architecture / Integration History 3 1/21/2014 4 1/21/2014 Brief History Of Microprocessors 5 1/21/2014 6 1/21/2014

More information

Reliable programming

Reliable programming Reliable programming How to write programs that work Think about reliability during design and implementation Test systematically When things break, fix them correctly Make sure everything stays fixed

More information

Tips and Guidance for Analyzing Data. Executive Summary

Tips and Guidance for Analyzing Data. Executive Summary Tips and Guidance for Analyzing Data Executive Summary This document has information and suggestions about three things: 1) how to quickly do a preliminary analysis of time-series data; 2) key things to

More information

Lecture 15: Segmentation (Edge Based, Hough Transform)

Lecture 15: Segmentation (Edge Based, Hough Transform) Lecture 15: Segmentation (Edge Based, Hough Transform) c Bryan S. Morse, Brigham Young University, 1998 000 Last modified on February 3, 000 at :00 PM Contents 15.1 Introduction..............................................

More information