Discrete-Event Simulation:


Discrete-Event Simulation: A First Course, Section 4.2: Discrete-Data Histograms

Section 4.2: Given a discrete-data sample (multiset) S = {x_1, x_2, ..., x_n} with set of possible values X, the relative frequency of value x is

    f̂(x) = (number of x_i in S with x_i = x) / n

A discrete-data histogram is a graphical display of f̂(x) versus x.
If n = |S| is large relative to |X|, then values will appear multiple times.

Example 4.2.1
Program galileo was used to replicate n = 1000 rolls of three dice.
The discrete-data sample is S = {x_1, x_2, ..., x_1000}.
Each x_i is an integer between 3 and 18, so X = {3, 4, ..., 18}.

[Figure: two discrete-data histograms of f̂(x) versus x for x = 3, ..., 18 — the simulated relative frequencies and the theoretical probabilities (the limit as n → ∞).]

Since X is known a priori, use an array.

Example 4.2.2
Suppose 2n = 2000 balls are placed at random into n = 1000 boxes:

    n = 1000;
    for (i = 1; i <= n; i++)         /* i counts boxes        */
        x[i] = 0;
    for (j = 1; j <= 2*n; j++) {     /* j counts balls        */
        i = Equilikely(1, n);        /* pick a box at random  */
        x[i]++;                      /* then put a ball in it */
    }
    return x[1], x[2], ..., x[n];

Example 4.2.2 (continued)
S = {x_1, x_2, ..., x_n}, where x_i is the number of balls placed in box i; the mean is x̄ = 2.0.
Some boxes will be empty, some will have one ball, some will have two balls, etc.

[Figure: for seed 12345, the discrete-data histogram of f̂(x) versus x for x = 0, 1, ..., 9.]

Histogram Mean and Standard Deviation
The discrete-data histogram mean is

    x̄ = Σ_x x f̂(x)

The discrete-data histogram standard deviation is

    s = sqrt( Σ_x (x − x̄)² f̂(x) )   or equivalently   s = sqrt( Σ_x x² f̂(x) − x̄² )

The discrete-data histogram variance is s².

Histogram Mean versus Sample Mean
By definition, f̂(x) ≥ 0 for all x ∈ X and Σ_x f̂(x) = 1.
From the definitions of S and X,

    Σ_{i=1}^{n} x_i = Σ_x x n f̂(x)   and   Σ_{i=1}^{n} (x_i − x̄)² = Σ_x (x − x̄)² n f̂(x)

So the sample mean/standard deviation is mathematically equivalent to the discrete-data histogram mean/standard deviation.
If the frequencies f̂(·) have already been computed, x̄ and s should be computed using the discrete-data histogram equations.

Example 4.2.3
For the data in Example 4.2.1 (three dice):

    x̄ = Σ_{x=3}^{18} x f̂(x) = 10.609   and   s = sqrt( Σ_{x=3}^{18} (x − x̄)² f̂(x) ) = 2.925

For the data in Example 4.2.2 (balls placed in boxes):

    x̄ = Σ_{x=0}^{9} x f̂(x) = 2.0   and   s = sqrt( Σ_{x=0}^{9} (x − x̄)² f̂(x) ) = 1.419

Algorithm 4.2.1
Given integers a, b and integer-valued data x_1, x_2, ..., the following computes a discrete-data histogram:

    long count[b - a + 1];
    n = 0;
    for (x = a; x <= b; x++)
        count[x - a] = 0;
    outliers.lo = 0;
    outliers.hi = 0;
    while ( more data ) {
        x = GetData();
        n++;
        if ((a <= x) and (x <= b))
            count[x - a]++;
        else if (x < a)
            outliers.lo++;
        else
            outliers.hi++;
    }
    return n, count[], outliers;    /* f̂(x) is count[x - a] / n */

Outliers
Algorithm 4.2.1 allows for outliers: occasional x_i outside the range a ≤ x_i ≤ b.
This is necessary because of the fixed array structure.
Outliers are common with some experimentally measured data.
Generally, however, valid discrete-event simulations should not produce any outlier data.

General-Purpose Discrete-Data Histogram Algorithm
Algorithm 4.2.1 is not a good general-purpose algorithm:
For integer-valued data, a and b must be chosen properly; otherwise outliers may be produced without justification, or the count array may needlessly require excessive memory.
For data that is not integer-valued, Algorithm 4.2.1 is not applicable.
Instead, we will use a linked-list histogram.

Linked-List Histogram

[Diagram: a linked list of nodes — head → (value, count, next) → (value, count, next) → ... → (value, count, next).]

Algorithm 4.2.2
Initialize the first list node with value = x_1 and count = 1.
For each remaining sample value x_i, i = 2, 3, ..., n:
    Traverse the list to find a node with value == x_i.
    If found, increase that node's count by one.
    Else, add a new node with value = x_i and count = 1.

Example 4.2.4
Discrete-data sample S = {3.2, 3.7, 3.7, 2.9, 3.7, 3.2, 3.7, 3.2}.
Algorithm 4.2.2 generates the linked list:

    head → (3.2, 3) → (3.7, 4) → (2.9, 1)

Node order is determined by the order of the data in the sample. Alternatives:
Use count to maintain the list in order of decreasing frequency.
Use value to keep the list sorted by data value.

Program ddh
Generates a discrete-data histogram, based on Algorithm 4.2.2 and the linked-list data structure:
Valid for integer-valued and real-valued input
No outlier checks
No restriction on sample size
Supports file redirection
Assumes |X| is small (a few hundred or less), since it requires O(|X|) computation per sample value
Should not be used on continuous samples (see Section 4.3)

Example 4.2.5
Construct a histogram of the inventory level (prior to review) for the simple inventory system:
Remove the summary statistics from program sis2.
Print the inventory level within the while loop in main:

    index++;
    printf("%ld\n", inventory);    /* this line is new */
    if (inventory < MINIMUM) {
        ...

If the new executable is sis2mod, the command

    sis2mod | ddh > sis2.out

will produce a discrete-data histogram in the file sis2.out.

Example 4.2.6
Using sis2mod to generate 10 000 weeks of sample data, the inventory-level histogram can be constructed from sis2.out.

[Figure: discrete-data histogram of f̂(x) versus x for x from −30 to 70, with the interval x̄ ± 2s marked.]

Here x denotes the inventory level prior to review; x̄ = 27.63 and s = 23.98.
About 98.5% of the data falls within x̄ ± 2s.

Example 4.2.7
Change the demand to Equilikely(5, 25) + Equilikely(5, 25) in sis2mod and construct the new inventory-level histogram.

[Figure: the new histogram of f̂(x) versus x, more tapered at the extreme values, with x̄ ± 2s marked.]

x̄ = 27.29 and s = 22.59. About 98.5% of the data falls within x̄ ± 2s.

Example 4.2.8
Monte Carlo simulation was used to generate 1000 point estimates of the probability of winning the dice game craps.
The sample is S = {p_1, p_2, ..., p_1000} with

    p_i = (# wins in N plays) / N

so that X = {0/N, 1/N, ..., N/N}.
Compare N = 25 and N = 100 plays per estimate.
Since |X| is small, we can use discrete-data histograms.

Histograms of the Probability of Winning Craps

[Figure: two discrete-data histograms of f̂(p) versus p for p from 0.0 to 1.0 — one for N = 25 and one for N = 100.]

N = 25:  p̄ = 0.494, s = 0.102
N = 100: p̄ = 0.492, s = 0.048

A four-fold increase in the number of replications produces a two-fold reduction in uncertainty.

Empirical Cumulative Distribution Functions
Histograms estimate distributions. Sometimes the cumulative version is preferred; it is useful for quantiles:

    F̂(x) = (number of x_i in S with x_i ≤ x) / n

Example 4.2.9
The empirical cumulative distribution function for N = 100 from Example 4.2.8:

[Figure: four panels showing F̂(p) versus p for p from 0.3 to 0.7, drawn in four different styles.]