Joint quantification of uncertainty on spatial and non-spatial reservoir parameters


Comparison between the Joint Modeling Method and the Distance Kernel Method

Céline Scheidt and Jef Caers
Stanford Center for Reservoir Forecasting, Stanford University

Abstract

The experimental design methodology is widely used to quantify uncertainty in the oil and gas industry. This technique is well adapted for uncertainty quantification on non-spatial parameters, such as bubble point pressure, oil viscosity, and aquifer strength. However, it is not well adapted to the case of geostatistical (spatial) uncertainty, due to the discrete nature of many input parameters as well as the potentially nonlinear response with respect to those parameters. One way to handle this type of uncertainty, derived from experimental design theory, is the joint modeling method (JMM). This method, originally proposed in a petroleum context by Zabalza (), incorporates both non-spatial and spatial parameters within an experimental design framework. The method consists of the construction of two models: a mean model, which accounts for the non-spatial parameters, and a dispersion model, which accounts for the spatial uncertainty. Classical Monte Carlo simulation is then applied to obtain the probability density and quantiles of the response of interest (for example, the cumulative oil production). Another method to quantify spatial uncertainty is the distance kernel method (DKM), proposed recently by Scheidt and Caers (7), which defines a realization-based model of uncertainty. Based on a distance measure between realizations, the methodology uses kernel methods to select a small subset of representative realizations which have the same characteristics as the entire set. Flow simulations are then run on the subset, allowing for an efficient and accurate quantification of uncertainty.
In this work, we extend the DKM to address uncertainty in both spatial and non-spatial parameters, and propose it as an alternative to the JMM. Both methods are applied to a synthetic test case which has spatial uncertainty in the channel representation of the facies, and non-spatial uncertainties in the channel permeability, porosity, and connate water saturation. The results show that the DKM provides a more accurate quantification of uncertainty with fewer reservoir simulations. Finally, we propose a third method which combines aspects of the DKM and the JMM. This third method again shows improvement in efficiency compared to the JMM alone.

Introduction

Uncertainty in reservoir performance is often very significant due to the small amount of data available to describe the reservoir. Reservoirs are modeled using a combination of spatial and non-spatial parameters. Spatial parameters describe properties that are correlated spatially, such as facies type, and are modeled using geostatistical methods. Non-spatial parameters describe phenomena that do not vary spatially, such as water-oil contact depth or the bubble point pressure of the oil. Uncertainty exists in both types of parameters, and due to their distinct nature, the quantification of uncertainty for these parameters is done using differing and often incompatible approaches. First, uncertainty in spatial parameters (which we will denote as spatial uncertainty) is assessed using traditional ranking techniques (Ballin, 99). Starting with multiple alternative realizations generated using any geostatistical algorithm, traditional ranking techniques aim at selecting the P10, P50 and P90 flow responses using static properties of the realizations (e.g. OOIP). Thus, spatial uncertainty is measured by evaluating multiple geostatistical realizations created by varying the input parameters of the geostatistical algorithms. Uncertainty in non-spatial parameters is often quantified using the experimental design methodology. Experimental design selects optimal combinations of parameter values for flow simulation, and then models the response using response surface methodology. Classical experimental design has been widely used in reservoir uncertainty management and has proved its efficiency in taking into account various reservoir parameters (Damslet et al., 99; Manceau et al.; Venkataraman). This technique works well in the case of non-spatial parameters because the response (for example, the cumulative oil production) often behaves linearly.
The experimental design and ranking approaches are efficient in modeling non-spatial and spatial uncertainty, respectively. However, these approaches have difficulty in accounting for all types of uncertainty. Since ranking is often performed using static properties of the realizations, one cannot use this technique to model uncertainty in non-spatial parameters, such as possibly differing oil and water relative permeability curves. In addition, classical experimental designs are often not suited to model spatial uncertainty, due to the discrete nature of some of the input geostatistical parameters, and the possible non-linear variation of the production response. However, a simultaneous method for treating both spatial and non-spatial uncertainty is desirable for an effective uncertainty quantification of reservoir performance. One approach which attempts to model both types of parameters simultaneously is the joint modeling method (JMM). The joint modeling method, initially proposed in a statistical context (McCullagh and Nelder, 1989), was applied in the context of petroleum engineering by Zabalza () to incorporate both non-spatial and spatial parameters within an experimental design methodology for uncertainty quantification. Zabalza () and Manceau et al. () used the joint modeling methodology to quantify the effect of varying geostatistical realizations on uncertainty analysis. Just as in experimental design, this approach requires running a set of simulations to capture the behavior of the production response as a function of the non-spatial parameters, using response surface methodology. Note that in the context of the JMM, non-spatial parameters are also denoted deterministic parameters, because they are the parameters that are varied in the deterministic experiment. Spatial uncertainty is often denoted stochastic uncertainty, and describes the uncertainty derived from stochastic modeling methods such as geostatistical algorithms. To be consistent with previous papers on the JMM, we employ the same terminology here. Thus, spatial uncertainty will be described more generally as stochastic uncertainty. Deterministic uncertainty is the uncertainty of the response to non-spatial parameters in a deterministic experiment (i.e. numerical flow simulation). Recently, the distance-kernel method (Scheidt and Caers, 7) has been proposed as an improvement over ranking techniques. This new method examines a large set of realizations and defines a realization-based model of uncertainty which is parameterized by distances. The goal is to select a small subset of representative reservoir models by analyzing the properties of the models as characterized by the distance. This subset has similar properties to the entire set of realizations. Uncertainty quantification can then be performed within that subset of realizations, thus significantly reducing computation time. In this paper, we extend the distance-kernel method (DKM) to quantify uncertainty in both deterministic and spatial (stochastic) parameters. We compare the results of both JMM and DKM on a synthetic data set. Finally, we combine the advantages of the JMM and DKM to present a third method to quantify uncertainty in non-spatial (deterministic) and spatial (stochastic) parameters.
In the following section, we first provide an overview of experimental design (ED) theory and give an illustrative example to show how experimental design and response surface methods (RSM) are used to assess uncertainty for deterministic parameters. Then, we introduce the joint modeling method, which allows quantifying uncertainty not only for deterministic parameters, but also for spatial/stochastic parameters. We then briefly review the theory of the distance-kernel method. Subsequently, the JMM and DKM are compared using a facies model with both spatial (stochastic) and non-spatial uncertainty, totaling 5 possible reservoir models. Results from these two different approaches are also compared with a random selection of models. We then present initial results using an approach which combines the JMM and DKM methods. The paper ends with some conclusions. Given the same input parameter values, deterministic experiments always result in the same response value. Numerical experiments, such as flow simulation, are deterministic experiments. McCullagh and Nelder used yet another terminology, describing the response of an experiment as including systematic effects and random effects.

Review of Experimental Design (ED) and the Joint Modeling Method (JMM)

This section presents the basics of the ED and JMM theories, as well as simple examples to illustrate the principle of both techniques and their application. We consider first an example where we have a facies model with uncertainty on:

- spatial or stochastic parameters, treated with stochastic methods, such as the geological representation of the reservoir (multiple realizations obtained using geostatistical algorithms with differing training images, channel configurations, etc.)
- non-spatial or deterministic properties of the realizations (such as mean permeability and porosity of the facies, fluid properties, etc.).

The JMM extends ED theory to treat spatial (stochastic) uncertainty, i.e. to consider multiple geostatistical realizations. Before going into more detail on the JMM, we quickly summarize the theory of experimental design as it is applied to uncertainty quantification in reservoir simulation.

Experimental design methodology description

An experimental design defines n different configurations of parameter values, in our case flow simulation parameters. Given those n configurations of the k (deterministic) parameters, n simulations are performed and used to fit a response surface (RSM), usually a regression model (Myers and Montgomery). The aim of an experimental design is to define the minimum number of configurations needed to obtain the best fit of the RSM. An experimental design is represented by a matrix D [n x k] giving the coordinates of each simulation, i.e. each row contains the values of the uncertain parameters defining one simulation and each column corresponds to one parameter. A second-order regression model for k factors x = (x_1, ..., x_k) can be written in the form:

Y(x) = β0 + Σ_{i=1}^{k} βi xi + Σ_{i=1}^{k} βii xi² + Σ_{i<j} βij xi xj + ε

Applied to all n simulations from the experimental design, we have the following equation:

Y = Xβ + ε

where:
- Y is the n x 1 vector of the production response of interest (for example, the cumulative oil production at a given time)
- X is the n x p model matrix, which depends on the experimental design and on the regression model
- β is an unknown p x 1 vector of regression coefficients
- ε is an n x 1 vector of errors with E[ε] = 0 and Cov[ε] = Iσ²

An alternative way of writing this model is E[Y] = Xβ, since E[ε] = 0. In uncertainty quantification, the production response is usually modeled by a second-order polynomial model. To adjust a second-order polynomial model, the simulations are often performed following a Central Composite Design (CCD), and the coefficients of the regression are usually obtained by the least-squares method: β̂ = (X'X)^-1 X'Y. This regression model can also be used to evaluate the response Y for a new configuration of the parameters, say x0 = (x0_1, ..., x0_p). A point estimate for the new observation is then computed by:

Ŷ(x0) = x0 β̂ ()

The resulting proxy model Ŷ is used to obtain a probability density of the production through Monte Carlo sampling, where the production response is evaluated by Eq. (). The ED methodology is widely used in reservoir management and has proved its efficiency when applied to deterministic parameters (Damslet et al., 99). We now apply ED to a simple case, assuming that there is no uncertainty in the geological representation of the reservoir.

Application of the ED methodology to a simple case

To illustrate the uncertainty assessment workflow using the ED methodology, we apply the method to a synthetic case borrowed from Suzuki and Caers (6). We consider a channel system composed of mud and sand. The channel sands have uniform porosity and permeability. However, these deterministic parameters are considered uncertain. The mud is treated as inactive cells. The reservoir model is a 8x8 D grid containing producers and injectors, all penetrating channel sand. The well locations are given in Figure .
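As a sketch of these two steps, fitting the second-order surface by least squares and evaluating it as a proxy can be written as follows (the design points and response values below are hypothetical, not the study's data):

```python
import numpy as np

def quad_design_matrix(params):
    """Second-order model matrix: intercept, linear, pure quadratic,
    and cross terms, for k parameters (here k = 2)."""
    rows = []
    for x in params:
        k = len(x)
        row = [1.0]
        row += [x[i] for i in range(k)]                # linear terms
        row += [x[i] ** 2 for i in range(k)]           # quadratic terms
        row += [x[i] * x[j] for i in range(k) for j in range(i + 1, k)]
        rows.append(row)
    return np.array(rows)

# Hypothetical 9-point central composite design in coded (permeability,
# porosity) units, with made-up responses standing in for simulator output.
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([2.1, 4.0, 2.5, 4.6, 3.0, 4.4, 3.2, 3.6, 3.5])

X = quad_design_matrix(design)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta_hat = (X'X)^-1 X'y

def proxy(x):
    """Evaluate the fitted response surface at a new configuration x."""
    return (quad_design_matrix([x]) @ beta).item()
```

With 9 runs and 6 coefficients the fit is overdetermined, which is the usual situation for a two-factor CCD.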
For a more detailed description of the case, see Suzuki and Caers (6). However, this case is slightly different from the original, since we assume that the permeability and the porosity are uncertain and that we have only one geostatistical realization of the reservoir (no spatial uncertainty).

Figure : Example of a reservoir model realization with well locations. Red is channel and blue is shale.

The channel configuration of the reservoir is presented in Figure A. The value of the channel permeability may vary between 5 mD and 75 mD, and the value of the porosity may vary between .5 and .. A central composite design consisting of 9 simulations is constructed for these two parameters, illustrated in Figure B. Simulations of the cumulative oil production at 6 days were performed at each point of the experimental design (Figure C), and a second-order polynomial model is fitted to the production response (Figure D). The last step of the method consists of using the response surface model through Monte Carlo sampling to compute the probability density of the cumulative oil production at 6 days (Figure E). In this case, uniformly distributed values are generated for each parameter in order to compute the density function. For this example, the P10, P50 and P90 quantiles of the production at 6 days are respectively .5, .5 and 5.8 Mm.

Figure : Workflow of the Experimental Design methodology (A: reservoir model; B: the 9-point design in the PORO-PERMX plane; C: simulated FOPR at the design points; D: fitted response surface; E: density plot of FOPR).
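The Monte Carlo step can be sketched as follows; the proxy coefficients and parameter ranges below are hypothetical stand-ins for the fitted surface and the study's priors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the fitted second-order response surface (hypothetical
# coefficients; in practice this comes from the least-squares fit).
def proxy(perm, poro):
    return 0.5 + 0.004 * perm + 8.0 * poro

# Monte Carlo step: sample each uncertain parameter from its prior
# (uniform here, as in the text) and evaluate the proxy, not the simulator.
perm = rng.uniform(250.0, 750.0, size=100_000)   # hypothetical mD range
poro = rng.uniform(0.15, 0.30, size=100_000)     # hypothetical porosity range
fopt = proxy(perm, poro)

# Quantiles of the resulting distribution.
p10, p50, p90 = np.percentile(fopt, [10, 50, 90])
```

The cost of the 100,000 evaluations is negligible because only the polynomial proxy is evaluated, never the flow simulator.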

Note that the workflow described in Figure is only valid for a single timestep. To compute the probability density of the production at different times, steps C to E need to be recomputed. The next section considers the same example, but accounts for stochastic uncertainty in the channel configuration by using more than one geostatistical realization and applying the joint modeling method.

The JMM is similar to experimental design, except that it uses two regression models to describe the variation of the production response as a function of the deterministic and stochastic parameters. It consists of the construction of a mean model to describe the production response as a function of the deterministic parameters, and a dispersion model to account for the stochastic parameters. These two regression models are linked using generalized linear models (GLM). The production forecasts can then be quantified as a function of both deterministic and stochastic uncertainties.

Methodology description

In the presence of two types of uncertainty (deterministic and stochastic), an experimental design is constructed with the deterministic parameters only. Then, the stochastic parameters are modeled by creating multiple geostatistical realizations for each configuration of deterministic parameters given by the experimental design. We introduce a few important notations for a better understanding of the theory. Let us denote:

- Y: the production response of interest (for example, cumulative oil production at a given time)
- k: the number of deterministic uncertain parameters (e.g. uniform channel permeability, porosity)
- n: the number of points of the experimental design; each point or configuration of the k deterministic parameters is represented by x^u, u = 1, ..., n
- r: the number of geostatistical realizations (number of replications for each point of the experimental design)
- Y_ui, i = 1, ..., r: the r evaluations of Y(x^u) for the different geostatistical realizations.
Contrary to the traditional approach, where a single evaluation of the response is performed at each of the n points of the experimental design, a series of simulations is performed at each point x^u. For each of the n points x^u, r simulations are performed, one for each of the r geostatistical realizations, resulting in n x r simulations. These repeated simulations allow the construction of a model of the dispersion of the response due to the geostatistical uncertainty.

Figure illustrates this principle for the case of k = 2 deterministic parameters and r possible geostatistical realizations. We employ a Central Composite Design for the deterministic parameters, resulting in 9 different parameter combinations. Thus, a total of 6 (x9) simulations of the response (cumulative oil production at 6 days in this example) were performed. Figure A represents the response in the 3D space formed by the two deterministic parameters, and Figure B is a 2D version of this 3D space, i.e. the x-axis represents the index u of the simulation point x^u, u = 1, ..., n.

Figure : Cumulative oil values at 6 days for each point of the experimental design, (A) 3D view, (B) 2D view.

The objective of the JMM is to construct intervals of possible values of the responses at each point of the experimental design. The simulations Y_ui are required to be normally distributed. From the simulation results, the JMM constructs two models, a mean model and a dispersion model of the response. These two models are fitted using generalized linear models (GLM), whose expressions are given by Eq. and Eq.. Appendix A provides more details about generalized linear models. For the mean model, we have the following equations (Zabalza, ):

E[Y_ui] = µ_ui = X_ui β and Cov[Y_ui] = diag[σ²_ui], u = 1, ..., n and i = 1, ..., r ()

where:
- Y_ui is the response at point x^u for realization i
- µ_ui is the mean of the response at point x^u for realization i
- β is an unknown vector of regression coefficients
- X = [X_ui, u = 1, ..., n and i = 1, ..., r] is the model matrix. Each row X_ui of X refers to a different observation, each column to a different covariate.

The variance model is a function of the dispersion D of the response, which is given by:

D(x^u) = (Y(x^u) − µ(x^u))², u = 1, ..., n, or D_ui = (Y_ui − µ_ui)² ()

Under the assumption that the Y_ui are normally distributed, the dispersion D_ui follows a scaled χ²₁ law, a special case of the Gamma distribution (see Myers and Montgomery): D_ui ~ σ²_ui χ²₁. The variance model is then given by Eq. :

E[D_ui] = σ²_ui ; ln(σ²_ui) = H_ui γ and Cov[D_ui] = diag[τ σ⁴_ui], u = 1, ..., n and i = 1, ..., r ()

where:
- D_ui is the dispersion of the response at point x^u for realization i
- γ is an unknown vector of regression coefficients
- H = [H_ui, u = 1, ..., n and i = 1, ..., r] is the model matrix. Each row H_ui of H refers to a different observation, each column to a different covariate
- τ is a scaling parameter

Here we use H_ui as opposed to X_ui to emphasize the possible differences between the mean model and the dispersion model terms. As we can see, models () and () are dependent. The estimation of the mean model requires an estimate σ̂²(x) of the dispersion model to be used as a prior weight, while the dispersion model requires an estimate µ̂(x) of the mean model in order to form the dispersion response variable D_ui. For fitting these models, an iterative algorithm is required, whereby we alternate between fitting the mean model and fitting the dispersion model using the variable D_ui.

The steps of the methodology are as follows:

1. Use the ordinary least-squares estimator β̂(1) = (X'X)^-1 X'Y for the mean model (Eq. ); the model of the mean is then µ̂(1)_ui = X_ui β̂(1).
2. Use β̂(1) to calculate the residuals or dispersions D(1)_ui = (Y_ui − µ̂(1)_ui)² (Eq. ). Consider D(1)_ui as a realization of the random variable D_ui representing the dispersion of Y_ui.
3. Determine γ̂(1) by fitting the dispersion model using the maximum-likelihood methodology of a Gamma model (McCullagh and Nelder, 1989).
4. Use γ̂(1) to calculate the estimator of the variance σ̂²(1)(x) = exp(H(x) γ̂(1)) (Eq. ), and use σ̂²(1)(x) to compute the weight matrix V(1) = diag(σ̂²(1)(x^1), ..., σ̂²(1)(x^n)). ()

5. Calculate β̂(2) = (X'V⁻¹X)⁻¹ X'V⁻¹ Y using the weighted least-squares method, with V as the weight matrix.
6. Go back to step 2 with β̂(2) replacing β̂(1), i.e. calculate µ̂(k)_ui = X_ui β̂(k).
7. Continue until convergence, i.e. until Σ_ui (Y_ui − µ̂(k)(x^u))² / σ̂²(k)(x^u) is stable.

Given the response surface models of the mean µ̂(x) and the dispersion σ̂²(x), a prediction interval for the response Y can be constructed. Zabalza () has shown that the prediction interval for Y is given by:

I_x = [ µ̂(x) + t_{α/2} Ŝ(x) ; µ̂(x) + t_{1−α/2} Ŝ(x) ] (5)

with:

Ŝ²(x) = X(x) Var(β̂) X(x)' + σ̂²(x)

where X(x) is the model matrix at point x, and t_{α/2} and t_{1−α/2} are the quantiles of order α/2 and 1−α/2 of the standard normal law N(0,1).

Remark: usually, α is taken equal to 0.05, meaning that the probability that a prediction falls outside the prediction interval is less than or equal to 5%.

A Monte Carlo sampling technique is then applied simultaneously to both models, with a known prior distribution for each deterministic parameter, usually a uniform distribution. A normal law N(µ̂(x), σ̂(x)) is used to account for both deterministic and stochastic uncertainty. We thus obtain the probability density of the response and the quantiles of production by using those two models.

Illustrative example of the JMM

Similarly to the ED section, we present a simple illustration of the principle of the JMM. In the previous example (Figure ), only one realization was taken into account. Now, as shown in Figure, the workflow is presented in the case of different geostatistical realizations (Figure A) and two deterministic uncertain parameters: channel permeability and porosity. Similar to the example in Figure, a Central Composite Design for these two parameters (Figure B) is used, and a total of 9x = 6 simulations are performed in this example, 9 for each geostatistical realization.
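The iterative fit can be sketched on synthetic replicated data as follows (all values are hypothetical; for brevity, step 3 replaces the Gamma maximum-likelihood fit with a simpler least-squares fit of ln D, and the balanced replicates allow working with per-point means):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n = 9 design points, r = 4 replicated realizations.
n, r = 9, 4
x = np.linspace(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x, x ** 2])   # mean-model matrix (per point)
H = np.column_stack([np.ones(n), x])           # dispersion-model matrix

# Synthetic replicated responses with input-dependent spread.
true_mu = 3.0 + 1.5 * x
true_sd = np.exp(-0.5 + 0.8 * x)
Y = true_mu[:, None] + true_sd[:, None] * rng.standard_normal((n, r))

ybar = Y.mean(axis=1)
beta = np.linalg.lstsq(X, ybar, rcond=None)[0]         # step 1: OLS mean fit
for _ in range(10):
    mu = X @ beta
    D = ((Y - mu[:, None]) ** 2).mean(axis=1)          # step 2: dispersions
    # Step 3 (simplified): approximate ln(sigma^2) = H gamma by least
    # squares on ln(D) instead of a Gamma GLM.
    gamma = np.linalg.lstsq(H, np.log(D + 1e-12), rcond=None)[0]
    sigma2 = np.exp(H @ gamma)                         # step 4: variance model
    W = np.diag(1.0 / sigma2)                          # step 5: WLS weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ybar)

mu_hat, sd_hat = X @ beta, np.sqrt(sigma2)             # the two fitted surfaces
```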
Figure C shows the response for each geostatistical realization and for each of the 9 points of the experimental design (i.e. each possible configuration of permeability and porosity), as well as the prediction interval I_x obtained from the RSM of the mean and the RSM of the dispersion. Figure D presents the response surfaces obtained by the

models: the middle surface represents the mean behavior of the production as a function of the deterministic parameters, and the two bounding surfaces represent the dispersion due to geostatistical variability (Eq. 5). In order to calculate the probability density of the response (Figure E), a Monte Carlo simulation is performed. In this case, values for each deterministic parameter are generated using a uniform distribution. In the classical case (without geostatistical variation), the response is directly obtained from the response surface of the mean, as illustrated in the previous section on ED. In our application, in order to account for the geostatistical variability, 5 values of the response are generated for each of the points using a normal law N(µ̂(x), σ̂(x)). The density is then computed using the 5 response values.

Figure : Workflow of the JMM (A: FOPR at 6 days for the r geostatistical realizations at the n = 9 design points; B: the 9-point design in the PORO-PERMX plane; C: simulations, mean values µ_ui and prediction intervals per design point; D: mean and bounding response surfaces; E: density plot of FOPR, from uniform sampling of the deterministic parameters and normal sampling for the stochastic part).

As we can see in Figure, the JMM workflow is closely related to the ED workflow, except that ED considers only one response surface (the mean), whereas the JMM constructs a mean model and prediction intervals of the response. We now examine the impact of the use of prediction intervals (i.e. of considering stochastic uncertainty) on the quantile and density estimations.
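The JMM Monte Carlo step differs from the ED one only in the extra normal draw around the mean surface. A sketch with hypothetical stand-ins for the two fitted models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-ins for the fitted JMM surfaces: mean model mu_hat(x)
# and dispersion model sigma_hat(x) (not the study's fitted models).
def mu_hat(perm, poro):
    return 1.0 + 2.0 * perm + 8.0 * poro

def sigma_hat(perm, poro):
    return 0.2 + 0.1 * perm          # spread grows with permeability here

# Uniform sampling of the deterministic parameters (in coded [0, 1] units),
# then one normal draw per sample to inject the stochastic (geostatistical)
# spread around the mean surface.
perm = rng.uniform(0.0, 1.0, size=50_000)
poro = rng.uniform(0.0, 1.0, size=50_000)
fopt = rng.normal(mu_hat(perm, poro), sigma_hat(perm, poro))

p10, p50, p90 = np.percentile(fopt, [10, 50, 90])
```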

Comparison of the results of the two methods

We now compare the probability densities and quantile estimates obtained with ED and JMM. Figure 5A shows a superposition of both probability densities of the cumulative oil production at 6 days, the red curve representing results from ED and the blue curve from JMM. Similarly, Figure 5B shows a superposition of the cumulative oil production quantiles as a function of time obtained with the ED technique (red), without taking into account the geostatistical uncertainty, and the quantiles obtained by the JMM (blue). Recall that both ED and JMM assume that the response is independent of time. Thus, to calculate the quantiles of the production as a function of time, the entire workflow must be repeated for each time; in other words, a new response surface is constructed at each time. The assumption of independence is obviously incorrect for a flow response such as the cumulative oil production used in this work. However, time-dependent experimental designs and response surfaces are still an area of research and are not used in practice.

Figure 5: Comparison of the ED and JMM: (A) probability density estimations at 6 days and (B) P10, P50 and P90 quantile estimations as a function of time.

We can observe in Figure 5 that the uncertainty obtained from the JMM is larger than that obtained with the experimental design technique. For example, at 6 days, the P10, P50 and P90 quantiles of the production are respectively .5, .5 and 5.8 Mm for the ED and 9., 5. and 9.9 Mm for the JMM. Although this result is not surprising, it serves well as a reminder that stochastic uncertainty should be considered in any uncertainty quantification study. The distance-kernel methodology has been developed to take spatial (stochastic) uncertainty into account in uncertainty quantification.
The DKM allows the selection, among many geostatistical realizations, of a few representative realizations for flow simulation. In the next section, we briefly present the main steps of the distance-kernel methodology.

Distance-Kernel Methodology: Review

The principle of the methodology, illustrated for facies models, is described in Figure 6. Starting with multiple (N_R) realizations generated using any algorithm, a dissimilarity distance matrix is constructed (Figures 6a and 6b). This N_R x N_R matrix contains the distance between any two model realizations, which is a way to determine how similar two reservoir models are in terms of geological properties and flow behavior. The distance can be calculated in any manner; the only requirement is that the distance be well correlated with the flow response(s) of interest. Examples of such distances are the Hausdorff distance (Suzuki and Caers, 6), time-of-flight based distances (Park and Caers, 7), or flow-based distances using fast flow simulators. The distance matrix is then used to map all realizations into a Euclidean space R (Figure 6c), using a technique called multidimensional scaling (MDS). MDS translates the dissimilarity matrix into a configuration of points in n-dimensional Euclidean space (Borg and Groenen, 1997). Each point in this map represents a realization; the points are arranged in such a way that their Euclidean distances correspond as closely as possible to the dissimilarity distances between the realizations. Since in most cases the structure of the points in the mapping space R is nonlinear, we propose the use of kernel methods to transform the Euclidean space R into a new space F, called the feature space (Figure 6d). The goal of the kernel transform (Schölkopf and Smola) is that points in this new space behave more linearly, so that standard linear tools for pattern detection (such as principal component analysis, cluster analysis, dimensionality reduction, etc.) can be used more successfully. These tools allow the selection of a few typical points, in our case reservoir models, from a potentially very large set.
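One standard variant of this mapping step, classical (Torgerson) MDS, is a short computation: double-center the squared distance matrix and take the leading eigenvectors. A self-contained sketch, with a small hypothetical point set used to check that the embedding reproduces the distances:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Map a dissimilarity matrix D (N x N) to points in R^dim via classical
    multidimensional scaling (double centering + eigendecomposition)."""
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
    L = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * L

# Hypothetical check: distances computed from known 2-D points should be
# reproduced (up to rotation/reflection) by the MDS embedding.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
emb = classical_mds(D, dim=2)
D_emb = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
```

For non-Euclidean dissimilarities (e.g. Hausdorff distances), some eigenvalues of B may be negative; truncating them at zero, as above, gives the usual least-squares embedding.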
In this work, we use the classical k-means algorithm in the feature space F, also called kernel k-means (KKM), to determine a subset of points defined by their centroids. The subset of models selected by KKM is small enough to allow uncertainty quantification (e.g. P10, P50, P90 quantiles) through flow simulation. For more details about the methodology, refer to Scheidt and Caers (7).
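As an illustration of this selection step, the following sketch runs plain k-means on hypothetical 2-D MDS coordinates (not the kernelized feature-space version used in the DKM) and then picks the model closest to each centroid as the cluster representative:

```python
import numpy as np

def kmeans(points, k, iters=100):
    """Plain k-means (Lloyd's algorithm): alternate assignment and centroid
    updates until no point switches cluster. A deterministic farthest-point
    initialization replaces the random partition, purely for reproducibility."""
    centroids = [points[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centroids], axis=0)
        centroids.append(points[np.argmax(d)])
    centroids = np.array(centroids, dtype=float)
    labels = np.full(len(points), -1)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                                # converged: no point switched
        labels = new_labels
        for i in range(k):
            if np.any(labels == i):
                centroids[i] = points[labels == i].mean(axis=0)
    return labels, centroids

# Hypothetical 2-D MDS coordinates of realizations: two well-separated groups.
rng = np.random.default_rng(5)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centroids = kmeans(pts, k=2)

# One representative realization per cluster: the point closest to its centroid.
reps = [int(np.argmin(np.linalg.norm(pts - c, axis=1))) for c in centroids]
```

Only the realizations indexed by `reps` would then be passed to the flow simulator.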

Figure 6: Proposed workflow for uncertainty quantification: (a) distance between two models, (b) distance matrix, (c) models mapped into Euclidean space, (d) feature space, (e) pre-image construction, (f) P10, P50, P90 estimation.

The k-means algorithm assigns the points to k clusters S_i by minimizing the sum of squared distances between the points of each cluster and its center µ_i:

J = Σ_{i=1}^{k} Σ_{x_j ∈ S_i} || x_j − µ_i ||²

The algorithm starts by randomly partitioning the input points into k initial sets S_i. It then calculates the mean point, or centroid µ_i, of each set. Then, every point is assigned to the cluster whose centroid is closest to that point. These two steps are repeated until convergence, which is obtained when the points no longer switch clusters (or alternatively, when the centroids no longer change location). The k-means algorithm often starts with a random partition of the input points to initialize the clusters. However, this approach is not always optimal, and is subject to many local minima. In this work, we employ a two-step approach, which consists of initializing KKM with the results of an alternative to k-means called spectral clustering. This two-layer approach, first running spectral clustering to get an initial partitioning and then refining that partitioning by running KKM on it, typically results in a robust

partitioning of the data (Dhillon, ). For details about spectral k-means, see Appendix B.

The DKM is well adapted to model uncertainty on spatial parameters. However, the DKM can also be employed in the context of deterministic parameters, or of both deterministic and stochastic parameters. The next section proposes a comparison of the DKM and JMM.

Application to a synthetic case: comparison between both methods

In this section, we describe a synthetic case on which uncertainty assessment was performed using both DKM and JMM. Results are presented for both approaches, as well as results obtained by mixing both approaches in a new workflow.

Case description

The synthetic case used in this section is modified from the case presented in Suzuki and Caers (6). In this synthetic case, we assume again that we have a facies model with two different types of uncertainties:

- Deterministic (non-spatial) uncertain parameters:
  o Permeability of channels: 5, 5, 75 md
  o Porosity of channels: .5, .5, .
  o Critical mobile water saturation (SWCR): ., .5, .
- Geostatistical (stochastic, spatial) uncertain parameters:
  o Orientation of channels: N5E, N, N5W
  o Proportion of facies: %, % or %
  o Sinuosity of the channels: small, medium, large

The objective is to estimate the quantiles and the densities of the cumulative oil production as a function of time. The uncertainty in the geostatistical parameters is taken into account by combining the geostatistical uncertainties together, which leads to 7 possible training images (TI), depicted in Figure 7. 5 facies realizations per TI were generated using a multi-point geostatistical algorithm, giving a total of 5 possible geostatistical realizations which should be taken into account.

Figure 7: The 7 training images used in this application

The uncertainty in the deterministic parameters is analyzed using an experimental design. Since a second-order polynomial model is most often used in the JMM to model the RSM (Eq. and Eq. ), a Central Composite Design (CCD) is employed. A CCD is constructed with the deterministic parameters, giving 5 reservoir simulations. As a consequence, each of the 5 geostatistical realizations has 5 possible configurations of permeability, porosity and critical mobile water saturation (as given by the ED), leading to a total number of 5 reservoir models (5x5). To compare the accuracy of both modeling approaches, flow simulation was performed on all of those 5 models, an exercise that would not be possible in practice. The cumulative oil production for these 5 simulations as a function of time is presented in Figure 8.

Figure 8: Cumulative oil production as a function of time for the 5 simulations

The 5 simulations of cumulative oil production were performed using a standard finite-difference simulator, in order to serve as a reference for analyzing the efficiency and the quality of the results of both methods, the JMM and the DKM.
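For reference, the kind of design described above can be sketched in a few lines. This is an illustration rather than the authors' code, and it assumes a face-centered CCD (axial points at ±1 in coded units), since the text does not state the axial spacing:

```python
import itertools

import numpy as np

def central_composite_design(n_factors):
    """Face-centered CCD in coded units: 2^k factorial corners,
    2k axial points on the faces, and one center point."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
    axial = np.vstack([sign * np.eye(n_factors)[i]
                       for i in range(n_factors) for sign in (-1.0, 1.0)])
    center = np.zeros((1, n_factors))
    return np.vstack([corners, axial, center])

design = central_composite_design(3)  # 8 corners + 6 axial + 1 center = 15 runs
```

Each row of the coded design would then be mapped to actual levels of permeability, porosity and SWCR before running the corresponding flow simulations.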

4.2. Application of the Joint Modeling Method (JMM)

We first apply the JMM to the case presented previously. Since 5 flow simulations need to be performed for each geostatistical realization, the use of the JMM requires that a reasonable number of geostatistical realizations be considered. We are interested in assessing the accuracy of the JMM by varying the number of simulations used to obtain the response surfaces, while at the same time keeping the number of flow simulations feasible. Our first step is to reduce the number of simulations required by the JMM by selecting only one geostatistical realization per TI. Considering all 7 TIs leads to a total of 5x7=5 simulations, which would still be a significant number of flow simulations for real field cases. In order to vary (and reduce) the number of simulations to perform, we select geostatistical realizations from a subset of the TIs. The JMM results that follow are compared with the results for the entire set of 5 simulations. In the following figures, we present:

- (A): the P10, P50 and P90 quantile estimates of the cumulative oil production
- (B) and (C): the cumulative oil production at the two reporting times for each point of the ED and for each geostatistical realization (the value obtained by the mean model is shown in red, the interval of prediction in black)
- (D) and (E): density plots at the two reporting times for all 5 simulations (in red) and for the JMM (in blue)

Note that in order to calculate the quantiles as a function of time, the mean and dispersion models must be constructed for each time step, and the time steps are thus incorrectly assumed to be independent. We first apply the JMM by running a small number of flow simulations. Since the experimental design contains 5 points, only multiples of 5 are possible. Thus, to take into account the geostatistical uncertainty on facies, we consider different geostatistical realizations, resulting in 5 simulations.
To do so, we fix the sinuosity of the channels and the proportion of facies at their mean values - the orientation of the channels remains uncertain. The corresponding TIs are taken at the center of each of the groups in Figure 7. Uncertainty assessment was then performed on those 5 simulations. The resulting quantiles (A), intervals of prediction ((B) and (C)) and densities ((D) and (E)) are presented in Figure 9.
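The mean model fitted at each time step is such a second-order polynomial regressed on the design points. A minimal least-squares sketch, as an illustration rather than the authors' implementation (the design, response and test point below are invented for the check):

```python
import itertools

import numpy as np

def quadratic_features(X):
    """Second-order polynomial basis: 1, x_i, and x_i*x_j for i <= j."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares fit of the quadratic surface; returns a predictor."""
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return lambda X_new: quadratic_features(np.atleast_2d(X_new)) @ beta

# Synthetic check: a 3-level factorial design and a known quadratic response
X = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=3)))
y = 2 + X[:, 0] - 3 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]
predict = fit_response_surface(X, y)
```

The dispersion model of the JMM would be fitted in the same way, with the log of the residual variance at each design point as the response.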

Figure 9: Results of the JMM for TI, i.e. 5 simulations: (A) P10, P50 and P90 quantile estimates for all 5 simulations (red) and for the JMM (blue), (B) and (C) cumulative oil production at the two reporting times for each point of the ED, (D) and (E) densities at the two reporting times

We observe in Figure 9A a large overestimation of the P50 and P90 quantiles by the JMM. Also, for some points of the experimental design (8, 9, ), the interval of prediction is too large, thus exaggerating the stochastic uncertainty (Figures 9B and 9C). The two probability density plots (Figures 9D and 9E) show that the cumulative oil production density predicted by the JMM is too narrow compared to the exhaustive set. This is understandable, given that we have selected a very small number of geostatistical realizations to model the dispersion.

To improve the quality of the results obtained using the JMM, we gradually add more TIs. We now consider 9 TIs, again using one geostatistical realization per TI, corresponding to the first diagonal (from top left to bottom right) of each group of 9 TIs in Figure 7, for a total of 5 simulations. The results are shown in Figure 10.

Figure 10: Results of the JMM for 9 TIs, i.e. 5 simulations: (A) P10, P50 and P90 quantile estimates for all 5 simulations (red) and for the JMM (blue), (B) and (C) cumulative oil production at the two reporting times for each point of the ED, (D) and (E) densities at the two reporting times

As we observe in Figure 10A, the results have improved significantly with more simulations. The estimates of the production quantiles are very different from the case of 5 simulations. Moreover, the P90 quantiles, which were well estimated in the previous example, are now overestimated, and the P10 quantiles are underestimated, contrary to

the previous case, where overestimation was detected. In addition, the P50 quantiles are much more precise in this case. Another observation is that overestimated prediction intervals can still be observed (Figure 10C), and for some points of the experimental design (5-8, - in Figure 10B), simulated points fall outside the prediction interval. Finally, we observe for the JMM (Figures 10D and 10E) negative values of the cumulative oil production at both reporting times, which is clearly unphysical. Figure 11 shows the results where 5 TIs were used, taken from both diagonals in Figure 7. The total number of simulations for this case is 5.

Figure 11: Results of the JMM for 5 TIs, i.e. 5 simulations: (A) P10, P50 and P90 quantile estimates for all 5 simulations (red) and for the JMM (blue), (B) and (C) cumulative oil production at the two reporting times for each point of the ED, (D) and (E) densities at the two reporting times

As we observe in Figure 11A, the results have again improved with more simulations. However, the geostatistical uncertainty is still not completely captured - we can still observe an overestimation of the range of uncertainty (Figures 11B and 11C). Again, negative values of the cumulative oil production are observed in the probability density plots. Let us examine the results, in Figure 12, using all 7 TIs, which leads to a total of 5 flow simulations.

Figure 12: Results of the JMM for 7 TIs, i.e. 5 simulations: (A) P10, P50 and P90 quantile estimates for all 5 simulations (red) and for the JMM (blue), (B) and (C) cumulative oil production at the two reporting times for each point of the ED, (D) and (E) densities at the two reporting times

Again, the results have improved compared to the previous case. We observe that the results are very accurate, but a large number of flow simulations is required, which in many cases may not be feasible. The JMM constructs response surfaces and uses the Monte Carlo method to construct the probability densities and quantile production curves. Alternatively, we can employ the DKM.

4.3. Application of the Distance Kernel Method (DKM)

In this section, we apply the DKM with the kernel k-means methodology (KKM) to the synthetic case presented previously. As mentioned in Section , the method relies on an estimate of the dissimilarity between all the possible reservoir models (5 in this case). Note that in this case a flow-based distance is required, since static distances, like the Hausdorff distance, do not account for differences in the deterministic parameters such as uniform channel permeability, porosity, and critical water saturation. The distance is therefore calculated using streamline simulation. Streamline simulation is known to be a fast flow simulation technique due to its capability to take large time steps. Simulation was performed for only two time steps. The distance between any two reservoir models is defined as the average absolute difference in total field oil production rate at those two times. Using the dissimilarity distance defined previously, all reservoir models are mapped into a Euclidean space (cf. Figure 13A) using multidimensional scaling (MDS). The scatter plot in Figure 13B, representing the Euclidean distance between any two points in the mapping space as a function of the dissimilarity distance given by streamline simulation, shows clearly that the two distances are similar. Thus, we are assured that the Euclidean representation of the models is accurate enough for the Euclidean distance in the mapping space R to be used for the rest of the study instead of the dissimilarity distance.
Figure 13: (A) The 5 realizations mapped into the MDS space R, (B) scatter plot of the Euclidean distance between points in R versus the dissimilarity distance (correlation ρ = .9975)
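The mapping step above can be sketched with the textbook form of classical MDS: the flow-based dissimilarity matrix is squared, double-centered and eigen-decomposed, and the leading eigenvectors give the coordinates. This is a sketch of the standard algorithm, not the authors' code, and the toy production rates in the test are invented:

```python
import numpy as np

def dissimilarity_matrix(rates):
    """Average absolute difference in production rate between every
    pair of models (rows = models, columns = reporting times)."""
    return np.abs(rates[:, None, :] - rates[None, :, :]).mean(axis=2)

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: double-center the squared distance
    matrix and embed the points using its leading eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)                     # ascending eigenvalues
    top = np.argsort(w)[::-1][:dim]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

The quality check of Figure 13B then amounts to correlating the pairwise Euclidean distances of the embedded points with the entries of D (e.g. with np.corrcoef).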

The kernel employed to define the feature space F is Gaussian, with a parameter equal to % of the range of distances in the Euclidean space (σ = 5). Its expression is:

$$k(x, y) = \exp\left(-\frac{\left\| x - y \right\|^2}{2\sigma^2}\right)$$

where x and y represent realizations in the MDS space R. The KKM algorithm is then applied, which consists of using the traditional k-means clustering algorithm directly in the feature space F. In this example, we select 5 models and perform a flow simulation on each of them, which results in the same number of simulations as in the example application of the JMM, allowing a comparison of the results. To initialize the cluster centroids, we use a two-step approach: first, spectral clustering is applied to obtain initial centroids, and then the kernel k-means (KKM) algorithm is run from those initial centroids (see Appendix B for details about this two-step approach). Flow simulations were then performed on the 5 cluster centroids, shown in blue in Figure 14A. The results of these simulations are presented in Figure 14B. We can see that KKM permits the selection of representative realizations - the spread of uncertainty for only 5 simulations is quite good.

Figure 14: (A) Mapping space R: the blue points represent the models selected by the methodology, (B) cumulative oil production for the 5 realizations corresponding to the blue points

The P10, P50 and P90 quantiles (Figure 15) were calculated by weighting each realization by the number of points within its cluster.
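The feature-space clustering never needs explicit feature-space coordinates: the squared distance from a point to a cluster mean expands entirely in kernel entries. A minimal sketch of this kernel trick, assuming the loop converges and using an arbitrary σ (spectral-clustering initialization is omitted; a hand-picked initial labeling stands in for it). The representative of each cluster is taken here as the member closest to its feature-space centroid, a simple stand-in for the pre-image step:

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) on the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2 * sigma ** 2))

def kernel_kmeans(K, labels, n_iter=50):
    """Kernel k-means: ||phi(x) - mu_c||^2 expands to
    K_xx - 2 mean_i K_xi + mean_ij K_ij over cluster members."""
    n = K.shape[0]
    k = labels.max() + 1
    for _ in range(n_iter):
        d = np.empty((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            d[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                       + K[np.ix_(idx, idx)].mean())
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    # representative of each cluster: the member nearest its centroid
    reps = [int(np.where(labels == c)[0][d[labels == c, c].argmin()])
            for c in range(k)]
    return labels, reps
```

Flow simulation is then run only on the returned representatives, one per cluster.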

Figure 15: P10, P50 and P90 quantile estimates as a function of time

Figure 16: Density plots for all models and for the 5 selected models at the two reporting times

As we can see in Figure 15, the quantile estimates are accurate, while performing only 5 simulations (which is .% of the total number of simulations). The density estimates are also of good quality (Figure 16): the 5 simulations (blue) have similar properties to the entire set of 5 (red). Note that the densities are not smooth, due to the small number of simulations and the weights applied to each realization when computing those densities. In addition, a comparison with the quantile estimates from the JMM using 5 simulations shows that the DKM estimates the quantiles and densities better. To obtain equivalent results using the JMM, one must run a larger number of simulations.
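Weighting each selected realization by the size of its cluster amounts to reading quantiles off a weighted empirical CDF. A small sketch with invented values, not the authors' code:

```python
import numpy as np

def weighted_quantiles(values, weights, qs):
    """Quantiles of a weighted empirical CDF, each value carrying the
    number of realizations its cluster centroid represents."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / w.sum()
    # first value whose cumulative weight reaches each requested quantile
    return [v[min(np.searchsorted(cdf, q), len(v) - 1)] for q in qs]
```

With all weights equal this reduces to ordinary empirical quantiles; the (non-smooth) density plots can be built the same way by passing the cluster sizes as weights to a weighted histogram.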

4.4. Comparison with randomly chosen realizations

We now compare the results of the DKM and the JMM with a random selection of the same number of simulations. To do this, we generated sets of 5 and 5 simulations; we present comparisons with the best and the worst set. In the case of 5 simulations, the comparison is made with both the DKM and the JMM, whereas in the other case, the comparison is made with the JMM only.

Figure 17: P10, P50 and P90 values as a function of time: (A) 5 simulations selected by the JMM, (B) 5 simulations selected by the DKM, (C) best results using 5 randomly chosen models, (D) worst results using 5 randomly chosen models

Figure 18: P10, P50 and P90 values as a function of time: (A) JMM using 5 simulations, (B) best results using 5 randomly chosen models, (C) worst results using 5 randomly chosen models

We can see in Figure 17 that the DKM gives better quantile estimates than randomly chosen realizations, which is not always the case for the JMM (Figure 18). The main problem with the JMM is that, in order to be accurate, one must select the correct realizations to properly capture the stochastic uncertainty. In the next section, we thus propose to use the DKM procedure to select those realizations.

4.5. Combination of JMM and DKM

We now propose to combine the two approaches. Instead of arbitrarily choosing the training images on which to perform the JMM, we first perform the DKM with the values of permeability, porosity and SWCR fixed at their means, and then perform the JMM on the geostatistical realizations representing the P10, P50 and P90 quantiles of the production. To do so, geostatistical realizations were selected by the DKM. Flow simulation was performed on these realizations, and the P10, P50 and P90 realizations were identified. Subsequently, we apply the JMM using these realizations to quantify the stochastic uncertainty. A total of 5 simulations is used for the JMM (5 for the combination of both methods). Results are presented in Figure 19 below.

Figure 19: Quantiles and densities (at the two reporting times) obtained by the JMM on 5 models selected by the DKM

We can observe that the quantile estimates are much more accurate than in the case of 5 simulations using the JMM alone. The DKM procedure here allows the selection of suitable geostatistical realizations and thus improves the JMM. However, the results are still less accurate than applying the DKM alone.

5. Conclusions

The aim of this paper is to propose the DKM as an alternative to the JMM for quantifying uncertainty on both deterministic and stochastic parameters. The method has been compared with the JMM on a synthetic dataset consisting of a facies model with stochastic uncertainty on the channel orientation, the proportion of facies and the channel sinuosity, and with deterministic uncertainty on porosity, permeability and critical water saturation. The comparative study shows that the DKM provides more accurate quantile estimates than the JMM with only 5 flow simulations, clearly reducing the number of flow simulations required for an accurate estimate of the densities and quantiles of the response.

The efficiency of the DKM depends on an accurate measure of distance, which must be correlated with the production response(s) of interest. In the case presented here, the DKM is applied using a flow-based distance. A flow-based distance is required when flow parameters (such as critical water saturation) are uncertain; static measures of distance, such as the Hausdorff distance, may not be as accurate. Note that the distance calculation in this case required a streamline simulation for each model, but the CPU time for each of these simulations is negligible compared to a full flow simulation.

One limitation of the JMM is the number of geostatistical realizations that can be taken into account. In order to keep the number of simulations acceptable, only a few ( to 5) geostatistical realizations can be used in practice. When there are many possible training images, or simply many geostatistical realizations, a method to select an accurate subset of realizations for use with the JMM is necessary. One way to do so, also presented in this paper, is to combine the JMM and DKM approaches.
Results were promising in the synthetic case used, showing a great improvement over the case where the realizations were chosen randomly. In addition, one drawback of the JMM is that non-physical results may be obtained, for example negative predictions of the cumulative oil production. This is not the case with the DKM, which only considers responses from existing simulations and does not attempt to predict new responses from potentially inaccurate response surface models.


More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

Expectation-Maximization Methods in Population Analysis. Robert J. Bauer, Ph.D. ICON plc.

Expectation-Maximization Methods in Population Analysis. Robert J. Bauer, Ph.D. ICON plc. Expectation-Maximization Methods in Population Analysis Robert J. Bauer, Ph.D. ICON plc. 1 Objective The objective of this tutorial is to briefly describe the statistical basis of Expectation-Maximization

More information

B. Todd Hoffman and Jef Caers Stanford University, California, USA

B. Todd Hoffman and Jef Caers Stanford University, California, USA Sequential Simulation under local non-linear constraints: Application to history matching B. Todd Hoffman and Jef Caers Stanford University, California, USA Introduction Sequential simulation has emerged

More information

Use of Extreme Value Statistics in Modeling Biometric Systems

Use of Extreme Value Statistics in Modeling Biometric Systems Use of Extreme Value Statistics in Modeling Biometric Systems Similarity Scores Two types of matching: Genuine sample Imposter sample Matching scores Enrolled sample 0.95 0.32 Probability Density Decision

More information

Supplementary Figure 1. Decoding results broken down for different ROIs

Supplementary Figure 1. Decoding results broken down for different ROIs Supplementary Figure 1 Decoding results broken down for different ROIs Decoding results for areas V1, V2, V3, and V1 V3 combined. (a) Decoded and presented orientations are strongly correlated in areas

More information

Tensor Based Approaches for LVA Field Inference

Tensor Based Approaches for LVA Field Inference Tensor Based Approaches for LVA Field Inference Maksuda Lillah and Jeff Boisvert The importance of locally varying anisotropy (LVA) in model construction can be significant; however, it is often ignored

More information

Machine Learning (BSMC-GA 4439) Wenke Liu

Machine Learning (BSMC-GA 4439) Wenke Liu Machine Learning (BSMC-GA 4439) Wenke Liu 01-25-2018 Outline Background Defining proximity Clustering methods Determining number of clusters Other approaches Cluster analysis as unsupervised Learning Unsupervised

More information

Machine Learning (BSMC-GA 4439) Wenke Liu

Machine Learning (BSMC-GA 4439) Wenke Liu Machine Learning (BSMC-GA 4439) Wenke Liu 01-31-017 Outline Background Defining proximity Clustering methods Determining number of clusters Comparing two solutions Cluster analysis as unsupervised Learning

More information

Introduction to ANSYS DesignXplorer

Introduction to ANSYS DesignXplorer Lecture 4 14. 5 Release Introduction to ANSYS DesignXplorer 1 2013 ANSYS, Inc. September 27, 2013 s are functions of different nature where the output parameters are described in terms of the input parameters

More information

Downscaling saturations for modeling 4D seismic data

Downscaling saturations for modeling 4D seismic data Downscaling saturations for modeling 4D seismic data Scarlet A. Castro and Jef Caers Stanford Center for Reservoir Forecasting May 2005 Abstract 4D seismic data is used to monitor the movement of fluids

More information

Multiresponse Sparse Regression with Application to Multidimensional Scaling

Multiresponse Sparse Regression with Application to Multidimensional Scaling Multiresponse Sparse Regression with Application to Multidimensional Scaling Timo Similä and Jarkko Tikka Helsinki University of Technology, Laboratory of Computer and Information Science P.O. Box 54,

More information

General Instructions. Questions

General Instructions. Questions CS246: Mining Massive Data Sets Winter 2018 Problem Set 2 Due 11:59pm February 8, 2018 Only one late period is allowed for this homework (11:59pm 2/13). General Instructions Submission instructions: These

More information

Spatial Interpolation & Geostatistics

Spatial Interpolation & Geostatistics (Z i Z j ) 2 / 2 Spatial Interpolation & Geostatistics Lag Lag Mean Distance between pairs of points 11/3/2016 GEO327G/386G, UT Austin 1 Tobler s Law All places are related, but nearby places are related

More information

Supervised vs. Unsupervised Learning

Supervised vs. Unsupervised Learning Clustering Supervised vs. Unsupervised Learning So far we have assumed that the training samples used to design the classifier were labeled by their class membership (supervised learning) We assume now

More information

Modeling with Uncertainty Interval Computations Using Fuzzy Sets

Modeling with Uncertainty Interval Computations Using Fuzzy Sets Modeling with Uncertainty Interval Computations Using Fuzzy Sets J. Honda, R. Tankelevich Department of Mathematical and Computer Sciences, Colorado School of Mines, Golden, CO, U.S.A. Abstract A new method

More information

Data Preprocessing. Javier Béjar. URL - Spring 2018 CS - MAI 1/78 BY: $\

Data Preprocessing. Javier Béjar. URL - Spring 2018 CS - MAI 1/78 BY: $\ Data Preprocessing Javier Béjar BY: $\ URL - Spring 2018 C CS - MAI 1/78 Introduction Data representation Unstructured datasets: Examples described by a flat set of attributes: attribute-value matrix Structured

More information

Multicollinearity and Validation CIVL 7012/8012

Multicollinearity and Validation CIVL 7012/8012 Multicollinearity and Validation CIVL 7012/8012 2 In Today s Class Recap Multicollinearity Model Validation MULTICOLLINEARITY 1. Perfect Multicollinearity 2. Consequences of Perfect Multicollinearity 3.

More information

Quasi-Monte Carlo Methods Combating Complexity in Cost Risk Analysis

Quasi-Monte Carlo Methods Combating Complexity in Cost Risk Analysis Quasi-Monte Carlo Methods Combating Complexity in Cost Risk Analysis Blake Boswell Booz Allen Hamilton ISPA / SCEA Conference Albuquerque, NM June 2011 1 Table Of Contents Introduction Monte Carlo Methods

More information

Sensor Tasking and Control

Sensor Tasking and Control Sensor Tasking and Control Outline Task-Driven Sensing Roles of Sensor Nodes and Utilities Information-Based Sensor Tasking Joint Routing and Information Aggregation Summary Introduction To efficiently

More information

NONPARAMETRIC REGRESSION TECHNIQUES

NONPARAMETRIC REGRESSION TECHNIQUES NONPARAMETRIC REGRESSION TECHNIQUES C&PE 940, 28 November 2005 Geoff Bohling Assistant Scientist Kansas Geological Survey geoff@kgs.ku.edu 864-2093 Overheads and other resources available at: http://people.ku.edu/~gbohling/cpe940

More information

Automatic History Matching On The Norne Simulation Model

Automatic History Matching On The Norne Simulation Model Automatic History Matching On The Norne Simulation Model Eirik Morell - TPG4530 - Norwegian University of Science and Technology - 2008 Abstract This paper presents the result of an automatic history match

More information

Driven Cavity Example

Driven Cavity Example BMAppendixI.qxd 11/14/12 6:55 PM Page I-1 I CFD Driven Cavity Example I.1 Problem One of the classic benchmarks in CFD is the driven cavity problem. Consider steady, incompressible, viscous flow in a square

More information

11-Geostatistical Methods for Seismic Inversion. Amílcar Soares CERENA-IST

11-Geostatistical Methods for Seismic Inversion. Amílcar Soares CERENA-IST 11-Geostatistical Methods for Seismic Inversion Amílcar Soares CERENA-IST asoares@ist.utl.pt 01 - Introduction Seismic and Log Scale Seismic Data Recap: basic concepts Acoustic Impedance Velocity X Density

More information

9.1. K-means Clustering

9.1. K-means Clustering 424 9. MIXTURE MODELS AND EM Section 9.2 Section 9.3 Section 9.4 view of mixture distributions in which the discrete latent variables can be interpreted as defining assignments of data points to specific

More information

Statistical Matching using Fractional Imputation

Statistical Matching using Fractional Imputation Statistical Matching using Fractional Imputation Jae-Kwang Kim 1 Iowa State University 1 Joint work with Emily Berg and Taesung Park 1 Introduction 2 Classical Approaches 3 Proposed method 4 Application:

More information

Discovery of the Source of Contaminant Release

Discovery of the Source of Contaminant Release Discovery of the Source of Contaminant Release Devina Sanjaya 1 Henry Qin Introduction Computer ability to model contaminant release events and predict the source of release in real time is crucial in

More information

Basis Functions. Volker Tresp Summer 2017

Basis Functions. Volker Tresp Summer 2017 Basis Functions Volker Tresp Summer 2017 1 Nonlinear Mappings and Nonlinear Classifiers Regression: Linearity is often a good assumption when many inputs influence the output Some natural laws are (approximately)

More information

MCMC Methods for data modeling

MCMC Methods for data modeling MCMC Methods for data modeling Kenneth Scerri Department of Automatic Control and Systems Engineering Introduction 1. Symposium on Data Modelling 2. Outline: a. Definition and uses of MCMC b. MCMC algorithms

More information

Chapter 6: Examples 6.A Introduction

Chapter 6: Examples 6.A Introduction Chapter 6: Examples 6.A Introduction In Chapter 4, several approaches to the dual model regression problem were described and Chapter 5 provided expressions enabling one to compute the MSE of the mean

More information

Quantifying Data Needs for Deep Feed-forward Neural Network Application in Reservoir Property Predictions

Quantifying Data Needs for Deep Feed-forward Neural Network Application in Reservoir Property Predictions Quantifying Data Needs for Deep Feed-forward Neural Network Application in Reservoir Property Predictions Tanya Colwell Having enough data, statistically one can predict anything 99 percent of statistics

More information

10-701/15-781, Fall 2006, Final

10-701/15-781, Fall 2006, Final -7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly

More information

Updated Sections 3.5 and 3.6

Updated Sections 3.5 and 3.6 Addendum The authors recommend the replacement of Sections 3.5 3.6 and Table 3.15 with the content of this addendum. Consequently, the recommendation is to replace the 13 models and their weights with

More information

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Moritz Baecher May 15, 29 1 Introduction Edge-preserving smoothing and super-resolution are classic and important

More information

Non Stationary Variograms Based on Continuously Varying Weights

Non Stationary Variograms Based on Continuously Varying Weights Non Stationary Variograms Based on Continuously Varying Weights David F. Machuca-Mory and Clayton V. Deutsch Centre for Computational Geostatistics Department of Civil & Environmental Engineering University

More information

Clustering. Mihaela van der Schaar. January 27, Department of Engineering Science University of Oxford

Clustering. Mihaela van der Schaar. January 27, Department of Engineering Science University of Oxford Department of Engineering Science University of Oxford January 27, 2017 Many datasets consist of multiple heterogeneous subsets. Cluster analysis: Given an unlabelled data, want algorithms that automatically

More information

Convexization in Markov Chain Monte Carlo

Convexization in Markov Chain Monte Carlo in Markov Chain Monte Carlo 1 IBM T. J. Watson Yorktown Heights, NY 2 Department of Aerospace Engineering Technion, Israel August 23, 2011 Problem Statement MCMC processes in general are governed by non

More information

Nelder-Mead Enhanced Extreme Learning Machine

Nelder-Mead Enhanced Extreme Learning Machine Philip Reiner, Bogdan M. Wilamowski, "Nelder-Mead Enhanced Extreme Learning Machine", 7-th IEEE Intelligent Engineering Systems Conference, INES 23, Costa Rica, June 9-2., 29, pp. 225-23 Nelder-Mead Enhanced

More information

A noninformative Bayesian approach to small area estimation

A noninformative Bayesian approach to small area estimation A noninformative Bayesian approach to small area estimation Glen Meeden School of Statistics University of Minnesota Minneapolis, MN 55455 glen@stat.umn.edu September 2001 Revised May 2002 Research supported

More information

Closing the Loop via Scenario Modeling in a Time-Lapse Study of an EOR Target in Oman

Closing the Loop via Scenario Modeling in a Time-Lapse Study of an EOR Target in Oman Closing the Loop via Scenario Modeling in a Time-Lapse Study of an EOR Target in Oman Tania Mukherjee *(University of Houston), Kurang Mehta, Jorge Lopez (Shell International Exploration and Production

More information

Nonparametric Survey Regression Estimation in Two-Stage Spatial Sampling

Nonparametric Survey Regression Estimation in Two-Stage Spatial Sampling Nonparametric Survey Regression Estimation in Two-Stage Spatial Sampling Siobhan Everson-Stewart, F. Jay Breidt, Jean D. Opsomer January 20, 2004 Key Words: auxiliary information, environmental surveys,

More information

Rotation and affinity invariance in multiple-point geostatistics

Rotation and affinity invariance in multiple-point geostatistics Rotation and ainity invariance in multiple-point geostatistics Tuanfeng Zhang December, 2001 Abstract Multiple-point stochastic simulation of facies spatial distribution requires a training image depicting

More information

Spatial Outlier Detection

Spatial Outlier Detection Spatial Outlier Detection Chang-Tien Lu Department of Computer Science Northern Virginia Center Virginia Tech Joint work with Dechang Chen, Yufeng Kou, Jiang Zhao 1 Spatial Outlier A spatial data point

More information

SOM+EOF for Finding Missing Values

SOM+EOF for Finding Missing Values SOM+EOF for Finding Missing Values Antti Sorjamaa 1, Paul Merlin 2, Bertrand Maillet 2 and Amaury Lendasse 1 1- Helsinki University of Technology - CIS P.O. Box 5400, 02015 HUT - Finland 2- Variances and

More information

History matching with an ensemble Kalman filter and discrete cosine parameterization

History matching with an ensemble Kalman filter and discrete cosine parameterization Comput Geosci (2008) 12:227 244 DOI 10.1007/s10596-008-9080-3 ORIGINAL PAPER History matching with an ensemble Kalman filter and discrete cosine parameterization Behnam Jafarpour & Dennis B. McLaughlin

More information

Data Preprocessing. Javier Béjar AMLT /2017 CS - MAI. (CS - MAI) Data Preprocessing AMLT / / 71 BY: $\

Data Preprocessing. Javier Béjar AMLT /2017 CS - MAI. (CS - MAI) Data Preprocessing AMLT / / 71 BY: $\ Data Preprocessing S - MAI AMLT - 2016/2017 (S - MAI) Data Preprocessing AMLT - 2016/2017 1 / 71 Outline 1 Introduction Data Representation 2 Data Preprocessing Outliers Missing Values Normalization Discretization

More information

Calibration of NFR models with interpreted well-test k.h data. Michel Garcia

Calibration of NFR models with interpreted well-test k.h data. Michel Garcia Calibration of NFR models with interpreted well-test k.h data Michel Garcia Calibration with interpreted well-test k.h data Intermediate step between Reservoir characterization Static model conditioned

More information

Today. Lecture 4: Last time. The EM algorithm. We examine clustering in a little more detail; we went over it a somewhat quickly last time

Today. Lecture 4: Last time. The EM algorithm. We examine clustering in a little more detail; we went over it a somewhat quickly last time Today Lecture 4: We examine clustering in a little more detail; we went over it a somewhat quickly last time The CAD data will return and give us an opportunity to work with curves (!) We then examine

More information

VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung

VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung POLYTECHNIC UNIVERSITY Department of Computer and Information Science VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung Abstract: Techniques for reducing the variance in Monte Carlo

More information

Divide and Conquer Kernel Ridge Regression

Divide and Conquer Kernel Ridge Regression Divide and Conquer Kernel Ridge Regression Yuchen Zhang John Duchi Martin Wainwright University of California, Berkeley COLT 2013 Yuchen Zhang (UC Berkeley) Divide and Conquer KRR COLT 2013 1 / 15 Problem

More information

CHAPTER 3 AN OVERVIEW OF DESIGN OF EXPERIMENTS AND RESPONSE SURFACE METHODOLOGY

CHAPTER 3 AN OVERVIEW OF DESIGN OF EXPERIMENTS AND RESPONSE SURFACE METHODOLOGY 23 CHAPTER 3 AN OVERVIEW OF DESIGN OF EXPERIMENTS AND RESPONSE SURFACE METHODOLOGY 3.1 DESIGN OF EXPERIMENTS Design of experiments is a systematic approach for investigation of a system or process. A series

More information

Machine-learning Based Automated Fault Detection in Seismic Traces

Machine-learning Based Automated Fault Detection in Seismic Traces Machine-learning Based Automated Fault Detection in Seismic Traces Chiyuan Zhang and Charlie Frogner (MIT), Mauricio Araya-Polo and Detlef Hohl (Shell International E & P Inc.) June 9, 24 Introduction

More information

Cluster Analysis. Mu-Chun Su. Department of Computer Science and Information Engineering National Central University 2003/3/11 1

Cluster Analysis. Mu-Chun Su. Department of Computer Science and Information Engineering National Central University 2003/3/11 1 Cluster Analysis Mu-Chun Su Department of Computer Science and Information Engineering National Central University 2003/3/11 1 Introduction Cluster analysis is the formal study of algorithms and methods

More information

Louis Fourrier Fabien Gaie Thomas Rolf

Louis Fourrier Fabien Gaie Thomas Rolf CS 229 Stay Alert! The Ford Challenge Louis Fourrier Fabien Gaie Thomas Rolf Louis Fourrier Fabien Gaie Thomas Rolf 1. Problem description a. Goal Our final project is a recent Kaggle competition submitted

More information

PEER Report Addendum.

PEER Report Addendum. PEER Report 2017-03 Addendum. The authors recommend the replacement of Section 3.5.1 and Table 3.15 with the content of this Addendum. Consequently, the recommendation is to replace the 13 models and their

More information

Short Note: Some Implementation Aspects of Multiple-Point Simulation

Short Note: Some Implementation Aspects of Multiple-Point Simulation Short Note: Some Implementation Aspects of Multiple-Point Simulation Steven Lyster 1, Clayton V. Deutsch 1, and Julián M. Ortiz 2 1 Department of Civil & Environmental Engineering University of Alberta

More information

CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points]

CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points] CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, 2015. 11:59pm, PDF to Canvas [100 points] Instructions. Please write up your responses to the following problems clearly and concisely.

More information