Model Based Symbolic Description for Big Data Analysis


*Carlo Drago, **Carlo Lauro and **Germana Scepi
*University of Rome Niccolo Cusano, **University of Naples Federico II
COMPSTAT 2014, 21st International Conference on Computational Statistics

Outline
The Statistical Problem
Beanplot Time Series Definition
Kernel and Bandwidth Choice
Beanplot Characteristics and Robustness
Parameterization
Beanplot Modelling
Multiple Beanplot Time Series
Beanplot Multiple Factor Analysis
Beanplot Clustering (using the Beanplot Model Distance)
Beanplot Constrained Clustering (using the Beanplot Model Distance)
Beanplot Forecasting

Big Data
Recent technological advances have brought many innovations in data; in particular, there has been an explosion of available large data sets. Big data is the term frequently used today for any collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. Big data are characterized by:
high volume
high velocity
high variety
This type of data usually also shows a temporal dimension.

Financial Big Data
Big data is especially promising and differentiating for financial services companies. In fact, the financial business copes with hundreds of millions of daily transactions and uses big data in order to transform its processes and organization and to obtain competitive advantages in financial markets. Financial firms must be able to collect, store, and analyze this rapidly changing type of data in order to maximize profits, reduce risk, and meet increasingly stringent regulatory requirements. The extraction of insights from such complex, and frequently unstructured, data is a very important step in this process, and the statistical approach can give a fundamental contribution in this sense.

Financial Big Data
We consider as big data observations on financial variables, taken daily or at a finer time scale, often irregularly spaced over time, and usually exhibiting periodic (intra-day and intra-week) patterns in financial markets. High-frequency data possess these peculiar features and can be considered an example of big data in financial markets: records of transactions and quotes for stocks or bonds, currencies and so on. These peculiar time series show many difficulties in visualization, and if they are analyzed by means of an aggregated index this leads to an evident information loss.

The Frequency Domain
A time series of distributions offers a more informative representation than other forms of aggregated time series. In order to analyze these data we consider them not in the temporal domain of the time series, but in the frequency domain (considering, for example, the day). In this sense we count the number of occurrences over the interval of each specific value. Doing so has several advantages:
We can easily detect patterns in the data, such as the most recurrent observations in the temporal interval.
We can detect the inter-temporal seasonalities which can occur in the temporal interval.
We can observe the similarities between different series.

From Financial Big Data to Symbolic Data
From the initial financial big data we obtain a symbolic data table in which each datum can be represented as a distribution. At this point we can:
Represent the distribution as beanplot data
Choose the adequate data model
Parameterize the data model and obtain the relevant parameters
The final parameters are the relevant big data representation and can be used in clustering and forecasting.

From Financial Big Data to Symbolic Data
Figure: From Financial Big Data to Symbolic Data (the first graph is from Martinaitis (2012))

Methods
Figure: Methods

Beanplot Time Series (BTS)
A Beanplot time series can be defined as an ordered sequence of beanplot data (Kampstra 2008) over time. The advantage of using the beanplot is its capacity to represent the intra-period data structure at time t. In a Beanplot time series, the density at time t, with t = 1...T, is defined as:

\hat{b}_{K,h,t}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right) = \frac{1}{nh}\left(K\left(\frac{x - x_1}{h}\right) + K\left(\frac{x - x_2}{h}\right) + \cdots + K\left(\frac{x - x_n}{h}\right)\right)   (1)

where K is a kernel function, h is a smoothing parameter called the bandwidth, and n is the number of intra-period observations x_i.
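Equation (1) can be sketched directly in a few lines; the following is a minimal pure-Python version of the estimator, assuming a Gaussian kernel and illustrative intra-period price data (not the slides' Dow Jones data):

```python
import math

def gaussian_kernel(u):
    # Gaussian kernel K(u) = (1 / sqrt(2*pi)) * exp(-u^2 / 2)
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, data, h):
    # Density estimate b_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h),
    # as in equation (1)
    n = len(data)
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (n * h)

# Illustrative intra-period observations for one beanplot period
prices = [100.2, 100.5, 101.1, 100.9, 101.4, 100.7]
density_at_101 = kde(101.0, prices, h=0.3)
```

Evaluating `kde` on a grid of x values traces out one beanplot's density; repeating this per period yields the beanplot time series.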

Beanplot Taxonomies
We can detect some typical taxonomies in the beanplots:
A) Unimodality: data tend to gather around one mode in a regular way.
B) Multimodality: data tend to gather around two modes.
C) Break: data tend to gather around two modes, but there is at least a break between the observations.
Figure: Beanplot Taxonomy

Identifying Intra-Period Breaks
A beanplot can be characterised by some groups of internal outlier observations (more than one); the final result is a break in the data structure. In order to detect the intra-period breaks:
We sort the observations from the highest to the lowest.
We compute the first differences \Delta_i, with i = 1...n-1, and their mean \bar{\Delta} = \sum_i \Delta_i / (n-1).
We consider as relevant the values above a specified threshold, for example \Delta_i > 3\bar{\Delta}.
In particular, these values need to break the internal patterns considered. It is relevant to take into account that we can weight the internal outliers detected; in this way the beanplot is represented by a suitable weighting system.
Figure: Intra-period breaks
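The three detection steps above can be sketched as follows; the sample values and the threshold factor of 3 are illustrative assumptions, not the slides' data:

```python
def detect_breaks(observations, factor=3.0):
    # Sort from highest to lowest, take first differences between
    # consecutive sorted values, and flag gaps larger than
    # factor * (mean gap) as intra-period breaks.
    s = sorted(observations, reverse=True)
    diffs = [s[i] - s[i + 1] for i in range(len(s) - 1)]
    mean_diff = sum(diffs) / len(diffs)
    # Return the (upper, lower) value pairs delimiting each detected break
    return [(s[i], s[i + 1]) for i, d in enumerate(diffs)
            if d > factor * mean_diff]

# Two clusters of values separated by a wide gap: one break is detected
values = [10.1, 10.3, 10.2, 10.4, 20.0, 20.2, 20.1]
breaks = detect_breaks(values)
```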

Kernels
Various kernels K can generally be chosen: Gaussian, uniform, Epanechnikov, triweight, exponential, cosine, among others. The kernel is chosen in order to represent the density function adequately. K needs to satisfy:

\int_{-\infty}^{+\infty} K(u)\,du = 1   (2)

Uniform:      K(u) = \frac{1}{2}\,\mathbf{1}(|u| \le 1)   (3)
Epanechnikov: K(u) = \frac{3}{4}(1 - u^2)\,\mathbf{1}(|u| \le 1)   (4)
Triweight:    K(u) = \frac{35}{32}(1 - u^2)^3\,\mathbf{1}(|u| \le 1)   (5)
Gaussian:     K(u) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}u^2}   (6)
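As a quick numerical sanity check (not part of the slides), each of the listed kernels can be verified to integrate to 1; the midpoint rule and integration range here are assumptions chosen for illustration:

```python
import math

# Kernel functions from equations (3)-(6)
kernels = {
    "uniform":      lambda u: 0.5 if abs(u) <= 1 else 0.0,
    "epanechnikov": lambda u: 0.75 * (1 - u * u) if abs(u) <= 1 else 0.0,
    "triweight":    lambda u: (35 / 32) * (1 - u * u) ** 3 if abs(u) <= 1 else 0.0,
    "gaussian":     lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi),
}

def integrate(f, lo=-8.0, hi=8.0, steps=16000):
    # Simple midpoint rule, adequate for these kernels on [-8, 8]
    w = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * w) for i in range(steps)) * w

# Each integral should be (numerically) equal to 1, as required by (2)
integrals = {name: integrate(k) for name, k in kernels.items()}
```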

Kernel Properties
The kernel function K(u) is nonnegative and needs to fulfill (Racine 2008):

\int K(u)\,du = 1   (8)

K(u) = K(-u)   (9)

\int u^2 K(u)\,du = \kappa_2 > 0   (10)

Kernel Selection
"It turns out that a range of kernel functions result in estimators having similar relative efficiencies, [so] one could choose the kernel based on computational considerations, the Gaussian kernel being a popular choice..." (Racine 1986)
In order to approximate our data we choose the Gaussian kernel:

K(u) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}u^2}   (11)

In the big data setting, the Gaussian kernel is the simplest to interpret. "...unlike choosing a kernel function, however, choosing an appropriate bandwidth is a crucial aspect of sound nonparametric analysis" (Racine 1986)

Kernel Selection
Figure: Kernel Choice and Kernel Density Estimation. The figure shows the kernel density estimate computed using a Gaussian kernel and a bandwidth of h = 0.3 (R code by François 2012).

BTS: Bandwidth Selection
We show the impact of different selected bandwidths (using three choices: low, high and Sheather-Jones) on the beanplot time series. In the example we consider a yearly interval for the beanplot observation, related to the Dow Jones Index. This interval can be validated by considering the temporal horizons relevant for these data (stocks); in fact, in risk management applications the relevant interval is the year (to take into account the risks of financial crises). By considering the bandwidth we can observe:
Low bandwidth: tends to show many bumps, or to maximize the number of bumps per beanplot.
High bandwidth: we tend to have a more regular shape of the density traces; however, here the risk is to lose some information.
Sheather-Jones method: the bandwidth changes beanplot by beanplot, so the bandwidth itself becomes an indicator of variability.
Usually the impact of both bandwidth selection and kernel selection is assessed by simulation.

BTS: Bandwidth Selection
Dow Jones BTS bandwidth selection: yearly beanplot time series on Dow Jones daily data (2003-2010). Different bandwidth choices on the beanplot time series: low bandwidth h = 8, high bandwidth h = 102, and the Sheather and Jones method (which uses a pilot estimate of derivatives to choose the bandwidth). Kernel selected: Gaussian.

The Impact of Changing the Kernel and the Bandwidth
It is possible to explore the beanplot data characteristics using different kernels and bandwidths. We choose the Gaussian kernel (for its flexibility) and the bandwidth obtained by the Sheather-Jones method (to explore the data structure).

Beanplot Time Series: Characteristics
Beanline: the mean or the median.
Beanplot lower and upper bounds: [X]_t = [X_{t,l}, X_{t,u}] with -\infty < X_{t,l} \le X_{t,u} < +\infty
Beanplot center and radius: [X]_t = \langle X_{t,c}, X_{t,r} \rangle where X_{t,c} = (X_{t,l} + X_{t,u})/2 and X_{t,r} = (X_{t,u} - X_{t,l})/2
Quantiles
Main characteristics:
Location: the beanline mean, the beanplot center.
Size: beanplot radius, lower and upper bounds.
Shape: the h parameter regulates the density trace, so the higher the bandwidth the smoother the density function. The h parameter can be obtained using the Sheather-Jones method (see Kampstra (2008)). There are relevant effects also on the kurtosis.
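The center/radius representation is a two-line computation from the bounds; the bounds used here are illustrative assumptions, not values from the Dow Jones data:

```python
def bounds_to_center_radius(lower, upper):
    # Center X_c = (X_l + X_u) / 2 and radius X_r = (X_u - X_l) / 2
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    return center, radius

# Illustrative lower/upper bounds of one beanplot observation
center, radius = bounds_to_center_radius(96.5, 104.1)
```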

Beanplot Time Series: Characteristics
Intra-period and inter-period variability: the yearly beanplot time series on Dow Jones daily data (1996-2010) allows the identification of structural changes and intra-period variability patterns. The kernel chosen is the Gaussian; the bandwidth is obtained by means of the Sheather-Jones method.

Beanplot Modeling: Choosing the Class of the Model
We consider the symbolic aggregation approach, taking the day as the temporal interval.
We take a frequency-domain approach in order to extract the relevant daily patterns.
At this point we choose the class of the model: in particular, the number of mixture components to use, the distributions considered, and so on. In our case we choose two mixture components, because the goodness-of-fit (gof) indexes show a good approximation of the data; at the same time, the Gaussian distribution maximizes the gof index in the experiments we have performed on the data.
From the relevant daily data we extract the relevant parameters by the parameterization procedure. In particular, we consider a finite mixture model for each density function.

Beanplot Parameterization
In order to compare and analyse the beanplot time series we need to parameterize the different beanplots. The aims of the parameterization are:
Synthesizing the beanplot observations
Comparing, analysing and interpreting the beanplot observations
Storing big data
In this sense:
We consider a kernel density estimation of the density function (a bandwidth h and a kernel K); we obtain B_t^K.
We fit a finite mixture model to the density function; we obtain B_t^M.
Model diagnostics and model fit.

Beanplot by Mixture Models
Parameterization is important because the stored relevant information of the beanplots can be used in clustering and in forecasting. For the parameterization we estimate the model parameters as a finite mixture density function. So we have:

B_t^M = \sum_{j=1}^{J} \pi_j f(x \mid \theta_j)   (12)

where \pi_1, ..., \pi_J are scalars and \theta_1, ..., \theta_J are vectors of parameters, with 0 \le \pi_j \le 1 and \pi_1 + \pi_2 + \cdots + \pi_J = 1. Therefore we obtain A_t^\mu (means), A_t^\sigma (standard deviations) and A_t^p (weights). We use Gaussian distributions for their flexibility, and Maximum Likelihood Estimation for the estimation of the parameters.
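A sketch of the mixture density in equation (12) for J = 2 Gaussian components; for illustration it reuses the parameters reported for model m1 in the clustering slide (pi = 0.63, 0.37; mu = 304.05, 304.05; sigma = 447.50, 233.92). The maximum likelihood estimation itself is not reproduced here:

```python
import math

def normal_pdf(x, mu, sigma):
    # Gaussian component density f(x | mu, sigma)
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, means, sigmas):
    # B_t^M(x) = sum_j pi_j * f(x | theta_j), with sum_j pi_j = 1
    return sum(p * normal_pdf(x, m, s)
               for p, m, s in zip(weights, means, sigmas))

# Two-component parameterization (pi_j, mu_j, sigma_j) of model m1
weights = [0.63, 0.37]
means = [304.05, 304.05]
sigmas = [447.50, 233.92]
density = mixture_pdf(300.0, weights, means, sigmas)
```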

B_t^M Parameters: Interpretation
The parameters can be interpreted in this way:
\mu_j: represent the main intra-period characteristics, for example, in the financial context, the values around which the price of a stock has gathered over time. Changes in \mu_j can occur in the presence of structural changes.
\sigma_j: represent the intra-period variability; in financial terms this can mean higher volatility. Changes in \sigma_j can occur in the presence of financial news (higher or lower intra-period volatility).
\pi_j: represents the relative weight of each distinct group of observations. Changes in \pi_j are related to intra-period changes.

Number of B_t^M Parameters
The number of parameters to estimate is related to the number of components (C) in the mixture. A feasible solution needs to be a compromise between comparability, simplicity and usability. After the estimation of the model it is necessary to assess the quality of the fit.
Figure: Beanplot Model with C = 2

Weighting
For every finite mixture model we measure the fit of the model using a goodness-of-fit index. The index measures the level of fit of the model relative to the initial data: 1 represents the highest level of fit, 0 the minimum. This index is used to weight the observations in all the different models of models, so that observations which do not represent the data adequately are weighted less, while observations with a higher goodness of fit are weighted more.

Multiple Beanplot Time Series (MBTS)
Here, with the aim of creating a representative market index, we consider a beanplot to take into account the intra-period variation. In particular we construct a Beanplot market index in order to represent the entire market risk. A beanplot market index can have relevant applications in risk management, to anticipate risk over time. At the same time, a beanplot market index can reflect the state of an economy and the sentiment of investors, and help investment decisions. In this sense we extend our previous approach for single beanplot analysis to the case of Multiple Beanplot Time Series.

Multiple Beanplot Time Series
Multiple Beanplot Time Series can be defined as the simultaneous observation of more than one Beanplot Time Series. For example, we can observe the Beanplot Time Series related to more than one financial market. By considering the multiple beanplot time series related to a market, the resulting synthesis will be a beanplot representing the entire market (as an index of the entire market, for example, the FTSE MIB in the Italian case). Possible real applications:
Exploratory Time Series Analysis
Constructing composite indicators based on multiple beanplot time series
Portfolio selection
Change point detection
Forecasting

Multiple Beanplot Time Series Analysis
We consider four different methods with different aims:
Multiple Factor Analysis, with the aim of seeking the common structure of the blocks describing the multiple beanplot time series.
Clustering, with the aim of detecting relevant subgroups over time and finding similar beanplot observations. These observations can be related to different stocks, and the results can be used in portfolio selection strategies.
Constrained Clustering, with the aim of detecting relevant subperiods in a beanplot time series. These relevant subperiods, represented by groups of beanplots over time, can be used to detect market change points.
Forecasting, with the aim of predicting the observations over time. The models can be used in trading.

Beanplot Multiple Factor Analysis (BMFA)
The aim of the method is to synthesize the different multiple beanplot time series in order to obtain indexes of the market or the portfolio over time. The indexes can be used to support decisions. We consider the gof, that is, the capacity of the models to approximate the original data, as one of the most important elements in building the index. We parameterize the different beanplot time series; in this case we obtain the parameters related to the weights, the means and the variances for each datum. In this example we visualize the first parameter (the weight of the first mixture component):

   m1.p1  m2.p1  m3.p1  m4.p1  m5.p1  m6.p1  m7.p1
1   0.73   0.57   0.48   0.58   0.30   0.76   0.34
2   0.63   0.64   0.53   0.48   0.67   0.14   0.65
3   0.69   0.51   0.57   0.91   0.89   0.60   0.61
4   0.76   0.39   0.67   0.87   0.33   0.90   0.62
5   0.27   0.64   0.50   0.72   0.49   0.59   0.56
6   0.24   0.66   0.72   0.60   0.26   0.22   0.54
7   0.51   0.98   0.86   0.55   0.78   0.28   0.93
8   0.23   0.28   0.83   0.26   0.69   0.00   0.76

Beanplot Multiple Factor Analysis
Here we visualize the matrix for the weight of the second mixture component:

   m1.p2  m2.p2  m3.p2  m4.p2  m5.p2  m6.p2  m7.p2
1   0.27   0.43   0.52   0.42   0.70   0.24   0.66
2   0.37   0.36   0.47   0.52   0.33   0.86   0.35
3   0.31   0.49   0.43   0.09   0.11   0.40   0.39
4   0.24   0.61   0.33   0.13   0.67   0.10   0.38
5   0.73   0.36   0.50   0.28   0.51   0.41   0.44
6   0.76   0.34   0.28   0.40   0.74   0.78   0.46
7   0.49   0.02   0.14   0.45   0.22   0.72   0.07
8   0.77   0.72   0.17   0.74   0.31   1.00   0.24

Beanplot Multiple Factor Analysis
The mean of the first mixture component:

   m1.m1   m2.m1    m3.m1   m4.m1   m5.m1   m6.m1   m7.m1
1   82.19   336.30   72.25  104.34   49.81   47.17   29.07
2  304.05   366.78  170.76  369.46  170.40   45.92  297.46
3   90.79   448.03  827.77  612.05  107.85  209.93  289.85
4   65.93    54.46  693.90  323.22  161.90  427.09  114.29
5   93.20   123.75  324.24  211.98   57.44  296.11  301.16
6  380.75   975.04   90.62  196.74  150.57   27.91  493.68
7  738.65  1260.84  222.19  689.56  533.29   47.30  701.92
8  723.23  1387.33   76.92   75.71  624.56  136.35  626.18

Beanplot Multiple Factor Analysis
The mean of the second mixture component:

   m1.m2   m2.m2    m3.m2    m4.m2   m5.m2   m6.m2   m7.m2
1  196.28   583.40   315.45  104.34   85.67  157.03   93.17
2  304.05  1321.94   678.52  424.55  170.40  251.43  297.46
3  313.62   448.03  1005.00  834.99  252.48  301.69  631.93
4  196.00   356.98   801.19  383.43  202.49  604.11  306.53
5  276.73   550.80   612.69  383.76  162.39  457.51  301.16
6  484.66  1188.33   169.80  592.88  463.98  157.05  715.68
7  771.67  1580.85   414.31  850.60  778.78  161.16  824.54
8  969.05  1661.30   276.89  304.42  777.33  136.35  792.14

Beanplot Multiple Factor Analysis
The variance parameter of the first mixture component:

   m1.s1  m2.s1   m3.s1   m4.s1   m5.s1   m6.s1   m7.s1
1   50.37  156.14   48.26  135.63   24.80   25.53   12.25
2  447.50  210.03  110.08  230.87  354.57   15.93  347.48
3   43.31  140.41   67.09   87.25   50.90   34.15   64.24
4   42.06   27.26   40.42  128.32   53.06   59.19   55.69
5   64.46   97.20   86.14   77.59   26.81  105.84   86.11
6   22.18  154.71   32.28  132.58  136.55   21.24   71.54
7   29.09   79.46   68.33   55.11   66.68   20.13   51.98
8   37.35   73.39   53.28   25.40   95.38   75.48   48.66

Beanplot Multiple Factor Analysis
The variance parameter of the second mixture component:

   m1.s2  m2.s2   m3.s2   m4.s2   m5.s2   m6.s2   m7.s2
1   86.64   94.68   47.92   58.58   36.64   33.44   43.28
2  233.92  244.25  198.35  242.20  102.43  146.07  181.34
3   54.21   70.07   40.71   13.69   21.53   43.93   93.33
4   44.08  117.17   73.56  135.32   29.21   32.08   39.72
5   52.45   48.41  134.09   36.90   50.20   56.45   54.34
6  133.13   85.80   27.77   36.86   67.26   64.96   42.63
7   56.94   39.59   15.51   52.00   49.61   92.77    9.82
8   70.26   60.86   35.51  140.37   26.44   86.54   49.18

Beanplot Multiple Factor Analysis
We also obtain the gof index for each mixture. Each model is represented by its parameters and by its gof index. The gof index is necessary in order to down-weight, in the different models, the observations which have a lower gof.

   m1.gof  m2.gof  m3.gof  m4.gof  m5.gof  m6.gof  m7.gof
1    0.98    0.98    0.98    0.95    0.99    0.96    0.97
2    0.82    0.99    0.97    0.94    0.61    0.95    0.71
3    0.96    0.98    1.00    1.00    0.98    0.99    0.97
4    0.96    0.95    1.00    0.97    1.00    0.99    0.97
5    0.98    0.95    0.97    0.98    0.99    0.99    0.99
6    0.97    1.00    0.99    0.98    0.98    0.98    0.99
7    1.00    0.99    0.99    1.00    1.00    0.98    1.00
8    1.00    0.99    0.96    0.97    1.00    0.93    1.00

Beanplot Multiple Factor Analysis
We can obtain the index as beanplots from the block PCA, weighting by the gof index. At the end of the procedure we obtain the beanplot prototype time series. The global PCA is performed on a matrix of the merged initial datasets (Abdi and Valentine 2007).
Figure: MFA Beanplot Prototype Time Series

Beanplot Multiple Factor Analysis
By considering the correlation circle we can observe the variables of high-performing stocks (represented by higher means) versus those characterized by low means (x-axis). At the same time we are able to see characterizations of higher volatility on the y-axis.
Figure: Correlation Circle

Beanplot Multiple Factor Analysis
We also obtain the individual factor maps and the groups representation. These results can be interpreted financially:
Individual factor map (1) shows the characteristics of the different temporal observations; here we can observe the dynamics over time of the market as a whole.
Individual factor map (2) shows how the different stocks (represented by the different models) perform over time. It is possible to see that some stocks tend to grow more than others, so they seem to be good opportunities (model 2 and model 5).
The groups representation shows the portfolio selection by considering the different performances of the stocks (or models). In this context a strategy of picking first of all stocks 5 and 7, and then 1 and 2, seems reasonable; overall these stocks seem convenient considering their performances over time. The plot is useful in order to discriminate good stocks from the others.
We use the gof index in order to weight the observations accordingly.

Figure: Individual Factor Map (1)

Figure: Individual Factor Map (2)

Figure: Groups Representation

Beanplot Clustering
The aim of the clustering procedure is to find groups of beanplot models, or stocks, which are most similar on a given day. The procedure can be very useful in stock picking processes. In this context the relevant distance used is the model distance by Lauro, Romano and Giordano (2006). By using this distance we are able to discover that stocks 2 and 3 behave very peculiarly within the group of stocks considered, and that stocks 1 and 7 together show a very low gof. Finally, we are able to discriminate the different stock typologies.

Beanplot Clustering

model  t  p1    p2    m1      m2       s1      s2      gof
m1     2  0.63  0.37  304.05   304.05  447.50  233.92  0.82
m2     2  0.64  0.36  366.78  1321.94  210.03  244.25  0.99
m3     2  0.53  0.47  170.76   678.52  110.08  198.35  0.97
m4     2  0.48  0.52  369.46   424.55  230.87  242.20  0.94
m5     2  0.67  0.33  170.40   170.40  354.57  102.43  0.61
m6     2  0.14  0.86   45.92   251.43   15.93  146.07  0.95
m7     2  0.65  0.35  297.46   297.46  347.48  181.34  0.71
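Clustering starts from pairwise distances between the models' parameter vectors. The exact model distance of Lauro, Romano and Giordano (2006) is not reproduced here; as a rough illustrative stand-in, the sketch below uses a plain Euclidean distance on the (p1, p2, m1, m2, s1, s2) vectors from the table above:

```python
import math

# Parameter vectors (p1, p2, m1, m2, s1, s2) from the clustering table
models = {
    "m1": (0.63, 0.37, 304.05, 304.05, 447.50, 233.92),
    "m2": (0.64, 0.36, 366.78, 1321.94, 210.03, 244.25),
    "m3": (0.53, 0.47, 170.76, 678.52, 110.08, 198.35),
    "m4": (0.48, 0.52, 369.46, 424.55, 230.87, 242.20),
    "m5": (0.67, 0.33, 170.40, 170.40, 354.57, 102.43),
    "m6": (0.14, 0.86, 45.92, 251.43, 15.93, 146.07),
    "m7": (0.65, 0.35, 297.46, 297.46, 347.48, 181.34),
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pairwise distance matrix between the beanplot models
names = sorted(models)
dist = {(i, j): euclidean(models[i], models[j])
        for i in names for j in names}

# Closest pair of distinct models: the first candidates to merge
# in an agglomerative clustering
closest = min(((i, j) for i in names for j in names if i < j),
              key=lambda p: dist[p])
```

Under this toy distance the closest pair is (m1, m7), the two models the slide singles out as sharing a very low gof; a hierarchical clustering would merge pairs in increasing order of distance.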

Figure: Clustering

Beanplot Constrained Clustering
The aim of the constrained clustering procedure is to find groups of beanplots (or models) which are similar over time. The final results can be used to detect relevant change points over time. Also in this case the relevant distance used is the model distance by Lauro, Romano and Giordano (2006). The results show a very unstable situation for the first three observations: here we can detect three change points. Then the period 4-5 and the period 6-8 show relevant similarities. Overall, periods 1, 2 and 3 are very risky because the gof level is comparatively not so high.

Beanplot Constrained Clustering

t  p1    p2    m1      m2      s1      s2      gof
1  0.34  0.66   29.07   93.17   12.25   43.28  0.97
2  0.65  0.35  297.46  297.46  347.48  181.34  0.71
3  0.61  0.39  289.85  631.93   64.24   93.33  0.97
4  0.62  0.38  114.29  306.53   55.69   39.72  0.97
5  0.56  0.44  301.16  301.16   86.11   54.34  0.99
6  0.54  0.46  493.68  715.68   71.54   42.63  0.99
7  0.93  0.07  701.92  824.54   51.98    9.82  1.00
8  0.76  0.24  626.18  792.14   48.66   49.18  1.00
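The contiguity constraint means only temporally adjacent groups may merge, so every cluster is a time interval and its boundaries are candidate change points. The sketch below is an illustrative simplification (Euclidean distance between boundary observations, not the slides' model distance) applied to the parameter vectors of the table above:

```python
import math

# Time-ordered parameter vectors (p1, p2, m1, m2, s1, s2), t = 1..8
series = [
    (0.34, 0.66, 29.07, 93.17, 12.25, 43.28),
    (0.65, 0.35, 297.46, 297.46, 347.48, 181.34),
    (0.61, 0.39, 289.85, 631.93, 64.24, 93.33),
    (0.62, 0.38, 114.29, 306.53, 55.69, 39.72),
    (0.56, 0.44, 301.16, 301.16, 86.11, 54.34),
    (0.54, 0.46, 493.68, 715.68, 71.54, 42.63),
    (0.93, 0.07, 701.92, 824.54, 51.98, 9.82),
    (0.76, 0.24, 626.18, 792.14, 48.66, 49.18),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def constrained_segments(data, n_segments):
    # Contiguity-constrained agglomeration: repeatedly merge the pair of
    # ADJACENT segments with the smallest boundary distance, so every
    # cluster remains a contiguous time interval.
    segments = [[i] for i in range(len(data))]
    while len(segments) > n_segments:
        gaps = [euclidean(data[segments[k][-1]], data[segments[k + 1][0]])
                for k in range(len(segments) - 1)]
        k = gaps.index(min(gaps))
        segments[k] = segments[k] + segments.pop(k + 1)
    return segments

segments = constrained_segments(series, 5)
```

With five segments this toy version recovers the segmentation described on the slide: periods 1, 2 and 3 stay separate, while 4-5 and 6-8 merge.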

Figure: Constrained Clustering

Beanplot Forecasting
In order to predict the observations related to the beanplot models over time we can use a forecasting procedure based on VAR models. The aim of the procedure is to predict each observation over time by choosing an adequate VAR model. The models take into account the weights based on the gof. The predicted parameters allow us to obtain the predicted models.

Beanplot Forecasting

t           p1    p2    m1      m2      s1      s2      gof
30          0.33  0.67   26.16   83.85   11.03   38.95  0.87
31          0.34  0.66   29.07   93.17   12.25   43.28  0.97
32          0.65  0.35  297.46  297.46  347.48  181.34  0.71
33          0.61  0.39  289.85  631.93   64.24   93.33  0.97
34          0.62  0.38  114.29  306.53   55.69   39.72  0.97
35          0.56  0.44  301.16  301.16   86.11   54.34  0.99
36          0.54  0.46  493.68  715.68   71.54   42.63  0.99
37          0.93  0.07  701.92  824.54   51.98    9.82  1.00
38          0.76  0.24  626.18  792.14   48.66   49.18  1.00
prediction  0.73  0.26  594.18  765.14   50.69   45.57  0.99
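The slides fit a VAR jointly on all parameter series; that is not reproduced here. As an illustrative univariate stand-in, each parameter series can be forecast one step ahead with an AR(1) fitted by ordinary least squares, shown below on the gof column of the table:

```python
def ar1_forecast(series):
    # Fit x_t = a + b * x_{t-1} by ordinary least squares and return
    # the one-step-ahead forecast from the last observed value.
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var
    a = my - b * mx
    return a + b * series[-1]

# gof series from the forecasting table (t = 30 ... 38)
gof = [0.87, 0.97, 0.71, 0.97, 0.97, 0.99, 0.99, 1.00, 1.00]
gof_hat = ar1_forecast(gof)
```

Applying the same idea to every column (or, as in the slides, a joint VAR) yields the full predicted parameter row, and hence the predicted beanplot model.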

Beanplot Forecasting
Figure: Forecasting: the real beanplot to be predicted (left) and the forecast (right)

Conclusions
The application of beanplots as symbolic data seems to be very fruitful for financial big data. The use of models based on the beanplots allows us to retain the relevant information through the parameters of the models. A fundamental point is the use of the error in weighting the different models and observations; in this context we have shown that the use of the error allows an improvement of the results. The different models allow us to detect relevant patterns in the data which can be exploited in various financial operations such as trading, risk management and so on. As a future development we will consider these methodologies in other contexts, for example control charts, in order to evaluate the stability of the markets and to build relevant alert systems.