Designing a Modern Data Warehouse + Data Lake


Designing a Modern Data Warehouse + Data Lake. Strategies & architecture options for implementing a modern data warehousing environment. Melissa Coates, Analytics Architect, SentryOne. Blog: sqlchick.com Twitter: @sqlchick. Presentation content last updated: March 11, 2017

Designing a Modern Data Warehouse + Data Lake. Agenda: discuss strategies & architecture options for: 1) Evolving to a Modern Data Warehouse 2) Data Lake Objectives, Challenges, & Implementation Options 3) The Logical Data Warehouse & Data Virtualization

Evolving to a Modern Data Warehouse

DW+BI Systems Used to Be Fairly Straightforward. Sources: Operational Data Store, Organizational Data (Sales, Inventory, etc), Third Party Data, Master Data. Batch ETL loads the Enterprise Data Warehouse and Data Marts, with an OLAP Semantic Layer on top. Outputs: Operational Reporting and Historical Analytical Reporting via the reporting tool of choice.

Modern Data Warehousing. Sources: Devices & Sensors, Social Media, Organizational Data, Third Party Data, Demographics. Batch ETL, Streaming, and Federated Queries feed Master Data, the Data Lake, Operational Data Store, Hadoop, Machine Learning, the Enterprise Data Warehouse, Data Marts, OLAP Semantic Layer, and In-Memory Model. Outputs: Near-Real-Time Monitoring, Self-Service Reports & Models, Data Science, Advanced Analytics, Mobile, Operational Reporting, Historical Analytical Reporting, and Alerts via the reporting tool of choice.

What Makes a Data Warehouse Modern: Variety of data sources; multi-structured data. Coexists with a data lake. Coexists with Hadoop. Larger data volumes; MPP. Multi-platform architecture; data virtualization + data integration. Support for all user types & levels. Flexible deployment, decoupled from development. Governance model & MDM. Promotion of self-service solutions. Near real-time data; Lambda architecture. Advanced analytics. Agile delivery. Cloud integration; hybrid environments. Automation & APIs. Data catalog; search ability. Scalable architecture. Analytics sandbox with promotability. Bimodal environment.

Growing an Existing DW Environment: Data Warehouse, Data Lake, Hadoop, In-Memory Model, NoSQL. Internal to the data warehouse: data modeling strategies, partitioning, clustered columnstore indexes, in-memory structures, MPP (massively parallel processing). Augment the data warehouse: complementary data storage & analytical solutions, cloud & hybrid solutions, data virtualization / virtual DW. -- Grow around your existing data warehouse --
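Two of the "internal" strategies above, partitioning and a clustered columnstore index, can be combined in one table definition. A minimal sketch, assuming SQL Server 2016 or later; the table and partition names are illustrative, not from the presentation:

```sql
-- Hypothetical fact table: partitioned by order date, stored as a
-- clustered columnstore index (inline index syntax, SQL Server 2016+).
CREATE PARTITION FUNCTION pfOrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01');

CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    OrderDate   date          NOT NULL,
    CustomerKey int           NOT NULL,
    SalesAmount decimal(18,2) NOT NULL,
    INDEX cci_FactSales CLUSTERED COLUMNSTORE
) ON psOrderDate (OrderDate);
```

Partitioning keeps loads and maintenance scoped to a slice of the data, while the columnstore gives the compression and scan performance an analytical workload needs.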

Multi-Structured Data. Big Data & Analytics Infrastructure: Data Lake, Hadoop, NoSQL. Sources: Web Logs, Social Media, Images, Audio, Video, Spatial, GPS, Devices, Sensors, Organizational Data. Objectives: 1) Storage for multi-structured data (JSON, XML, CSV...) with a polyglot persistence strategy 2) Integrate portions of the data into the data warehouse 3) Federated query access via the reporting tool of choice

Lambda Architecture. Speed Layer: low-latency data (Devices & Sensors -> Event Hub -> Stream Analytics -> Streaming Dashboard). Batch Layer: batch processing to support complex analysis (Organizational Data + Master Data -> Scheduled ETL -> Data Lake (Raw, Curated) -> Data Warehouse -> Data Mart(s)). Serving Layer: responds to queries (Machine Learning, Advanced Analytics, Reports, Dashboards via the reporting tool of choice). Objectives: Support a large volume of high-velocity data. Near real-time analysis + persisted history.

Larger Scale Data Warehouse: MPP. Massively Parallel Processing (MPP) operates on high volumes of data across distributed nodes (a Control Node coordinating multiple Compute Nodes over shared Storage). Shared-nothing architecture: each node has its own disk, memory, CPU. Decoupled storage and compute. Scale up compute nodes to increase parallelism. Integrates with relational & non-relational data.
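On an MPP platform, how rows are spread across the compute nodes is declared in the table definition. A minimal sketch using Azure SQL Data Warehouse syntax (one of the MPP products the deck discusses); the table name is illustrative:

```sql
-- Hypothetical distributed table: rows are hash-distributed on CustomerKey,
-- so joins and aggregations on that key run on each compute node in
-- parallel without shuffling data between nodes.
CREATE TABLE dbo.FactSales
(
    OrderDate   date          NOT NULL,
    CustomerKey int           NOT NULL,
    SalesAmount decimal(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (CustomerKey),
    CLUSTERED COLUMNSTORE INDEX
);
```

Choosing a good distribution key is the central design decision: a skewed key concentrates work on a few nodes and defeats the shared-nothing parallelism described above.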

Hybrid Architecture. Cloud: Social Media, CRM, Hadoop, MPP Data Warehouse, Data Mart, Machine Learning, reporting tool of choice. On-Premises: Sales System, Inventory System. Objectives: 1) Scale up MPP compute nodes during peak ETL data loads or high query volumes 2) Utilize existing on-premises data structures 3) Take advantage of cloud services for advanced analytics

Data Lake Objectives, Challenges & Implementation Options

Data Lake: a repository for analyzing large quantities of disparate sources of data in its native format (web, sensor, log, social media, images). One architectural platform to house all types of data: machine-generated data (ex: IoT, logs), human-generated data (ex: tweets, e-mail), traditional operational data (ex: sales, inventory).

Objectives of a Data Lake: Reduce up-front effort by ingesting data in any format without requiring a schema initially. Make acquiring new data easy, so it can be available for data science & analysis quickly. Store a large volume of multi-structured data in its native format.

Objectives of a Data Lake: Defer the work to schematize until after value & requirements are known. Achieve agility faster than a traditional data warehouse can. Speed up decision-making ability. Storage for additional types of data which were historically difficult to obtain.

Data Lake as a Staging Area for DW. Sources (CRM, Corporate, Social Media, Devices & Sensors) land in the Data Lake Store Raw: Staging Area, then flow to the Data Warehouse and the reporting tool of choice. Strategy: 1) Utilize the data lake as the landing area for the DW staging area, instead of the relational database. This reduces storage needs in the data warehouse and gives a practical use for data stored in the data lake.

Data Lake for Active Archiving. Sources (CRM, Corporate, Social Media, Devices & Sensors) land in the Data Lake Store (Raw: Staging Area, Active Archive) alongside the Data Warehouse and the reporting tool of choice. Strategy: data archival, with query ability available when needed. 1) Archival process based on data retention policy 2) Federated query to access current & historical data

Iterative Data Lake Pattern. 1) Ingest and store data indefinitely in its native format: acquire data with cost-effective storage. 2) Analyze in place to determine the value of the data ("schema on read"): analyze data on a case-by-case basis with scalable parallel processing ability. 3) For data of value: integrate with the data warehouse ("schema on write"), or use data virtualization: deliver data once requirements are fully known.

Data Lake Implementation. A data lake is a conceptual idea. It can be implemented with one or more technologies. HDFS (Hadoop Distributed File System) is a very common option for data lake storage; however, Hadoop is not a requirement for a data lake, and a data lake may also span more than one Hadoop cluster. NoSQL databases are also very common.

Multi-Technology Data Lake. Sources (CRM, Corporate, Social Media, Devices & Sensors) feed a Conceptual Data Lake: Data Lake Store Raw, DocumentDB Collection, Data Lake Store Standardized (Curated), then the Data Warehouse and the reporting tool of choice. Strategy: Use a conceptual data lake strategy with multiple technologies, each best suited to the data (polyglot persistence). 1) Ingest hierarchical, nested JSON data into DocumentDB 2) Standardize data into tabular text files

Coexistence of the Data Lake & Data Warehouse. Data Lake values: agility, flexibility, rapid delivery, exploration; less effort at data ingestion, more effort at data retrieval. Enterprise Data Warehouse values: governance, reliability, standardization, security; more effort at data ingestion, less effort at data retrieval.

Zones in a Data Lake: Transient/Temp Zone, Raw/Staging Zone, Curated Zone, Analytics Sandbox. Spanning all zones: metadata, security, governance, information management.

Raw Zone: The raw data zone is immutable to change. History is retained to accommodate future unknown needs. Staging may be a distinct area on its own. Supports any type of data: streaming, batch, one-time, full load, incremental load, etc.

Transient Zone: Useful when data quality or validity checks are necessary before data can be landed in the Raw Zone. All landing zones (Transient Zone, Raw Zone, Staging Area) are considered a "kitchen area" with highly limited access.

Curated Zone: Cleansed, organized data for data delivery: data consumption, federated queries, providing data to other systems. Most self-service data access occurs from the Curated Zone. Standard governance & security apply in the Curated Zone.

Analytics Sandbox: Data science and exploratory activities. Minimal governance of the Analytics Sandbox. Valuable efforts are promoted from the Analytics Sandbox to the Curated Zone or to the data warehouse.

Sandbox Solutions: Develop. Sources (CRM, Corporate, Social Media, Devices & Sensors, Flat Files) land in the Data Lake (Raw, Curated, Sandbox) alongside the Data Warehouse, used by the Data Scientist / Analyst. Objectives: 1) Utilize the sandbox area in the data lake for data preparation 2) Execution of R scripts from a local workstation for exploratory data science & advanced analytics scenarios

Sandbox Solutions: Operationalize. Sources (CRM, Corporate, Social Media, Devices & Sensors, Flat Files) land in the Data Lake (Raw, Curated, Sandbox) alongside the Data Warehouse and R Server. Objectives: 1) Trained model is promoted to run in a production server environment 2) Sandbox use is discontinued once the solution is promoted 3) Execution of R scripts from the server for operationalized data science & advanced analytics scenarios

Organizing the Data Lake. Plan the structure based on optimal data retrieval. The organization pattern should be self-documenting. Organization is frequently based upon: subject area, security boundaries, time partitioning, downstream app/purpose. The metadata capabilities of your technology will have a *big* impact on how you choose to handle organization. -- The objective is to avoid a chaotic "data swamp" --

Organizing the Data Lake, Example 1. Raw Zone hierarchy: Subject Area > Source > Object > Date Loaded > File(s), e.g. Sales > Salesforce > CustomerContacts > 2016 > 12 > 2016_12_01 CustCct.txt. Curated Zone hierarchy: Purpose > Type > Snapshot Date > File(s), e.g. Sales Trending Analysis > Summarized > 2016_12_01 > SalesTrend.txt. Pros: Subject area at top level, organization-wide, partitioned by time. Cons: No obvious security or organizational boundaries.

Organizing the Data Lake, Example 2. Raw Zone hierarchy: Organizational Unit > Subject Area > Source > Object > Date Loaded > File(s), e.g. East Division > Sales > Salesforce > CustomerContacts > 2016 > 12 > 2016_12_01 CustCct.txt. Curated Zone hierarchy: Organizational Unit > Purpose > Type > Snapshot Date > File(s), e.g. East Division > Sales Trending Analysis > Summarized > 2016_12_01 > SalesTrend.txt. Pros: Security at the organizational level, partitioned by time. Cons: Potentially siloed data, duplicated data.

Organizing the Data Lake. Other options which affect organization and/or metadata: Retention policy (temporary data, permanent data, applicable period (ex: project lifetime), etc). Business impact / criticality (high (HBI), medium (MBI), low (LBI), etc). Owner / steward / SME. Probability of access (recent/current data, historical data, etc). Confidentiality classification (public information, internal use only, supplier/partner confidential, personally identifiable information (PII), sensitive financial, sensitive intellectual property, etc).

Challenges of a Data Lake: People, Process, Technology

Challenges of a Data Lake: Technology. Complex, multilayered architecture; polyglot persistence strategy; architectures & toolsets are emerging and maturing. Unknown storage & scalability needs: can we realistically store everything? Cloud deployments are attractive when scale is undetermined. Working with un-curated data: inconsistent dates and data types, data type mismatches, missing or incomplete data, flex-field data which can vary per record, different granularities, incremental data loads, etc.

Challenges of a Data Lake: Technology. Performance: trade-offs between latency, scalability, & query performance; monitoring & auditing. Data retrieval: easy access for data consumers; organization of data to facilitate data retrieval; business metadata is *critical* for making sense of the data. Change management: file structure changes (inconsistent schema-on-read); integration with master data; inherent risks associated with a highly flexible repository; maintaining & updating over time (meeting future needs).

Challenges of a Data Lake: Process. Finding the right balance of agility (deferred work vs. up-front work); risk & complexity are still there, just shifted. Finding the optimal level of chaos which invites experimentation. When schema-on-read is appropriate (temporarily? permanently?). How the data lake will coexist with the data warehouse. How to operationalize self-service/sandbox work from analysts. Data quality: how to reconcile or confirm accuracy of results; data validation between systems.

Challenges of a Data Lake: Process. Governance & security: challenging to implement data governance; difficult to enforce standards; securing and obfuscating confidential data; meeting compliance and regulatory requirements. Ignoring established best practices for data management; new best practices are still evolving. Repeating the same data warehousing failures (ex: silos). Erosion of credibility due to disorganization & mismanagement. The "build it and they will come" mentality.

Challenges of a Data Lake: People. Redundant effort: little focus on reusability, consistency, standardization; time-consuming data prep & cleansing efforts which don't add analytical value; effort to operationalize self-service data prep processes; minimal sharing of prepared datasets, calculations, previous findings, & lessons learned among analysts. Expectations: high expectations for analysts to conduct their own data preparation, manipulation, integration, cleansing, and analysis; skills required to interact with data which is schema-less. Data stewardship: unclear data ownership & stewardship for each subject area or source.

Ways to Get Started with a Data Lake: 1. Data lake as a staging area for the DW 2. Offload archived data from the DW back to the data lake 3. Ingest a new type of data to allow time for longer-term planning 4. Promote valuable solutions from the data lake sandbox to the DW (or to the curated data zone in the data lake)

The Logical Data Warehouse & Data Virtualization

Conceptual Data Warehouse Layers. Foundational systems: servers, network. Storage: data persistence. Software: database management system (DBMS). User access: reporting tools, metadata, views into the data storage. Source: Philip Russom, TDWI

Logical Data Warehouse: a data virtualization abstraction layer over the same conceptual layers (foundational systems: servers, network; storage: data persistence; software: database management system (DBMS); user access: reporting tools, metadata, views into the data storage), with distributed storage & processing from databases, the data lake, Hadoop, NoSQL, etc.

Logical Data Warehouse. An LDW is a data warehouse which uses repositories, virtualization, and distributed processes in combination. 7 major components of an LDW: Data Virtualization and Distributed Processing (we will focus on these two aspects), Repository Management, Metadata Management, Taxonomy/Ontology Resolution, Auditing & Performance Services, Service Level Agreement Management. Source: Gartner

Data Virtualization. User queries access a data virtualization layer spanning: Big Data & Analytics Infrastructure (Data Lake, Hadoop, NoSQL), the Data Warehouse, Transactional Systems (ERP, CRM), Cloud Systems, Master Data, Third Party Data & Flat Files, Web Logs, Social Media, Images, Audio, Video, Spatial, GPS, Devices, Sensors.

Objectives of Data Virtualization: Add flexibility & speed to a traditional data warehouse. Make current data available quickly where it lives; useful when: data is too large to practically move, the data movement window is small, or data cannot legally move out of a geographic region. Enable user access to various data platforms. Reduce data latency; enable near real-time analytics. Reduce data redundancy & processing time. Facilitate a polyglot persistence strategy (use the best storage for the data).

Microsoft Data Virtualization Options. Azure (excluding PolyBase): Azure Data Lake Analytics --> data from Blob Storage, SQL DW, SQL DB, or SQL Server in a VM via U-SQL. Azure SQL DB --> data from another Azure SQL DB via elastic DB query. Azure Analysis Services --> data via DirectQuery mode. PolyBase: SQL Server 2016 --> data from Blob Storage or Hadoop. Azure SQL DW --> data from Blob Storage. APS/PDW --> data from Blob Storage or Hadoop. SQL Server (excluding PolyBase): SQL Server data files in Azure --> data from Azure Blob Storage. SQL Server stretched table --> data from history persisted in Azure. SQL Server linked server --> data from other tables/views, Excel, flat files, Hive. SQL Server OPENDATASOURCE, OPENROWSET, and OPENQUERY --> remote queries. SQL Server Analysis Services --> data via DirectQuery mode. SQL Server FileTable and FileStream --> data from the Windows file system. R Server DistributedR --> data from another DB or Hadoop. Microsoft Distributed Transaction Coordinator (MSDTC).

PolyBase with SQL Server 2016. Azure Blob Storage or a Hadoop cluster (Raw, Curated, Metadata Catalog) is loaded via ETL; PolyBase in SQL Server 2016 exposes it through tables, external tables, and an optional view for queries. The view + external tables serve as our data virtualization layer.

Two Ways of Using PolyBase. 1) A data virtualization technique for querying relational + multi-structured data in a single consolidated query. The objective is to *avoid* data movement. Example: insurance data in SQL Server (Driver Name, Car Model, State: Amanda Green, Civic, WA; Joe Brown, Escort, WA) is joined with telemetry data in Azure Blob Storage, producing T-SQL query results (Driver Name, Car Model, State, Avg Speed, Miles Driven: Amanda Green, Civic, WA, 58, 91; Joe Brown, Escort, WA, 25, 10).
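The consolidated query behind an example like this is ordinary T-SQL. A minimal sketch, assuming a hypothetical local table `dbo.Insurance` and a hypothetical external table `EXT.VehicleTelemetry` over the Blob Storage files (names and columns are illustrative, not from the slide):

```sql
-- One T-SQL statement joins relational insurance data with
-- multi-structured telemetry behind an external table; PolyBase
-- reads the external files at query time, so no data is copied.
SELECT  i.DriverName,
        i.CarModel,
        i.[State],
        AVG(t.Speed)       AS AvgSpeed,
        SUM(t.MilesDriven) AS MilesDriven
FROM    dbo.Insurance        AS i
JOIN    EXT.VehicleTelemetry AS t
        ON t.DriverName = i.DriverName
GROUP BY i.DriverName, i.CarModel, i.[State];
```

To the report author this looks like any other join; the virtualization layer hides where each side of the join physically lives.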

Two Ways of Using PolyBase. 2) Data movement from source to target. Currently PolyBase is the recommended method for loading an MPP system such as Azure SQL DW or APS (PDW) due to its inherent parallel processing behavior: the PolyBase script is submitted to the control node and the load runs across the compute nodes against storage in parallel.
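The usual shape of this load on Azure SQL DW is a CTAS (CREATE TABLE AS SELECT) over an external table. A minimal sketch; the staging table and the external table `ext.Sales` are hypothetical names:

```sql
-- PolyBase reads the external flat files on every compute node in
-- parallel, and CTAS materializes the result as a distributed internal
-- table -- no single-threaded bulk copy through the control node.
CREATE TABLE dbo.StageSales
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT *
FROM ext.Sales;   -- external table over files in Blob Storage
```

ROUND_ROBIN is a common choice for a staging table because it loads fastest; the data can be redistributed by hash key in a later step.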

PolyBase Objects (the slide screenshot highlights five numbered objects in the DDL script).

External Table Design. 1) First steps (after enabling PolyBase). 2) Don't leave passwords & keys in source control.

External Table Design. 3) and 4) Naming convention: include the type of file format in its name.

External Table Design. 5) Using an EXT schema just to segregate the DDL. Note there's no Dim or Fact prefix in this name because it'll be a straight reporting table.
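Since the DDL screenshots don't survive in this transcript, here is a hedged sketch of the five PolyBase objects the numbered callouts refer to, in SQL Server 2016 syntax. All names, the storage account, container, and key are placeholders, and the EXT schema is assumed to exist already:

```sql
-- 1) Master key protects the credential below
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

-- 2) Credential for the storage account (per the slide: don't commit real keys)
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCred
WITH IDENTITY = 'user', SECRET = '<storage-account-key>';

-- 3) External data source pointing at the container
CREATE EXTERNAL DATA SOURCE AzureBlobStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://<container>@<account>.blob.core.windows.net',
    CREDENTIAL = AzureStorageCred
);

-- 4) File format; per the naming convention, the format type is in the name
CREATE EXTERNAL FILE FORMAT DelimitedTextCommaFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- 5) External table in the EXT schema, over the curated files
CREATE EXTERNAL TABLE EXT.SalesSummary
(
    SaleDate    date,
    SalesAmount decimal(18,2)
)
WITH (
    LOCATION    = '/curated/sales/',
    DATA_SOURCE = AzureBlobStore,
    FILE_FORMAT = DelimitedTextCommaFormat
);
GO

-- Optional view: the data virtualization layer the earlier slide describes
CREATE VIEW dbo.SalesSummary
AS SELECT SaleDate, SalesAmount FROM EXT.SalesSummary;
```

Each object builds on the previous one, which is why the slides present them as numbered steps.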

Challenges of Data Virtualization: Performance; impact of adding reporting load on source systems. Limited ability to handle data quality & referential integrity issues. Complexity of the virtualization layer (ex: different data formats, query languages, data granularities). Change management & managing the lifecycle + impact of changes. Lack of historical data; inability to do point-in-time historical analysis. Consistent security, compliance & auditing. Real-time reporting can be confusing with its frequent data changes. Downtime of underlying data sources. Auditing & reconciling abilities across systems.

Wrap-Up & Questions

Growing an Existing DW Environment: Data Warehouse, Data Lake, Hadoop, In-Memory Model, NoSQL. Internal to the data warehouse: data modeling strategies, partitioning, clustered columnstore indexes, in-memory structures, MPP (massively parallel processing). Augmenting the data warehouse: new data storage & analytical solutions (polyglot persistence), cloud & hybrid solutions, data virtualization / virtual DW. Grow around your existing data warehouse.

Final Thoughts: Traditional data warehousing is still important, but needs to coexist with other platforms. Build around your existing DW infrastructure. Plan your data lake with data retrieval in mind. Expect to balance ETL with some (perhaps limited) data virtualization techniques in a multi-platform environment. Be fully aware of data lake & data virtualization challenges in order to craft your own achievable & realistic best practices. Plan to work in an agile fashion, leveraging self-service & sandbox solutions which can be promoted to the corporate solution.

Recommended Resources (Fundamentals and More Advanced). Recommended whitepaper: How to Build an Enterprise Data Lake: Important Considerations Before Jumping In, by Mark Madsen, Third Nature Inc.

Thank You! To download a copy of this presentation: SQLChick.com, Presentations & Downloads page. Melissa Coates, BI Architect, SentryOne (sentryone.com). Blog: sqlchick.com Twitter: @sqlchick. Creative Commons License: Attribution-NonCommercial-NoDerivative Works 3.0

Appendix A: Terminology

Terminology. Logical Data Warehouse: facilitates access to various source systems via data virtualization, distributed processing, and other system components. Data Virtualization: access to one or more distributed data sources without requiring the data to be physically materialized in another data structure. Data Federation: accesses & consolidates data from multiple distributed data stores.

Terminology. Polyglot Persistence: using the most effective data storage technology to handle different data storage needs. Schema on Write: structure is applied at design time, requiring additional up-front effort to formulate a data model. Schema on Read: structure is applied at query time rather than when the data is initially stored; deferred up-front effort facilitates agility.

Terminology: Defining the Components of a Modern Data Warehouse: http://www.sqlchick.com/entries/2017/1/9/defining-the-components-of-a-modern-data-warehouse-a-glossary

Appendix B: What Makes a Data Warehouse Modern

What Makes a Data Warehouse Modern: Variety of subject areas & data sources for analysis, with capability to handle large volumes of data. Expansion beyond a single relational DW/data mart structure to include Hadoop, a data lake, or NoSQL. Logical design across a multi-platform architecture balancing scalability & performance. Data virtualization in addition to data integration.

What Makes a Data Warehouse Modern: Support for all types & levels of users. Flexible deployment (including mobile) which is decoupled from the tool used for development. Governance model to support trust and security, and master data management. Support for promoting self-service solutions to the corporate environment.

What Makes a Data Warehouse Modern: Ability to facilitate near real-time analysis on high-velocity data (Lambda architecture). Support for advanced analytics. Agile delivery approach with a fast delivery cycle. Hybrid integration with cloud services. APIs for downstream access to data.

What Makes a Data Warehouse Modern: Some DW automation to improve speed and consistency, & to flexibly adapt to change. Data cataloging to facilitate data search & document business terminology. An analytics sandbox or workbench area to facilitate agility within a bimodal BI environment. Support for self-service BI to augment corporate BI: data discovery, data exploration, self-service data prep.

Appendix C: Challenges with Modern Data Warehousing

Challenges with Modern Data Warehousing: Agility. Reducing time to value. Minimizing chaos. Evolving & maturing technology. Balancing schema on write with schema on read. How strict to be with dimensional design?

Challenges with Modern Data Warehousing: Complexity. Hybrid scenarios. Multi-platform infrastructure. Ever-increasing data volumes. File type & format diversity. Real-time reporting needs. Effort & cost of data integration. Broad skillsets needed.

Challenges with Modern Data Warehousing: Balance with Self-Service Initiatives. Self-service solutions which challenge the centralized DW. Managing production delivery from IT and user-created solutions. Handling ownership changes (promotion) of valuable solutions.

Challenges with Modern Data Warehousing: The Never-Ending Challenges. Data quality. Master data. Security. Governance.