Polybase In Action Kevin Feasel Engineering Manager, Predictive Analytics ChannelAdvisor

Who Am I? What Am I Doing Here? Catallaxy Services Curated SQL We Speak Linux @feaselkl

Polybase Polybase is Microsoft's newest technology for connecting to remote servers. It started by letting you connect to Hadoop and has expanded since then to include Azure Blob Storage. Polybase is also the best method to load data into Azure SQL Data Warehouse.

Polybase Targets: SQL Server to Hadoop (Hortonworks or Cloudera, on-prem or IaaS); SQL Server to Azure Blob Storage; Azure Blob Storage to Azure SQL Data Warehouse. In all three cases, you can use the T-SQL you know rather than a similar SQL-like language (e.g., HiveQL or SparkSQL) or some completely different language.

Polybase Targets SQL Server 2019: SQL Server to SQL Server; SQL Server to Oracle; SQL Server to MongoDB; SQL Server to Teradata; SQL Server to ODBC (e.g., Spark).

Massive Parallel Processing Polybase extends the idea of Massively Parallel Processing (MPP) to SQL Server. SQL Server is a classic "scale-up" technology: if you want more power, add more RAM/CPUs/resources to the single server. Hadoop is a great example of an MPP system: if you want more power, add more servers; the system will coordinate processing.

Why MPP? It is cheaper to scale out than scale up: 10 systems with 256 GB of RAM and 8 cores apiece are a lot cheaper than a single system with 2.5 TB of RAM and 80 cores. At the limit, you eventually run out of room to scale up, but scale-out is much more practical: you can scale out to 2 petabytes of RAM, but good luck finding a single server that supports that amount! There is additional complexity involved, but MPP systems let you move beyond the power of a single server.

Polybase As MPP MPP requires a head node and 1 or more compute nodes. Polybase lets you use SQL Servers as the head and compute nodes. Scale-out servers must be on an Active Directory domain. The head node must be Enterprise Edition, though the compute nodes can be Standard Edition. Polybase lets SQL Server compute nodes talk directly to Hadoop data nodes, perform aggregations, and then return results to the head node. This removes the classic SQL Server single point of contention.

Timeline Introduced in the SQL Server Parallel Data Warehouse (PDW) edition back in 2010. Expanded in the SQL Server Analytics Platform System (APS) in 2012. Released to the "general public" in SQL Server 2016, with most support being in Enterprise Edition. Extended support for additional technologies (like Oracle, MongoDB, etc.) will be available in SQL Server 2019.

Motivation Today's talk will focus on using Polybase to integrate SQL Server 2016/2017 with Hadoop and Azure Blob Storage. We will use a couple smaller data sources to give you an idea of how Polybase works. Despite the size of the demos, Polybase works best with a significant number of compute nodes and Hadoop works best with a significant number of data nodes.

Installation Pre-Requisites 1. SQL Server 2016 or later, Enterprise or Developer Edition 2. Java Runtime Environment 7 Update 51 or later (get the latest version of 8 or 9; using JRE 9 requires SQL Server 2017 CU4) 3. Machines must be on a domain if you want to use scale-out 4. Polybase may only be installed once per server. If you have multiple instances, choose one. You can enable it on multiple VMs, however.

Installation Select the New SQL Server stand-alone installation link in the SQL Server installer:

Installation When you get to feature selection, check the PolyBase Query Service for External Data box:

Installation If you get the following error, you didn't install the Java Runtime Environment. If you have JRE 9, you need SQL Server 2017 CU4 or later for SQL Server to recognize this.

Installation For standalone installation, select the first radio button. This selection does not require your machine be connected to a domain.

Installation The Polybase engine and data movement service accounts are NETWORK SERVICE accounts by default. There are no virtual accounts for Polybase.

Installation After installation is complete, run the following against the SQL Server instance:
EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
GO
RECONFIGURE;
GO
Set the value to 6 for Cloudera's Hadoop distribution, or 7 for Hortonworks or Azure Blob Storage.
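As a quick sanity check (a minimal sketch using the standard catalog view), you can confirm the setting took; note that the new value only applies after restarting SQL Server and the Polybase services:

-- Verify the 'hadoop connectivity' setting after RECONFIGURE.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'hadoop connectivity';
-- The change takes effect only after restarting SQL Server along with the
-- Polybase Engine and Polybase Data Movement services.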

Hadoop Configuration First, we need to make sure our Hadoop and SQL Server configuration settings are in sync. We need to modify the yarn-site.xml and mapred-site.xml configuration files. If you do not do this correctly, then MapReduce jobs will fail!

Hadoop Configuration You will need to find your Hadoop configuration folder that came as part of the Polybase installation. By default, that is at: C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf Inside this folder, there are two files we care about.

Hadoop Configuration Next, go looking for your Hadoop installation directory. On HDP, you'll find it at: /usr/hdp/[version]/hadoop/conf/ Note that the Polybase docs use /usr/hdp/current, but this is a bunch of symlinks with the wrong directory structure.

Hadoop Configuration Modify yarn-site.xml and change the yarn.application.classpath property. For the Hortonworks distribution of Hadoop (HDP), you'll see a series of values like: <value>/usr/hdp/2.4.3.0-227/hadoop/*,/usr/hdp/2.4.3.0-227/hadoop/lib/*, </value> Replace 2.4.3.0-227 with your HDP version.

Hadoop Configuration Include the following snippet in your mapred-site.xml file:
<property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
</property>
<property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
</property>
<property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
</property>
Without this configured, you will be unable to perform MapReduce operations on Hadoop.

Polybase Basics In this section, we will look at three new constructs that Polybase introduces: external data sources, external file formats, and external tables.

External Data Source External data sources allow you to point to another system. There are several external data sources, and we will look at two today.

External Data Source
CREATE EXTERNAL DATA SOURCE [HDP] WITH
(
    TYPE = HADOOP,
    LOCATION = N'hdfs://sandbox.hortonworks.com:8020',
    RESOURCE_MANAGER_LOCATION = N'sandbox.hortonworks.com:8050'
);
The LOCATION is the NameNode address and port and is needed for Hadoop filesystem operations. RESOURCE_MANAGER_LOCATION is the YARN resource manager address and port and is needed for predicate pushdown.

External File Format External file formats explain the structure of a data set. There are several file formats available to us.

External File Format: Delimited File Delimited files are the simplest to understand but tend to be the least efficient.
CREATE EXTERNAL FILE FORMAT file_format_name WITH
(
    FORMAT_TYPE = DELIMITEDTEXT
    [, FORMAT_OPTIONS ( <format_options> [, ...n ] ) ]
    [, DATA_COMPRESSION = { 'org.apache.hadoop.io.compress.GzipCodec'
                          | 'org.apache.hadoop.io.compress.DefaultCodec' } ]
);

External File Format: Delimited File
<format_options> ::=
{
    FIELD_TERMINATOR = field_terminator
    | STRING_DELIMITER = string_delimiter
    | DATE_FORMAT = datetime_format
    | USE_TYPE_DEFAULT = { TRUE | FALSE }
}

External File Format: RCFile Record Columnar files are an early form of columnar storage.
CREATE EXTERNAL FILE FORMAT file_format_name WITH
(
    FORMAT_TYPE = RCFILE,
    SERDE_METHOD = { 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
                   | 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe' }
    [, DATA_COMPRESSION = 'org.apache.hadoop.io.compress.DefaultCodec' ]
);

External File Format: ORC Optimized Row Columnar files are strictly superior to RCFile.
CREATE EXTERNAL FILE FORMAT file_format_name WITH
(
    FORMAT_TYPE = ORC
    [, DATA_COMPRESSION = { 'org.apache.hadoop.io.compress.SnappyCodec'
                          | 'org.apache.hadoop.io.compress.DefaultCodec' } ]
);

External File Format: Parquet Parquet files are also columnar. Cloudera prefers Parquet, whereas Hortonworks prefers ORC.
CREATE EXTERNAL FILE FORMAT file_format_name WITH
(
    FORMAT_TYPE = PARQUET
    [, DATA_COMPRESSION = { 'org.apache.hadoop.io.compress.SnappyCodec'
                          | 'org.apache.hadoop.io.compress.GzipCodec' } ]
);

External File Format: Comparison
Method    | Good           | Bad                                                   | Best Uses
Delimited | Easy to use    | Less efficient, slower performance                    | Easy Mode
RC File   | Columnar       | Strictly superior options exist                       | Don't use this
ORC       | Great agg perf | Columnar not always a good fit; slower to write       | Non-nested files with aggregations of subsets of columns
Parquet   | Great agg perf | Columnar not always a good fit; often larger than ORC | Nested data

External Tables External tables use external data sources and external file formats to point to some external resource and visualize it as a table.

External Tables
CREATE EXTERNAL TABLE [dbo].[secondbasemen]
(
    [FirstName] [VARCHAR](50) NULL,
    [LastName] [VARCHAR](50) NULL,
    [Age] [INT] NULL,
    [Throws] [VARCHAR](5) NULL,
    [Bats] [VARCHAR](5) NULL
)
WITH
(
    DATA_SOURCE = [HDP],
    LOCATION = N'/tmp/ootp/secondbasemen.csv',
    FILE_FORMAT = [TextFileFormat],
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 5
);

External Tables External tables appear to end users just like normal tables: they have a two-part name (schema and table) and even show up in Management Studio, though in an External Tables folder.
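You can also see these objects from the metadata side; a minimal sketch using the standard catalog views for external tables, data sources, and file formats:

-- List each external table along with its data source and file format.
SELECT
    t.name AS TableName,
    t.location AS ExternalLocation,
    ds.name AS DataSourceName,
    ff.name AS FileFormatName
FROM sys.external_tables t
    INNER JOIN sys.external_data_sources ds ON t.data_source_id = ds.data_source_id
    INNER JOIN sys.external_file_formats ff ON t.file_format_id = ff.file_format_id;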

Demo Time

Querying Hadoop Once we have created an external table, we can write queries against it just like any other table.
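For example, a query against the dbo.secondbasemen external table defined earlier reads like any other SELECT (an illustrative sketch; the filter and ordering are arbitrary):

SELECT TOP(10)
    sb.FirstName,
    sb.LastName,
    sb.Age,
    sb.Bats,
    sb.Throws
FROM dbo.secondbasemen sb
WHERE sb.Age >= 30
ORDER BY sb.LastName;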

Demo Time

Querying Hadoop MapReduce In order for us to be able to perform a MapReduce operation, we need the external data source to be set up with a resource manager. We also need one of two things: 1. The internal cost must be high enough (based on external table statistics) to run a MapReduce job. 2. We force a MapReduce job by using the OPTION(FORCE EXTERNALPUSHDOWN) query hint. Note that there is no "cost threshold for MapReduce," so the non-forced decision is entirely under the Polybase engine's control.
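A minimal sketch of the hint, reusing the dbo.secondbasemen external table from before (the aggregation itself is arbitrary):

-- Force a MapReduce job even if the optimizer would not choose one.
SELECT sb.Throws, COUNT(*) AS NumberOfPlayers
FROM dbo.secondbasemen sb
GROUP BY sb.Throws
OPTION (FORCE EXTERNALPUSHDOWN);

-- Conversely, OPTION (DISABLE EXTERNALPUSHDOWN) prevents a MapReduce job.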

Querying Hadoop MapReduce Functionally, MapReduce queries operate the same as basic queries. Aside from the query hint, there is no special syntax for MapReduce operations and end users don't need to think about it. WARNING: if you are playing along at home, your Hadoop sandbox should have at least 12 GB of RAM allocated to it. This is because Polybase creates several 1.5 GB containers on top of memory requirements for other Hadoop services.

Demo Time

Querying Hadoop Statistics Although external tables have none of their data stored on SQL Server, the database optimizer can still make smart decisions by using statistics.

Querying Hadoop Statistics Important notes regarding statistics: 1. Stats are not auto-created. 2. Stats are not auto-updated. 3. The only way to update stats is to drop and re-create them. 4. SQL Server generates stats by bringing the data over, so you must have enough disk space! If you sample, you only need to bring that percentage of rows down.
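A minimal sketch of creating (and later refreshing) statistics on the dbo.secondbasemen external table; the column and statistics names are illustrative:

-- Create sampled statistics on an external table column.
CREATE STATISTICS s_secondbasemen_age
    ON dbo.secondbasemen (Age)
    WITH SAMPLE 25 PERCENT;

-- There is no UPDATE STATISTICS for external tables: drop and re-create instead.
DROP STATISTICS dbo.secondbasemen.s_secondbasemen_age;
CREATE STATISTICS s_secondbasemen_age
    ON dbo.secondbasemen (Age)
    WITH FULLSCAN;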

Querying Hadoop Statistics Statistics are stored in the same location as any other table's statistics, and the optimizer uses them the same way.

Demo Time

Querying Hadoop Data Insertion Not only can we select data from Hadoop, we can also write data to Hadoop. We are limited to INSERT operations. We cannot update or delete data using Polybase.
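A hedged sketch of what insertion looks like, mirroring the Blob Storage demo later in the deck and reusing the dbo.secondbasemen external table; note that the instance-level 'allow polybase export' setting must be enabled before external INSERT works:

-- Enable INSERT into external tables (off by default).
EXEC sp_configure @configname = 'allow polybase export', @configvalue = 1;
RECONFIGURE;
GO

-- Write rows from a local table out to the external table's location.
INSERT INTO dbo.secondbasemen
SELECT sb.FirstName, sb.LastName, sb.Age, sb.Throws, sb.Bats
FROM Player.SecondBasemen sb;

-- UPDATE and DELETE against external tables are not supported.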

Demo Time

Azure Blob Storage Hadoop is not the only data source we can integrate with using Polybase. We can also insert and read data in Azure Blob Storage. The basic constructs of external data source, external file format, and external table are the same, though some of the options are different.

Azure Blob Storage Create an external data source along with a database scoped credential (for secure access to this blob):
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '{password}';
GO
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'cspolybase', SECRET = '{access key}';
GO
CREATE EXTERNAL DATA SOURCE WASBFlights WITH
(
    TYPE = HADOOP,
    LOCATION = 'wasbs://csflights@cspolybase.blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);

Azure Blob Storage External file formats are the same between Hadoop and Azure Blob Storage. CREATE EXTERNAL FILE FORMAT [CsvFileFormat] WITH ( FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS ( FIELD_TERMINATOR = N',', USE_TYPE_DEFAULT = True ) );

Azure Blob Storage External tables are similar to Hadoop as well.
CREATE EXTERNAL TABLE [dbo].[flights2008] (...) WITH
(
    LOCATION = N'historical/2008.csv.bz2',
    DATA_SOURCE = WASBFlights,
    FILE_FORMAT = CsvFileFormat,
    -- Up to 5000 rows can have bad values before Polybase returns an error.
    REJECT_TYPE = Value,
    REJECT_VALUE = 5000
);

Azure Blob Storage Start with a set of files:

Azure Blob Storage After creating the table, select top 10:
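The query behind that screenshot would look something like this (a sketch, assuming the dbo.flights2008 external table defined above):

SELECT TOP(10) *
FROM dbo.flights2008;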

Azure Blob Storage Running an expensive aggregation query: SELECT fa.[year], COUNT(1) AS NumberOfRecords FROM dbo.flightsall fa GROUP BY fa.[year] ORDER BY fa.[year];

Azure Blob Storage While we're running the expensive aggregation query, we can see that the mpdwsvc app chews up CPU and memory:

Azure Blob Storage Create a table for writing to blob storage: CREATE EXTERNAL TABLE [dbo].[secondbasemenwasb] (...) WITH ( DATA_SOURCE = [WASBFlights], LOCATION = N'ootp/', FILE_FORMAT = [CsvFileFormat], REJECT_TYPE = VALUE, REJECT_VALUE = 5 )

Azure Blob Storage Insert into the table: INSERT INTO dbo.secondbasemenwasb SELECT sb.firstname, sb.lastname, sb.age, sb.bats, sb.throws FROM Player.SecondBasemen sb;

Azure Blob Storage Eight files are created:

Azure Blob Storage Multiple uploads create separate file sets:

Other Azure Offerings Azure SQL DW Polybase features prominently in Azure SQL Data Warehouse, as Polybase is the best method for getting data into an Azure SQL DW cluster.

Other Azure Offerings Azure SQL DW Access via SQL Server Data Tools, not SSMS:

Other Azure Offerings Azure SQL DW Once connected, we can see the database.

Other Azure Offerings Azure SQL DW Create an external data source to Blob Storage:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '{password}';
GO
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'cspolybase', SECRET = '{access key}';
GO
CREATE EXTERNAL DATA SOURCE WASBFlights WITH
(
    TYPE = HADOOP,
    LOCATION = 'wasbs://csflights@cspolybase.blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);

Other Azure Offerings Azure SQL DW Create an external file format: CREATE EXTERNAL FILE FORMAT [CsvFileFormat] WITH ( FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS ( FIELD_TERMINATOR = N',', USE_TYPE_DEFAULT = True ) ); GO

Other Azure Offerings Azure SQL DW Create an external table:
CREATE EXTERNAL TABLE [dbo].[flights2008] (...) WITH
(
    LOCATION = N'historical/2008.csv.bz2',
    DATA_SOURCE = WASBFlights,
    FILE_FORMAT = CsvFileFormat,
    -- Up to 5000 rows can have bad values before Polybase returns an error.
    REJECT_TYPE = Value,
    REJECT_VALUE = 5000
);

Other Azure Offerings Azure SQL DW Blob Storage data retrieval isn't snappy:

Other Azure Offerings Azure SQL DW Use CTAS syntax to create an Azure SQL DW table:
CREATE TABLE [dbo].[Flights2008DW] WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(tailnum)
)
AS SELECT * FROM dbo.flights2008;
GO

Other Azure Offerings Azure SQL DW Azure SQL Data Warehouse data retrieval is snappy:

Other Azure Offerings Azure SQL DW We can export data to Azure Blob Storage:
CREATE EXTERNAL TABLE [dbo].[cmhflights] WITH
(
    LOCATION = N'columbus/',
    DATA_SOURCE = WASBFlights,
    FILE_FORMAT = CsvFileFormat,
    -- Up to 5000 rows can have bad values before Polybase returns an error.
    REJECT_TYPE = Value,
    REJECT_VALUE = 5000
)
AS SELECT * FROM dbo.flightsalldw WHERE dest = 'CMH';
GO

Other Azure Offerings Azure SQL DW This CETAS syntax lets us write out the result set:

Other Azure Offerings Azure SQL DW CETAS created 60 files, one for each Azure SQL DW distribution:

Other Azure Offerings Data Lake Polybase can only read from Azure Data Lake Storage if you are pulling data into Azure SQL Data Warehouse. The general recommendation for SQL Server is to use U-SQL and Azure Data Lake Analytics to pull data someplace where SQL Server can read it.

Other Azure Offerings HDInsight Polybase is not supported in Azure HDInsight. Polybase requires access to ports that are not available in an HDInsight cluster. The general recommendation is to use Azure Blob Storage as an intermediary between SQL Server and HDInsight.

Other Azure Offerings SQL DB Polybase concepts like external tables drive Azure SQL Database's cross-database query support. Despite this, we cannot use Polybase to connect to Hadoop or Azure Blob Storage from Azure SQL Database.
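For contrast, a minimal sketch of the Azure SQL Database (elastic query) flavor of an external data source, which uses TYPE = RDBMS rather than HADOOP; the server, database, and credential names here are placeholders:

-- Azure SQL Database elastic query: the external data source points at
-- another Azure SQL Database rather than a Hadoop cluster or blob store.
CREATE DATABASE SCOPED CREDENTIAL RemoteDbCredential
WITH IDENTITY = '{sql login}', SECRET = '{password}';
GO
CREATE EXTERNAL DATA SOURCE RemoteDb WITH
(
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',  -- placeholder server name
    DATABASE_NAME = 'RemoteDatabase',             -- placeholder database name
    CREDENTIAL = RemoteDbCredential
);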

Issues -- Docker Polybase has significant issues connecting to Dockerized Hadoop nodes. For this reason, I do not recommend using the HDP 2.5 or 2.6 sandboxes, either in the Azure marketplace or on-prem. Instead, I recommend building your own Hadoop VM or machine using Ambari.

Issues -- MapReduce Current-gen Polybase supports direct file access and MapReduce jobs in Hadoop. It does not support connecting to Hive warehouses, using Tez, or using Spark. Because Polybase's Hadoop connector does not support these, it must fall back on a relatively slow method for data access.

Issues File Formats Polybase only supports files without in-text newlines. This makes it impractical for parsing long text columns which may include newlines. Polybase is limited in its file format support and does not support the Avro file format, which is a superior rowstore data format.

Big Data Clusters Next-Gen Polybase The next generation of Polybase involves being able to connect to Oracle, Elasticsearch, MongoDB, Teradata, and anything with an ODBC interface (like Spark). This is now available in the SQL Server 2019 public preview. The goal is to make SQL Server a data hub for interaction with various technologies and systems, with SQL Server as a virtualization layer.
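As a rough sketch of where the syntax is headed in the SQL Server 2019 public preview (server name and credential are placeholders), an external data source over Oracle drops the HADOOP type in favor of a connection-string-style LOCATION:

-- SQL Server 2019 preview: an external data source pointing at Oracle.
CREATE DATABASE SCOPED CREDENTIAL OracleCredential
WITH IDENTITY = 'oracle_user', SECRET = '{password}';
GO
CREATE EXTERNAL DATA SOURCE OracleServer WITH
(
    LOCATION = 'oracle://oracleserver.contoso.com:1521',  -- placeholder host
    CREDENTIAL = OracleCredential
);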

For Further Thought Some interesting uses of Polybase: hot/cold partitioned views; a Hadoop-based data lake enriched by SQL data; "glacial" data in Azure Blob Storage; a replacement for linked servers (with Polybase vNext).
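A minimal sketch of the hot/cold idea: recent ("hot") rows live in a regular SQL Server table, older ("cold") rows live in an external table over Hadoop or Blob Storage, and a view unions them. Table and column names here are made up for illustration:

-- Hot data stays local; cold data sits behind an external table.
CREATE VIEW dbo.AllSales AS
    SELECT SaleDate, CustomerID, Amount
    FROM dbo.Sales_Hot          -- regular SQL Server table
    UNION ALL
    SELECT SaleDate, CustomerID, Amount
    FROM dbo.Sales_Cold;        -- external table over Hadoop or Blob Storage
GO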

Wrapping Up Polybase was one of the key SQL Server 2016 features. There is still room for growth (and a team hard at work), but it is a great integration point between SQL Server and Hadoop / Azure Blob Storage. To learn more, go here: https://csmore.info/on/polybase And for help, contact me: feasel@catallaxyservices.com @feaselkl