Apache Kudu. Zbigniew Baranowski
1 Apache Kudu Zbigniew Baranowski
2 Intro
3 What is KUDU?
- New storage engine for structured data (tables) - does not use HDFS!
- Columnar store
- Mutable (insert, update, delete)
- Written in C++
- Apache-licensed open source
- Quite new: 1.0 version recently released (first commit on October 11th, 2012) - and still immature?
4 KUDU tries to fill the gap
- HDFS excels at:
  - Scanning large amounts of data at speed
  - Accumulating data with high throughput
- HBASE (on HDFS) excels at:
  - Fast random lookups by key
  - Making data mutable
5 Table-oriented storage
- A Kudu table has an RDBMS-like schema
  - Primary key (one or many columns), no secondary indexes
  - Finite and constant number of columns (unlike HBase)
  - Each column has a name and a type: boolean, int(8,16,32,64), float, double, timestamp, string, binary
- Horizontally partitioned (range, hash)
  - Partitions are called tablets
  - Tablets can have 3 or 5 replicas
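As a rough illustration of how the hash and range partitioning above are expressed through the Java client, here is a minimal sketch. The master address, table name and partition bounds are made up, the sketch uses the org.apache.kudu package names of more recent releases, and the exact builder methods may differ slightly between Kudu versions.

import java.util.Arrays;
import java.util.List;
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Schema;
import org.apache.kudu.Type;
import org.apache.kudu.client.CreateTableOptions;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.PartialRow;

public class PartitioningSketch {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.example.com:7051").build();

    // Two-column primary key: (runnumber, eventnumber)
    List<ColumnSchema> columns = Arrays.asList(
        new ColumnSchema.ColumnSchemaBuilder("runnumber", Type.INT64).key(true).build(),
        new ColumnSchema.ColumnSchemaBuilder("eventnumber", Type.INT64).key(true).build(),
        new ColumnSchema.ColumnSchemaBuilder("datatype", Type.STRING).nullable(true).build());
    Schema schema = new Schema(columns);

    // Hash-partition on runnumber into 8 buckets, and additionally range-partition
    // on runnumber so that each run interval gets its own set of tablets.
    CreateTableOptions options = new CreateTableOptions()
        .addHashPartitions(Arrays.asList("runnumber"), 8)
        .setRangePartitionColumns(Arrays.asList("runnumber"))
        .setNumReplicas(3);
    PartialRow lower = schema.newPartialRow();
    lower.addLong("runnumber", 0);
    PartialRow upper = schema.newPartialRow();
    upper.addLong("runnumber", 1000000);
    options.addRangePartition(lower, upper);   // covers runnumber in [0, 1000000)

    client.createTable("partitioning_sketch", schema, options);
    client.shutdown();
  }
}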
6 Data consistency
- Writing
  - Single-row mutations are done atomically across all columns
  - No multi-row ACID transactions
- Reading
  - Tuneable freshness of the data: read whatever is available, or wait until all changes committed to the WAL are available
  - Snapshot consistency: changes made during a scan are not reflected in the results; point-in-time queries are possible (based on a provided timestamp)
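A hedged sketch of how these read semantics surface in the Java client: READ_LATEST returns whatever the tablet server has, while READ_AT_SNAPSHOT gives the consistent, point-in-time view described above. Master address, table name and timestamp are placeholders.

import org.apache.kudu.client.AsyncKuduScanner.ReadMode;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduScanner;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.RowResult;
import org.apache.kudu.client.RowResultIterator;

public class SnapshotReadSketch {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.example.com:7051").build();
    KuduTable table = client.openTable("kudu_example");

    // Scan a consistent snapshot as of one minute ago (timestamps are in microseconds).
    long snapshotMicros = (System.currentTimeMillis() - 60000L) * 1000L;
    KuduScanner scanner = client.newScannerBuilder(table)
        .readMode(ReadMode.READ_AT_SNAPSHOT)
        .snapshotTimestampMicros(snapshotMicros)
        .build();

    while (scanner.hasMoreRows()) {
      RowResultIterator rows = scanner.nextRows();
      while (rows.hasNext()) {
        RowResult row = rows.next();
        System.out.println(row.rowToString());
      }
    }
    client.shutdown();
  }
}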
7 Kudu simplifies the Big Data deployment model for online analytics (low-latency ingestion and access)
[Diagram: classical low-latency design - stream sources send events to a staging area that is flushed immediately (indexed data, fast data access) and are periodically flushed as big files to HDFS for batch processing]
8 Implementing low latency with Kudu
[Diagram: stream sources write events directly to Kudu, which serves both batch processing and fast data access from the same store]
9 Kudu Architecture
10 Architecture overview
- Master server (can be multiple masters for HA)
  - Stores metadata: table definitions
  - Tablet directory (tablet locations)
  - Coordinates cluster reconfigurations
- Tablet servers (worker nodes)
  - Write and read tablets, stored on local disks (no HDFS)
  - Track the status of tablet replicas (followers)
  - Replicate the data to followers
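Since the master only serves metadata (table definitions and the tablet directory), a client contacts it for catalog operations and then reads and writes data directly against the tablet servers. A minimal sketch of such a catalog query, assuming the standard Kudu Java client and a made-up master address:

import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.ListTablesResponse;

public class MasterMetadataSketch {
  public static void main(String[] args) throws Exception {
    // Metadata requests go to the master; the table data itself never flows through it.
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.example.com:7051").build();

    ListTablesResponse tables = client.getTablesList();
    for (String name : tables.getTablesList()) {
      System.out.println("table: " + name);
    }
    client.shutdown();
  }
}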
11 Tables and tablets
[Diagram: the master holds a map of table TEST's tablets - for each TabletID a leader and two followers, e.g. TEST1 on TS1 (leader), TS2, TS3; TEST2 on TS4 (leader), TS1, TS2; TEST3 on TS3 (leader), TS4, TS1 - with the replicas spread over TabletServer1-TabletServer4]
12 Data change propagation in Kudu (Raft consensus)
[Diagram: the client sends a write to the tablet's leader replica on tablet server X; the leader commits it to its WAL and replicates it to the follower replicas on tablet servers Y and Z, which commit it to their own WALs]
13 Insert into a tablet (without uniqueness check)
- INSERTs go to the MemRowSet: a B+tree with leaves sorted by primary key, holding full rows (Col1, Col2, Col3)
- The MemRowSet is flushed into DiskRowSets (32MB each): a columnar store encoded similarly to Parquet, with rows sorted by PK
- Each DiskRowSet keeps its PK {min, max} and Bloom filters for PK ranges, stored in a cached B-tree
- An interval tree keeps track of the PK ranges covered by the DiskRowSets
- There might be thousands of DiskRowSets per tablet
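The min/max ranges and Bloom filters matter because key lookups (for example the uniqueness check a real insert performs, or random reads) would otherwise have to consult every DiskRowSet. The following is a conceptual sketch of that pruning logic, not Kudu source code; the class and field names are invented for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Conceptual model only: shows why per-DiskRowSet PK ranges and Bloom filters
// let a tablet server skip most sets when checking whether a key exists.
public class DiskRowSetPruningSketch {

  static class DiskRowSet {
    final long minPk, maxPk;                   // PK {min, max} kept per DiskRowSet
    final Predicate<Long> bloomMightContain;   // stand-in for the PK Bloom filter

    DiskRowSet(long minPk, long maxPk, Predicate<Long> bloomMightContain) {
      this.minPk = minPk;
      this.maxPk = maxPk;
      this.bloomMightContain = bloomMightContain;
    }
  }

  // Returns the DiskRowSets that actually need a (more expensive) index/disk lookup.
  static List<DiskRowSet> candidatesFor(long pk, List<DiskRowSet> sets) {
    List<DiskRowSet> candidates = new ArrayList<>();
    for (DiskRowSet set : sets) {
      if (pk < set.minPk || pk > set.maxPk) continue;   // pruned by the min/max interval check
      if (!set.bloomMightContain.test(pk)) continue;    // pruned by the Bloom filter
      candidates.add(set);                              // only these are read from disk
    }
    return candidates;
  }
}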
14 DiskRowSet compaction
[Diagram: DiskRowSet1 (32MB, PK {A, G}) and DiskRowSet2 (32MB, PK {B, E}) are compacted into DiskRowSet1 (32MB, PK {A, D}) and DiskRowSet2 (32MB, PK {E, G})]
- Periodic task
- Removes deleted rows
- Reduces the number of sets with overlapping PK ranges
- Does not create bigger DiskRowSets: the 32MB size of each DRS is preserved
15 How columns are stored on disk (DiskRowSet)
[Diagram: a PK B-tree index maps the primary key to a row offset; per-column B-tree indexes map row offsets to pages; each column (Column1, Column2, Column3) is stored as a sequence of value pages with page metadata, ~256KB per page]
- Pages are encoded with a variety of encodings, such as dictionary encoding, bitshuffle, or RLE
- Pages can be compressed: Snappy, LZ4 or ZLib
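Encoding and compression are per-column choices made when the schema is defined; a small illustrative sketch with the Java client (column names are reused from the example table later in this deck, and the particular choices are arbitrary):

import org.apache.kudu.ColumnSchema;
import org.apache.kudu.ColumnSchema.CompressionAlgorithm;
import org.apache.kudu.ColumnSchema.Encoding;
import org.apache.kudu.Type;

public class ColumnEncodingSketch {
  // Dictionary encoding suits low-cardinality strings; bitshuffle suits numeric columns.
  static ColumnSchema lowCardinalityString() {
    return new ColumnSchema.ColumnSchemaBuilder("datatype", Type.STRING)
        .encoding(Encoding.DICT_ENCODING)
        .compressionAlgorithm(CompressionAlgorithm.LZ4)
        .build();
  }

  static ColumnSchema slowlyChangingInteger() {
    return new ColumnSchema.ColumnSchemaBuilder("lumiblockn", Type.INT64)
        .encoding(Encoding.BIT_SHUFFLE)
        .compressionAlgorithm(CompressionAlgorithm.ZLIB)
        .build();
  }
}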
16 Kudu deployment
17 Three options for deployment
- Build from source
- Using RPMs: 1 core RPM, 2 service RPMs (master and tablet servers), one shared config file
- Using Cloudera Manager: click, click, click, done
18 Interfacing with Kudu
19 Table access and manipulation
- Operations on tables (NoSQL): insert, update, delete, scan
- Python, C++ and Java APIs
- Integrated with Impala & Hive (SQL), MapReduce, Spark
- Flume sink (ingestion)
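The insert and scan paths are shown on the following slides; for completeness, here is a hedged sketch of what an update and a delete look like with the Java client (key values are made up, the table is the kudu_example table used below):

import org.apache.kudu.client.Delete;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Update;

public class UpdateDeleteSketch {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.example.com:7051").build();
    KuduTable table = client.openTable("kudu_example");
    KuduSession session = client.newSession();

    // Update: identify the row by its full primary key, then set the new values.
    Update update = table.newUpdate();
    update.getRow().addLong("runnumber", 1);
    update.getRow().addLong("eventnumber", 30);
    update.getRow().addString("datatype", "test");
    session.apply(update);

    // Delete: only the primary key columns are needed.
    Delete delete = table.newDelete();
    delete.getRow().addLong("runnumber", 1);
    delete.getRow().addLong("eventnumber", 30);
    session.apply(delete);

    session.flush();
    client.shutdown();
  }
}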
20 Table creation - manipulating Kudu tables with SQL (Impala/Hive)

CREATE TABLE `kudu_example` (
  `runnumber` BIGINT,
  `eventnumber` BIGINT,
  `project` STRING,
  `streamname` STRING,
  `prodstep` STRING,
  `datatype` STRING,
  `amitag` STRING,
  `lumiblockn` BIGINT,
  `bunchid` BIGINT
)
DISTRIBUTE BY HASH (runnumber) INTO 64 BUCKETS
TBLPROPERTIES(
  'storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
  'kudu.table_name' = 'example_table',
  'kudu.master_addresses' = 'kudu-master.cern.ch:7051',
  'kudu.key_columns' = 'runnumber, eventnumber'
);

DMLs:
insert into kudu_example values (1, 30, 'test', ...);
insert into kudu_example select * from data_parquet;
update kudu_example set datatype='test' where runnumber=1;
delete from kudu_example where project='test';

Queries:
select count(*), max(eventnumber) from kudu_example
  where datatype like '%AOD%' group by runnumber;
select * from kudu_example k, parquet_table p where k.runnumber = p.runnumber;
21 Creating a table with Java

import org.kududb.*;
import org.kududb.client.*;
import java.util.ArrayList;
import java.util.List;

// CREATING TABLE
String tableName = "my_table";
String KUDU_MASTER_NAME = "master.cern.ch";
KuduClient client = new KuduClient.KuduClientBuilder(KUDU_MASTER_NAME).build();

List<ColumnSchema> columns = new ArrayList<>();
columns.add(new ColumnSchema.ColumnSchemaBuilder("runnumber", Type.INT64)
    .key(true).encoding(ColumnSchema.Encoding.BIT_SHUFFLE)
    .nullable(false)
    .compressionAlgorithm(ColumnSchema.CompressionAlgorithm.SNAPPY).build());
columns.add(new ColumnSchema.ColumnSchemaBuilder("eventnumber", Type.INT64)
    .key(true).encoding(ColumnSchema.Encoding.BIT_SHUFFLE)
    .nullable(false)
    .compressionAlgorithm(ColumnSchema.CompressionAlgorithm.SNAPPY).build());
// ... remaining columns
Schema schema = new Schema(columns);

List<String> partColumns = new ArrayList<>();
partColumns.add("runnumber");
partColumns.add("eventnumber");
CreateTableOptions options = new CreateTableOptions()
    .addHashPartitions(partColumns, 64)
    .setNumReplicas(3);
client.createTable(tableName, schema, options);
22 Inserting rows with Java

// INSERTING
KuduTable table = client.openTable(tableName);
KuduSession session = client.newSession();

Insert insert = table.newInsert();
PartialRow row = insert.getRow();
row.addLong(0, 1);          // 1st column (runnumber)
row.addString(2, "test");   // 3rd column
// ...
session.apply(insert);      // stores operations in memory on the client side (for batch upload)
session.flush();            // sends the data to Kudu
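The client-side batching mentioned in the comments above is governed by the session's flush mode; a hedged sketch using the standard Kudu Java API as I understand it (buffer size and row values are arbitrary):

import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.client.SessionConfiguration.FlushMode;

public class BatchInsertSketch {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.example.com:7051").build();
    KuduTable table = client.openTable("kudu_example");

    KuduSession session = client.newSession();
    // MANUAL_FLUSH buffers apply()'d operations on the client until flush() is called;
    // AUTO_FLUSH_BACKGROUND would instead send them asynchronously as the buffer fills.
    session.setFlushMode(FlushMode.MANUAL_FLUSH);
    session.setMutationBufferSpace(10000);

    for (long event = 0; event < 1000; event++) {
      Insert insert = table.newInsert();
      PartialRow row = insert.getRow();
      row.addLong("runnumber", 1);
      row.addLong("eventnumber", event);
      session.apply(insert);       // buffered client-side
    }
    session.flush();               // one round trip per batch
    session.close();
    client.shutdown();
  }
}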
23 Scanner in Java

// configuring the column projection
List<String> projectColumns = new ArrayList<>();
projectColumns.add("runnumber");
projectColumns.add("datatype");

// setting a scan range on the primary key
PartialRow start = schema.newPartialRow();
start.addLong("runnumber", 8);
PartialRow end = schema.newPartialRow();
end.addLong("runnumber", 10);

KuduScanner scanner = client.newScannerBuilder(table)
    .lowerBound(start)
    .exclusiveUpperBound(end)
    .setProjectedColumnNames(projectColumns)
    .build();

while (scanner.hasMoreRows()) {
  RowResultIterator results = scanner.nextRows();
  while (results.hasNext()) {
    RowResult result = results.next();
    System.out.println(result.getString(1));  // getting the 2nd projected column (datatype)
  }
}
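Besides primary-key bounds, the scanner builder also accepts column predicates that are pushed down and evaluated on the tablet servers (relevant for the scan-speed numbers later in this deck). A sketch against the same example table, with made-up predicate values:

import java.util.Arrays;
import org.apache.kudu.Schema;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduPredicate;
import org.apache.kudu.client.KuduPredicate.ComparisonOp;
import org.apache.kudu.client.KuduScanner;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.RowResult;
import org.apache.kudu.client.RowResultIterator;

public class PredicateScanSketch {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master.example.com:7051").build();
    KuduTable table = client.openTable("kudu_example");
    Schema schema = table.getSchema();

    // Predicates are filtered server-side, so non-matching rows never reach the client.
    KuduPredicate runRange = KuduPredicate.newComparisonPredicate(
        schema.getColumn("runnumber"), ComparisonOp.GREATER_EQUAL, 8L);
    KuduPredicate datatype = KuduPredicate.newComparisonPredicate(
        schema.getColumn("datatype"), ComparisonOp.EQUAL, "AOD");

    KuduScanner scanner = client.newScannerBuilder(table)
        .addPredicate(runRange)
        .addPredicate(datatype)
        .setProjectedColumnNames(Arrays.asList("runnumber", "eventnumber"))
        .build();

    while (scanner.hasMoreRows()) {
      RowResultIterator rows = scanner.nextRows();
      while (rows.hasNext()) {
        RowResult row = rows.next();
        System.out.println(row.getLong("runnumber") + " " + row.getLong("eventnumber"));
      }
    }
    client.shutdown();
  }
}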
24 Spark with Kudu

wget <kudu-spark jar URL>
spark-shell --jars kudu-spark_<version>.jar

import org.apache.kudu.spark.kudu._

// Read a table from Kudu
val df = sqlContext.read.options(
  Map("kudu.master" -> "kudu_master.cern.ch:7051", "kudu.table" -> "kudu_table")).kudu

// Query using the DataFrame API...
df.select(df("runnumber"), df("eventnumber"), df("db0"))
  .filter($"runnumber" === 169864).filter($"eventnumber" === 1).show()

// ...or register a temporary table and use SQL
df.registerTempTable("kudu_table")
sqlContext.sql("select id from kudu_table where id >= 5").show()

// Create a new Kudu table from a dataframe schema
// NB: no rows from the dataframe are inserted into the table
val kuduContext = new KuduContext("kudu_master.cern.ch:7051")  // newer Kudu versions also take the SparkContext
kuduContext.createTable("test_table", df.schema, Seq("key"),
  new CreateTableOptions().setNumReplicas(1))

// Insert data
kuduContext.insertRows(df, "test_table")
25 Kudu Security: to be done!
26 Performance (based on the ATLAS EventIndex case)
27 Average row length (bytes)
[Chart: average row length with no compression, Snappy, and GZip-like compression, for Kudu, Parquet, HBase and Avro, against the row length in CSV]
- Very good compression ratio - about the same as Parquet
- Each row consists of 56 attributes: most of them strings, a few integers and floats
28 Insertion speed (kHz)
[Chart: insertion rates (per machine, per partition) with Impala, with no compression, Snappy, and GZip-like compression, for Kudu, Parquet, HBase and Avro]
- Average ingestion speed: worse than Parquet, better than HBase
29 Average random lookup speed [s]
[Chart: random lookup time with Impala, with no compression, Snappy, and GZip-like compression, for Kudu, Parquet, HBase and Avro]
- Good random data lookup speed - similar to HBase
30 Scan speed (kHz)
[Chart: data scan rate per core with a predicate on a non-PK column (using Impala), with no compression, Snappy, and GZip-like compression, for Kudu, Parquet and HBase]
- Quite good data scanning speed - much better than HBase
- If natively supported predicate operations are used, it is even faster than Parquet
31 Kudu monitoring
32 Cloudera Manager
- A lot of metrics are published through the servers' HTTP endpoints
- All are collected by CM agents and can be plotted
- Predefined CM dashboards: monitoring of Kudu processes, workload plots
- CM can also be used for Kudu configuration
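For example, each Kudu daemon's embedded web server exposes its metrics as a JSON document that CM (or anything else) can scrape. The sketch below assumes the default web UI ports (8051 for the master, 8050 for tablet servers) and a /metrics endpoint; adjust host and port to your deployment.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MetricsScrapeSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical tablet-server host; 8050 is the usual tserver web UI port (8051 for the master).
    URL url = new URL("http://tabletserver1.example.com:8050/metrics");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // JSON document with per-entity metrics
      }
    }
    conn.disconnect();
  }
}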
33 CM - Kudu host status
34 CM - Workload plots
35 CM - Resource utilisation
36 Observations & Conclusions
37 What is nice about Kudu
- The first in the open source Big Data world to try to combine a columnar store with indexing
- Simple to deploy
- It works (almost) without problems
- It scales - writing, accessing, scanning (this depends on how the schema is designed)
- Integrated with mainstream Big Data processing frameworks: Spark, Impala, Hive, MapReduce
- SQL and NoSQL on the same data
- Gives more flexibility in optimizing schema design compared to HBase (two levels of partitioning)
- Cloudera is pushing to deliver production quality software ASAP
38 What is bad about Kudu?
- No security yet (it should be added in the next releases): authentication (who connected), authorization (ACLs)
- Raft consensus does not always work as it should
  - Too frequent tablet leader changes (sometimes a leader cannot be elected at all)
  - The period without a leader is quite long (sometimes it never ends); this freezes updates on the affected tables
- Handling disk failures: you have to erase/reinitialize the entire server
- Only one index per table
- No nested types (but there is a binary type)
- Cannot control tablet placement on servers
39 When can Kudu be useful?
- When you have structured big data, like in an RDBMS, without complex types
- When sequential and random data access are required simultaneously and have to scale
  - Data extraction and analytics at the same time
  - Time series
- When low ingestion latency is needed and a lambda architecture is too expensive
40 Learn more
- Main page:
- Video:
- Whitepaper:
- KUDU project:
- Some Java code examples:
- Get the Cloudera Quickstart VM and test it