Fattane Zarrinkalam, Annual Workshop of the Web Technology Laboratory
1 Fattane Zarrinkalam, Annual Workshop of the Web Technology Laboratory, Winter 1391 (winter 2012-13 in the Iranian calendar)
2 Outline: Introduction, Data Model, Architecture, HBase vs. RDBMS, HBase Users
3 Why Hadoop? Datasets are growing to petabytes. Traditional systems are expensive to scale and inherently difficult to distribute, and batch processing is needed.
4 Hadoop: Hadoop Distributed Filesystem (HDFS). A scalable distributed file system that uses a cluster of commodity hardware to store huge amounts of data as very large files. It has built-in support for data replication between nodes and is optimized for streaming reads so that data can be read for processing later on.
5 Hadoop: Hadoop MapReduce. HDFS plus MapReduce form the backbone for processing massive amounts of data, for instance the entire Google search index.
6 Why HBase? Problems with Hadoop: it is good with a few very, very large files, but not as good with millions of tiny files; it is not intended for real-time querying; and it does not support random access. It is not a general-purpose file system and does not provide fast individual record lookups in files. HBase has evolved to address these challenges.
7 History of HBase. HBase is an open source implementation of Google's BigTable ("BigTable: A Distributed Storage System for Structured Data", published November 2006), a solution that could drive interactive applications. It uses the same infrastructure, relying on GFS for replication and data availability. Data stored should be composed of much smaller entities; the system stores them transparently, takes care of aggregating the small records into very large files, and offers some sort of indexing that allows the user to retrieve data with a minimal number of disk seeks.
8 What is HBase? A distributed, column-oriented, multi-dimensional, high-availability, high-performance storage system. Project goals: billions of rows x millions of columns x thousands of versions; petabytes across thousands of commodity servers.
9 HBase is not a SQL database: no joins, no query engine, no types, no SQL. It is not a drop-in replacement for your RDBMS.
10 Data Model. Applications store data in labeled tables. A table consists of rows, each of which has a row key. This can be thought of as a primary index on the row key. Row keys are always unique.
11 Data Model. Table rows are sorted lexicographically by their row key:
hbase(main):001:0> scan 'table1'
ROW       COLUMN+CELL
row-1     column=cf1:, timestamp=
row-10    column=cf1:, timestamp=
row-11    column=cf1:, timestamp=
row-2     column=cf1:, timestamp=
row-22    column=cf1:, timestamp=
row-3     column=cf1:, timestamp=
row-abc   column=cf1:, timestamp=
row(s) in seconds
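The ordering in the scan above can be reproduced with a small Python sketch (an illustration, not HBase code): row keys compare as raw bytes, so "row-10" sorts before "row-2", and fixed-width, zero-padded keys are a common way to get numeric order back.

```python
# Toy illustration: HBase sorts row keys lexicographically, byte by byte.
def hbase_sort(row_keys):
    """Sort row keys the way HBase orders rows: by their raw bytes."""
    return sorted(row_keys, key=lambda k: k.encode("utf-8"))

keys = ["row-1", "row-10", "row-11", "row-2", "row-22", "row-3", "row-abc"]
print(hbase_sort(keys))  # same order as the scan output above

# A common fix is fixed-width, zero-padded keys:
padded = [f"row-{i:03d}" for i in (1, 2, 3, 10, 11, 22)]
print(hbase_sort(padded))  # now in numeric order
```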
12 Data Model. Each row may have any number of columns. Columns are grouped into column families, and all column family members share a common prefix: for example, the columns temperature:air and temperature:dew_point are both members of the temperature column family, whereas station:identifier belongs to the station family. The colon character (:) delimits the column family from the column qualifier.
13 Data Model. A table's column families must be specified up front as part of the table schema definition, but new column family members can be added on demand. For example, a new column station:address can be offered by a client as part of an update, and its value persisted, as long as the column family station already exists on the targeted table.
14 Data Model. Physically, all column family members are stored together on the filesystem. So, although earlier we described HBase as a column-oriented store, it would be more accurate to describe it as a column-family-oriented store. A note on the NULL value: in an RDBMS, NULL cells need to be set and occupy space; in HBase, NULL cells or columns are simply not stored.
15 Data Model
16 Data Model. All table accesses are via the table row key. Table cells (the intersection of row and column coordinates) are versioned, implicitly or explicitly. By default, the version is a timestamp auto-assigned by HBase at the time of cell insertion. This can be used to save multiple versions of a value that changes over time. Versions are stored in decreasing timestamp order, most recent first. Access to data: (Table, RowKey, Family, Column, Timestamp) -> Value
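The (row, family, qualifier, timestamp) -> value map above can be sketched as a minimal in-memory model. This is an assumption for illustration only, not HBase code: a get with no timestamp returns the newest version, and a get with a timestamp returns the newest version at or before that time.

```python
# Toy model of HBase's multidimensional map, versions kept newest-first.
class ToyTable:
    def __init__(self):
        # (row, family, qualifier) -> list of (timestamp, value), newest first
        self.cells = {}

    def put(self, row, family, qualifier, value, ts):
        versions = self.cells.setdefault((row, family, qualifier), [])
        versions.append((ts, value))
        versions.sort(key=lambda v: v[0], reverse=True)  # decreasing timestamp

    def get(self, row, family, qualifier, ts=None):
        versions = self.cells.get((row, family, qualifier), [])
        if ts is None:
            return versions[0][1] if versions else None  # most recent version
        for vts, value in versions:
            if vts <= ts:                                 # newest at-or-before ts
                return value
        return None

t = ToyTable()
t.put("row-1", "temperature", "air", "15C", ts=100)
t.put("row-1", "temperature", "air", "17C", ts=200)
print(t.get("row-1", "temperature", "air"))          # newest version: 17C
print(t.get("row-1", "temperature", "air", ts=150))  # as of time 150: 15C
```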
17 Data Model
18 Data Model. In summary, HBase tables are like those in an RDBMS, except that cells are versioned, rows are sorted, and columns can be added on the fly by the client as long as the column family they belong to already exists.
19 Data Model. Tables are automatically partitioned horizontally by HBase into regions. Regions are contiguous ranges of rows stored together. Regions are dynamically split by the system when they become too large, and can also be merged to reduce the number of storage files. Each region is served by exactly one region server, and region servers can serve multiple regions. Fine-grained load balancing is also achieved using regions, as they can easily be moved across servers.
20 Data Model. Regions in practice: initially there is one region. The system monitors region size; if a threshold is reached, the region is split. Regions are split in two at the middle key, creating two regions of roughly equal size. The region is the basic unit of scalability and load balancing.
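The splitting policy just described can be sketched in a few lines of Python. This is an assumed simplification (a region is modeled as a sorted list of keyed chunks with sizes), not the actual HBase split code: once total size crosses the threshold, the region is cut at the middle key into two roughly equal halves.

```python
# Sketch of splitting a region at its middle key once it grows too large.
def maybe_split(region, threshold):
    """region: sorted list of (row_key, size_bytes).
    Returns [region] if small enough, else two halves split at the middle key."""
    total = sum(size for _, size in region)
    if total <= threshold:
        return [region]
    acc = 0
    for i, (_, size) in enumerate(region):
        acc += size
        if acc >= total / 2:                       # found the middle key
            return [region[: i + 1], region[i + 1 :]]

rows = [(f"row-{i:02d}", 10) for i in range(10)]   # 100 bytes in total
parts = maybe_split(rows, threshold=60)
print(len(parts), [len(p) for p in parts])          # split into two equal halves
```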
21 Data Model
22 A canonical use case of HBase: the Webtable, storing the web pages collected while crawling the Internet. The row key is the reversed URL of the page, for example org.hbase.www. There is one column family storing the actual HTML code (contents), others like anchor, used to store outgoing and inbound links, and another for metadata like language.
23 A canonical use case of HBase. Using multiple versions for the contents family allows you to store a few older copies of the HTML. This is helpful when you want to analyze how often a page changes, for example. The timestamps used are the actual times when the pages were fetched from the crawled website.
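A hypothetical helper shows why the Webtable reverses the host name in the row key: with row keys like org.hbase.www, all pages from the same domain sort next to each other and end up in the same or adjacent regions. The function below is an illustration only, not part of any HBase API.

```python
# Build a Webtable-style row key by reversing the host portion of a URL.
def webtable_row_key(url):
    host, _, path = url.partition("/")
    reversed_host = ".".join(reversed(host.split(".")))
    return reversed_host + ("/" + path if path else "")

print(webtable_row_key("www.hbase.org/book.html"))  # org.hbase.www/book.html

# Hosts from the same domain now sort adjacently:
keys = sorted(map(webtable_row_key,
                  ["www.hbase.org", "blog.hbase.org", "www.example.com"]))
print(keys)
```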
24 A canonical use case of HBase (figure: table layout showing column family, row key, timestamp, and value)
25 Architecture: overview. There are three major components: the master server, the region servers, and the client API. HBase also makes use of existing systems such as HDFS and ZooKeeper.
26 Architecture: overview. 1. Master server: assigns regions to region servers using Apache ZooKeeper and handles load balancing of regions across region servers. The master is not part of the actual data storage or retrieval path; it negotiates load balancing, maintains the state of the cluster, and takes care of schema changes.
27 Architecture: overview. 2. Region server: region servers are responsible for all read and write requests for all regions they serve, and split regions that have exceeded the configured region size thresholds.
28 Architecture: overview. 3. Client API: Get returns attributes for a specified row. Put either adds new rows to a table (if the key is new) or updates existing rows (if the key already exists). Scan allows iteration over multiple rows for specified attributes. Delete removes a row from a table.
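The four operations can be mirrored by a toy in-memory stand-in. This is an assumption for illustration, not the real HBase client API (which is Java and talks to region servers); it only shows the shape of Get, Put, Scan, and Delete, including that Scan walks rows in key order.

```python
# Toy in-memory stand-in for the four client operations.
class ToyClient:
    def __init__(self):
        self.rows = {}  # row_key -> {column: value}

    def put(self, row, column, value):
        """Insert a new row or update an existing one."""
        self.rows.setdefault(row, {})[column] = value

    def get(self, row):
        """Return the attributes of a specified row."""
        return self.rows.get(row)

    def scan(self, start, stop):
        """Iterate over rows in key order within [start, stop)."""
        return [(k, v) for k, v in sorted(self.rows.items()) if start <= k < stop]

    def delete(self, row):
        """Remove a row from the table."""
        self.rows.pop(row, None)

c = ToyClient()
c.put("row-1", "cf1:a", "x")
c.put("row-2", "cf1:a", "y")
print(c.get("row-1"))
print(c.scan("row-1", "row-3"))
c.delete("row-1")
print(c.get("row-1"))  # None
```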
29 Architecture
30 Architecture: ZooKeeper, a distributed, highly available coordination service. Role of ZooKeeper in HBase: the master uses ZooKeeper to discover available servers at startup and to track server failures. ZooKeeper provides the client with the name of the server hosting the -ROOT- region. With this information the client can query that region server to get the name of the server hosting the .META. table region containing the row key in question. Lastly, it queries the reported .META. server and retrieves the name of the server hosting the region that contains the row key the client is looking for.
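The three-hop lookup can be modeled with plain dictionaries and sorted key ranges. All server names and catalog contents below are made-up illustrative data, and this corresponds to the older -ROOT-/.META. design described in these slides (newer HBase versions use a single hbase:meta table).

```python
import bisect

def region_for(catalog, row_key):
    """catalog: sorted list of (region_start_key, server). Returns the server
    of the region whose start key is the greatest one <= row_key."""
    starts = [s for s, _ in catalog]
    i = bisect.bisect_right(starts, row_key) - 1
    return catalog[i][1]

# Toy cluster state (assumed, purely illustrative):
zk = {"-ROOT-": "rs-root"}                               # ZooKeeper znode
meta_catalog = [("", "rs-meta-1"), ("m", "rs-meta-2")]   # contents of -ROOT-
user_catalog = {"rs-meta-1": [("", "rs-7"), ("f", "rs-3")],
                "rs-meta-2": [("m", "rs-9"), ("t", "rs-2")]}  # .META. contents

root_server = zk["-ROOT-"]                        # step 1: ask ZooKeeper
meta_server = region_for(meta_catalog, "row-42")  # step 2: ask the -ROOT- region
region_server = region_for(user_catalog[meta_server], "row-42")  # step 3: .META.
print(root_server, meta_server, region_server)
```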
31 Architecture: HDFS, a distributed file system that runs on large clusters of commodity machines. HBase uses HDFS for its underlying storage: the store files are typically saved in HDFS, which provides a scalable, persistent, replicated storage layer for HBase.
32 Architecture: Store. A Store hosts a MemStore and StoreFiles (HFiles). A Store corresponds to a column family of a table for a given region. MemStore: holds in-memory modifications to the Store until enough data is collected, then flushes to disk, avoiding the creation of too many small files.
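The MemStore behaviour above can be sketched as follows. This is an assumed simplification (the threshold is a count of edits rather than bytes): edits accumulate sorted in memory, and each flush writes one larger, key-sorted "store file" instead of many tiny ones.

```python
# Simplified MemStore: buffer edits, flush one sorted file per threshold.
class ToyMemStore:
    def __init__(self, flush_threshold):
        self.edits = {}
        self.flush_threshold = flush_threshold
        self.flushed_files = []  # each flush emits one sorted "store file"

    def write(self, key, value):
        self.edits[key] = value
        if len(self.edits) >= self.flush_threshold:
            self.flush()

    def flush(self):
        store_file = sorted(self.edits.items())  # HFiles are sorted by key
        self.flushed_files.append(store_file)
        self.edits = {}

ms = ToyMemStore(flush_threshold=3)
for k in ["b", "a", "c", "e", "d"]:
    ms.write(k, k.upper())
print(len(ms.flushed_files), ms.flushed_files[0])  # one sorted flush so far
print(sorted(ms.edits))                            # the rest still buffered
```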
33 Architecture: StoreFile (HFile). Store files are divided into smaller blocks when stored within the Hadoop Distributed Filesystem (HDFS). Write-Ahead Log (HLog): data residing in memory is volatile, meaning it could be lost if the server loses power. Data is therefore written to the WAL first and then passed to the MemStore.
34 Operation: write
35 Operation: Read. First check for the data in the cache; note that data in the MemStore is sorted by key, matching the layout of the HFiles. Then look for the data in the persisted store files.
36 Operation: Delete. Since HFiles are immutable, how can we delete data? A delete marker is written to indicate that a given key is deleted. During the read process, data marked as deleted is skipped.
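The delete-marker mechanism can be sketched directly. This is an assumed simplification of the read path: store files are immutable and searched newest-first, so a tombstone in a newer file masks the value in an older one.

```python
# Tombstone sketch: deletes are markers; reads skip masked values.
TOMBSTONE = object()

def read(key, store_files):
    """store_files: newest first; each is a dict key -> value or TOMBSTONE."""
    for sf in store_files:
        if key in sf:
            return None if sf[key] is TOMBSTONE else sf[key]
    return None

newer = {"row-1": TOMBSTONE}            # delete marker written later
older = {"row-1": "v1", "row-2": "v2"}  # original immutable store file
print(read("row-1", [newer, older]))    # None: the tombstone masks "v1"
print(read("row-2", [newer, older]))    # "v2" survives untouched
```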
37
38 Comparison #1: a system to store a shopping cart, with customers, products, and orders.
39 Simple SQL Schema
40 Simple HBase Schema
41 Efficient Queries with Both. Get name, email, and orders for a customer. Get name and price for a product. Get customer, timestamp, and total for an order. Get the list of products in an order.
42 Where SQL Makes Life Easy. Joining: in a single query, get all products in an order along with their product information. Secondary indexing: get a customer ID by email. Referential integrity: deleting an order would delete links out of order-products, and ID updates propagate. Realtime analysis: GROUP BY and ORDER BY allow simple statistical analysis.
43 Where HBase Makes Life Easy. Dataset scale: we have 1M customers and 100M products; product information includes large text datasheets or PDF files; we want to track every time a customer looks at a product page. Read/write scale: tables distributed across nodes mean reads and writes are fully distributed; writes are extremely fast and require no index updates. Replication: comes for free. Batch analysis: massive and convoluted SQL queries executed serially become efficient MapReduce jobs, distributed and executed in parallel.
44 Conclusion. For small instances of simple, straightforward systems, relational databases offer a much more convenient way to model and access data, and most work can be outsourced to the transaction and query engine; HBase will force you to pull complexity into the application layer. Once you need to scale, however, the properties and flexibility of HBase can relieve you of the headaches associated with scaling an RDBMS.
45 Comparison #2. Compare key factors: hardware requirements, scalability, reliability, ease of use, and cost.
46 Hardware Requirements. RDBMSs are IO-bound and typically require large arrays of fast and expensive disks. A modest production environment might have a single node with k RPM drives, 16 cores, and GB of RAM, plus a backup server with similar specs: $$$$$$. HBase is designed for commodity hardware, and the biggest factor for performance is the number of nodes. A modest production environment might have nodes each with 2 500GB 7.2k RPM drives, 4 cores, and 4GB of RAM. It is common to have one master node with RAID, dual PSUs, etc., as this is currently a SPOF.
47 Scalability. RDBMS scaling is achieved through caching (e.g., with Memcached); partitioning is often left to the application or external tools; replication can be built-in or an add-on with most popular RDBMSs. Regardless of the scaling mechanism, the architecture does not allow efficient multi-master support. HBase scales out of the box: random access is often made faster with something similar to Memcached (built in as of the 0.20 release); performance is constant from low to high concurrency; writes are distributed and there are no indexes to update; you scale by plugging in more RegionServers.
48 Reliability. RDBMS: slave replication, warm/hot backups; a single node failure is often catastrophic. HBase: replication is built-in; backups are unnecessary but available.
49 Ease of Use. RDBMS: millions are trained in SQL and the relational data model; normalized schemas are well understood and have predictable performance; however, schemas are often limiting, difficult to change, and scale poorly. HBase and MapReduce: a significant learning curve, but both have excellent communities and increasing numbers of tools to help ease the initial pain; schemas are loosely defined, so the data structure is easy to change and performance is constant.
50 Other factors. Operating system / architecture: RDBMSs vary greatly in their target architecture; HBase is designed for Linux, though it is also run on Solaris and, with some success, on Windows. Cost: HBase is FOSS; there are plenty of mature FOSS RDBMSs, but many used in the enterprise are expensive. Widespread use: RDBMSs are tried and true; Hadoop and HBase are still in development and, though production ready, are not yet in wide use.
51 Conclusion. Similar to the first comparison: RDBMSs provide tremendous functionality out of the box but are extremely difficult and costly to scale, while HBase provides barebones functionality out of the box but scaling is built-in and inexpensive.
52 Users
53 Users: Facebook messaging system
54 Old version of the Facebook messaging system: chat messages were held in memory and stored only for a small number of days; non-chat messages were stored in MySQL.
55 Facebook messaging system, new version: started in December 2009, it brings together chat, email, and the earlier version of messages under one umbrella. It needs to handle 8B+ messages/day, requiring a storage system that could efficiently support a 20x increase in the amount of writes and provide cheap, elastic storage for datasets expected to grow at 150TB+/month.
56 Facebook messaging system. Solution: HBase. Traffic to HBase: 75+ billion read/write ops/day; at peak, 1.5M ops/sec; roughly 55% read vs. 45% write ops; the average write op inserts ~16 records across multiple column families.
57 Users: Mozilla Socorro
58 Mozilla Socorro. Crash reports can help software developers diagnose and fix the root cause of crashes. The automatic collection of crash reports improved the reliability of Mozilla Firefox by 40% from November 2009 to March
59 Mozilla Socorro. Challenge: organizations need to manage the large volume of collected crash reports effectively: 2.5 million crash reports every day, around 320 GB each day. Solution: Socorro, Mozilla's crash reporting system, whose data storage and analytics are built on HBase.
60 Users: OpenTSDB. Web-based products serving millions of users typically have hundreds or thousands of servers in their back-end infrastructure: serving traffic, capturing logs, storing data, processing data, and so on. To keep the products up and running, it is critical to monitor the health of the servers as well as the software running on them.
61 Users: OpenTSDB. Challenge: monitoring the entire stack at scale requires systems that can collect and store metrics of all kinds from different sources, and make those metrics available for access over a long period of time. Solution: OpenTSDB, an open source framework that allows a company to collect metrics of all kinds into a single system. This framework uses HBase at its core to store and access the collected metrics.
62 When to use HBase? When storing large amounts of data (100s of TBs); when you need to scale gracefully with structured and semi-structured data; when you need efficient random access (key lookups) within large data sets; and when you don't need full RDBMS capabilities (cross-row/cross-table transactions, joins, etc.).
63
64 Architecture: LogSyncer. Every time an edit is sent to the servers, a call to sync() is initiated; this call forces the update to the log so that you have durability. LogRoller: makes sure a log is persisted on a regular basis. Every 60 minutes, the log is closed and a new one is started. It checks the highest sequence number written to a storage file, deletes all logs with smaller sequence numbers, and leaves all others, as they are still needed; this optimizes the space needed to save HLog files.
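The LogRoller cleanup rule above can be sketched as a small predicate. This is an assumption for illustration, not HBase code: a log file becomes deletable once every edit it contains (up to its highest sequence number) has been persisted to store files.

```python
# Which HLog files can be deleted, given the highest persisted sequence number?
def deletable_logs(logs, persisted_seq):
    """logs: list of (log_name, highest_seq_in_log). A log is deletable only
    when all of its edits are already persisted; the rest are still needed."""
    return [name for name, highest in logs if highest <= persisted_seq]

logs = [("hlog-1", 100), ("hlog-2", 200), ("hlog-3", 300)]
print(deletable_logs(logs, persisted_seq=250))  # hlog-1 and hlog-2 are safe
```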
65 Architecture: Region lookup
More informationHadoop An Overview. - Socrates CCDH
Hadoop An Overview - Socrates CCDH What is Big Data? Volume Not Gigabyte. Terabyte, Petabyte, Exabyte, Zettabyte - Due to handheld gadgets,and HD format images and videos - In total data, 90% of them collected
More informationArchitecture of Enterprise Applications 22 HBase & Hive
Architecture of Enterprise Applications 22 HBase & Hive Haopeng Chen REliable, INtelligent and Scalable Systems Group (REINS) Shanghai Jiao Tong University Shanghai, China http://reins.se.sjtu.edu.cn/~chenhp
More informationAccelerating Big Data: Using SanDisk SSDs for Apache HBase Workloads
WHITE PAPER Accelerating Big Data: Using SanDisk SSDs for Apache HBase Workloads December 2014 Western Digital Technologies, Inc. 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents
More informationTools for Social Networking Infrastructures
Tools for Social Networking Infrastructures 1 Cassandra - a decentralised structured storage system Problem : Facebook Inbox Search hundreds of millions of users distributed infrastructure inbox changes
More informationCISC 7610 Lecture 2b The beginnings of NoSQL
CISC 7610 Lecture 2b The beginnings of NoSQL Topics: Big Data Google s infrastructure Hadoop: open google infrastructure Scaling through sharding CAP theorem Amazon s Dynamo 5 V s of big data Everyone
More informationHadoop محبوبه دادخواه کارگاه ساالنه آزمایشگاه فناوری وب زمستان 1391
Hadoop محبوبه دادخواه کارگاه ساالنه آزمایشگاه فناوری وب زمستان 1391 Outline Big Data Big Data Examples Challenges with traditional storage NoSQL Hadoop HDFS MapReduce Architecture 2 Big Data In information
More informationBig Table. Google s Storage Choice for Structured Data. Presented by Group E - Dawei Yang - Grace Ramamoorthy - Patrick O Sullivan - Rohan Singla
Big Table Google s Storage Choice for Structured Data Presented by Group E - Dawei Yang - Grace Ramamoorthy - Patrick O Sullivan - Rohan Singla Bigtable: Introduction Resembles a database. Does not support
More informationBigtable. Presenter: Yijun Hou, Yixiao Peng
Bigtable Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach Mike Burrows, Tushar Chandra, Andrew Fikes, Robert E. Gruber Google, Inc. OSDI 06 Presenter: Yijun Hou, Yixiao Peng
More informationProjected by: LUKA CECXLADZE BEQA CHELIDZE Superviser : Nodar Momtsemlidze
Projected by: LUKA CECXLADZE BEQA CHELIDZE Superviser : Nodar Momtsemlidze About HBase HBase is a column-oriented database management system that runs on top of HDFS. It is well suited for sparse data
More informationDistributed Systems. 05r. Case study: Google Cluster Architecture. Paul Krzyzanowski. Rutgers University. Fall 2016
Distributed Systems 05r. Case study: Google Cluster Architecture Paul Krzyzanowski Rutgers University Fall 2016 1 A note about relevancy This describes the Google search cluster architecture in the mid
More informationBig Data Processing Technologies. Chentao Wu Associate Professor Dept. of Computer Science and Engineering
Big Data Processing Technologies Chentao Wu Associate Professor Dept. of Computer Science and Engineering wuct@cs.sjtu.edu.cn Schedule (1) Storage system part (first eight weeks) lec1: Introduction on
More informationEmbedded Technosolutions
Hadoop Big Data An Important technology in IT Sector Hadoop - Big Data Oerie 90% of the worlds data was generated in the last few years. Due to the advent of new technologies, devices, and communication
More informationMassive Online Analysis - Storm,Spark
Massive Online Analysis - Storm,Spark presentation by R. Kishore Kumar Research Scholar Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur Kharagpur-721302, India (R
More informationDATABASE SCALE WITHOUT LIMITS ON AWS
The move to cloud computing is changing the face of the computer industry, and at the heart of this change is elastic computing. Modern applications now have diverse and demanding requirements that leverage
More informationApril Final Quiz COSC MapReduce Programming a) Explain briefly the main ideas and components of the MapReduce programming model.
1. MapReduce Programming a) Explain briefly the main ideas and components of the MapReduce programming model. MapReduce is a framework for processing big data which processes data in two phases, a Map
More informationColumn-Family Databases Cassandra and HBase
Column-Family Databases Cassandra and HBase Kevin Swingler Google Big Table Google invented BigTableto store the massive amounts of semi-structured data it was generating Basic model stores items indexed
More informationNoSQL systems. Lecture 21 (optional) Instructor: Sudeepa Roy. CompSci 516 Data Intensive Computing Systems
CompSci 516 Data Intensive Computing Systems Lecture 21 (optional) NoSQL systems Instructor: Sudeepa Roy Duke CS, Spring 2016 CompSci 516: Data Intensive Computing Systems 1 Key- Value Stores Duke CS,
More informationThe Google File System. Alexandru Costan
1 The Google File System Alexandru Costan Actions on Big Data 2 Storage Analysis Acquisition Handling the data stream Data structured unstructured semi-structured Results Transactions Outline File systems
More informationUsing space-filling curves for multidimensional
Using space-filling curves for multidimensional indexing Dr. Bisztray Dénes Senior Research Engineer 1 Nokia Solutions and Networks 2014 In medias res Performance problems with RDBMS Switch to NoSQL store
More informationBig Data Programming: an Introduction. Spring 2015, X. Zhang Fordham Univ.
Big Data Programming: an Introduction Spring 2015, X. Zhang Fordham Univ. Outline What the course is about? scope Introduction to big data programming Opportunity and challenge of big data Origin of Hadoop
More informationPart 1: Indexes for Big Data
JethroData Making Interactive BI for Big Data a Reality Technical White Paper This white paper explains how JethroData can help you achieve a truly interactive interactive response time for BI on big data,
More informationFaster HBase queries. Introducing hindex Secondary indexes for HBase. ApacheCon North America Rajeshbabu Chintaguntla
Security Level: Faster HBase queries Introducing hindex Secondary indexes for HBase ApacheCon North America 2014 www.huawei.com Rajeshbabu Chintaguntla rajeshbabu@apache.org HUAWEI TECHNOLOGIES CO., LTD.
More informationProgramming model and implementation for processing and. Programs can be automatically parallelized and executed on a large cluster of machines
A programming model in Cloud: MapReduce Programming model and implementation for processing and generating large data sets Users specify a map function to generate a set of intermediate key/value pairs
More informationRAMCloud. Scalable High-Performance Storage Entirely in DRAM. by John Ousterhout et al. Stanford University. presented by Slavik Derevyanko
RAMCloud Scalable High-Performance Storage Entirely in DRAM 2009 by John Ousterhout et al. Stanford University presented by Slavik Derevyanko Outline RAMCloud project overview Motivation for RAMCloud storage:
More informationBIG DATA TESTING: A UNIFIED VIEW
http://core.ecu.edu/strg BIG DATA TESTING: A UNIFIED VIEW BY NAM THAI ECU, Computer Science Department, March 16, 2016 2/30 PRESENTATION CONTENT 1. Overview of Big Data A. 5 V s of Big Data B. Data generation
More informationDistributed Systems. 15. Distributed File Systems. Paul Krzyzanowski. Rutgers University. Fall 2017
Distributed Systems 15. Distributed File Systems Paul Krzyzanowski Rutgers University Fall 2017 1 Google Chubby ( Apache Zookeeper) 2 Chubby Distributed lock service + simple fault-tolerant file system
More informationExtreme Computing. NoSQL.
Extreme Computing NoSQL PREVIOUSLY: BATCH Query most/all data Results Eventually NOW: ON DEMAND Single Data Points Latency Matters One problem, three ideas We want to keep track of mutable state in a scalable
More informationAchieving Horizontal Scalability. Alain Houf Sales Engineer
Achieving Horizontal Scalability Alain Houf Sales Engineer Scale Matters InterSystems IRIS Database Platform lets you: Scale up and scale out Scale users and scale data Mix and match a variety of approaches
More informationMassive Scalability With InterSystems IRIS Data Platform
Massive Scalability With InterSystems IRIS Data Platform Introduction Faced with the enormous and ever-growing amounts of data being generated in the world today, software architects need to pay special
More informationDistributed computing: index building and use
Distributed computing: index building and use Distributed computing Goals Distributing computation across several machines to Do one computation faster - latency Do more computations in given time - throughput
More informationBigTable. CSE-291 (Cloud Computing) Fall 2016
BigTable CSE-291 (Cloud Computing) Fall 2016 Data Model Sparse, distributed persistent, multi-dimensional sorted map Indexed by a row key, column key, and timestamp Values are uninterpreted arrays of bytes
More informationCS 655 Advanced Topics in Distributed Systems
Presented by : Walid Budgaga CS 655 Advanced Topics in Distributed Systems Computer Science Department Colorado State University 1 Outline Problem Solution Approaches Comparison Conclusion 2 Problem 3
More informationIndexing Large-Scale Data
Indexing Large-Scale Data Serge Abiteboul Ioana Manolescu Philippe Rigaux Marie-Christine Rousset Pierre Senellart Web Data Management and Distribution http://webdam.inria.fr/textbook November 16, 2010
More informationJargons, Concepts, Scope and Systems. Key Value Stores, Document Stores, Extensible Record Stores. Overview of different scalable relational systems
Jargons, Concepts, Scope and Systems Key Value Stores, Document Stores, Extensible Record Stores Overview of different scalable relational systems Examples of different Data stores Predictions, Comparisons
More informationData Storage Infrastructure at Facebook
Data Storage Infrastructure at Facebook Spring 2018 Cleveland State University CIS 601 Presentation Yi Dong Instructor: Dr. Chung Outline Strategy of data storage, processing, and log collection Data flow
More informationThe State of Apache HBase. Michael Stack
The State of Apache HBase Michael Stack Michael Stack Chair of the Apache HBase PMC* Caretaker/Janitor Member of the Hadoop PMC Engineer at Cloudera in SF * Project Management
More information