CON Apache Kafka
1 CON Apache Kafka Scalable Message Processing and more! Guido Schmutz guidoschmutz.wordpress.com BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENF HAMBURG KOPENHAGEN LAUSANNE MÜNCHEN STUTTGART WIEN ZÜRICH
2 Guido Schmutz Working at Trivadis for more than 20 years Oracle ACE Director for Fusion Middleware and SOA Consultant, Trainer Software Architect for Java, Oracle, SOA and Big Data / Fast Data Head of Trivadis Architecture Board Technology Trivadis More than 30 years of software development experience Contact: guido.schmutz@trivadis.com Blog: Slideshare: Twitter: gschmutz
3 With over 600 specialists and IT experts in your region. COPENHAGEN HAMBURG 14 Trivadis branches and more than 600 employees 200 Service Level Agreements Over 4,000 training participants DÜSSELDORF Research and development budget: CHF 5.0 million FRANKFURT Financially self-supporting and sustainably profitable BASEL FREIBURG STUTTGART BRUGG ZURICH MUNICH VIENNA Experience from more than 1,900 projects per year at over 800 customers GENEVA BERN LAUSANNE
4 Agenda 1. What is Apache Kafka? 2. Kafka Connect 3. Kafka Streams 4. KSQL 5. Kafka and "Big Data" / "Fast Data" Ecosystem 6. Kafka in Enterprise Architecture
5 What is Apache Kafka?
6 Apache Kafka History 0.7 Cluster mirroring, data compression 0.8 Intra-cluster replication 0.9 Data Integration (Connect API) 0.10 Data Processing (Streams API) 0.11 Exactly Once Semantics, Performance Improvements, KSQL Developer Preview
7 Apache Kafka - Unix Analogy KSQL Kafka Connect API Kafka Streams API Kafka Connect API $ cat < in.txt | grep "kafka" | tr a-z A-Z > out.txt Kafka Core (Cluster) Adapted from: Confluent
8 Kafka High Level Architecture The who is who Producers write data to brokers. Consumers read data from brokers. All this is distributed. The data Data is stored in topics. Topics are split into partitions, which are replicated. Zookeeper Ensemble Producer Producer Producer Kafka Cluster Broker 1 Broker 2 Broker 3 Consumer Consumer Consumer
9 Apache Kafka P 0 Kafka Broker 1 Movement Topic P Truck Kafka Broker 2 Movement Topic P P Kafka Broker 3 Movement Topic P Movement Processor Movement Processor Movement Processor P
10 Kafka Producer Write Ahead Log / Commit Log Producers always append to tail (append to file, i.e. segment) Truck Order is preserved for messages within same partition Kafka Broker Movement Topic
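The append-only commit log described above can be sketched in a few lines of Java. This is an in-memory stand-in for a single partition, not the real broker or client API; class and method names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

/** In-memory stand-in for one topic partition: an append-only segment. */
public class PartitionLog {
    private final List<String> segment = new ArrayList<>();

    /** Producers always append to the tail; the offset is the position in the log. */
    public long append(String message) {
        segment.add(message);
        return segment.size() - 1; // offset of the appended message
    }

    /** Messages are read back in exactly the order they were appended. */
    public String read(long offset) {
        return segment.get((int) offset);
    }

    public static void main(String[] args) {
        PartitionLog p0 = new PartitionLog();
        long o1 = p0.append("truck-11,Normal");
        long o2 = p0.append("truck-11,Overspeed");
        System.out.println(o1 + " -> " + p0.read(o1)); // 0 -> truck-11,Normal
        System.out.println(o2 + " -> " + p0.read(o2)); // 1 -> truck-11,Overspeed
    }
}
```

Because every producer appends at the tail, ordering is guaranteed per partition, but not across partitions of the same topic.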
11 Kafka Consumer - Partition offsets Offset A sequential id number assigned to messages in the partitions. Uniquely identifies a message within a partition. Consumers track their pointers via (offset, partition, topic) tuples Kafka 0.10: seek to offset by given timestamp using method KafkaConsumer#offsetsForTimes New data from Producer Consumer Group A Consumer Group B Consumer at earliest offset Consumer at specific offset Consumer at latest offset
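The (offset, partition, topic) tracking per consumer group can be sketched as follows. This is a simplified stand-in for Kafka's committed-offset storage, just to show that each group keeps its own independent position; the class name is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of consumer-group progress tracking: one committed offset
 *  per (group, topic, partition) tuple, independent per group. */
public class OffsetTracker {
    private final Map<String, Long> committed = new HashMap<>();

    private static String key(String group, String topic, int partition) {
        return group + "/" + topic + "/" + partition;
    }

    public void commit(String group, String topic, int partition, long offset) {
        committed.put(key(group, topic, partition), offset);
    }

    /** A group that never committed starts from the earliest offset (0 here). */
    public long position(String group, String topic, int partition) {
        return committed.getOrDefault(key(group, topic, partition), 0L);
    }

    public static void main(String[] args) {
        OffsetTracker tracker = new OffsetTracker();
        tracker.commit("group-A", "truck_position", 0, 42L);
        // Group A resumes at its own offset; group B starts from earliest.
        System.out.println(tracker.position("group-A", "truck_position", 0)); // 42
        System.out.println(tracker.position("group-B", "truck_position", 0)); // 0
    }
}
```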
12 Data Retention 4 options 1. Never 2. Time based (TTL): log.retention.{ms|minutes|hours} 3. Size based: log.retention.bytes 4. Log compaction based (older entries with same key are removed): kafka-topics.sh --zookeeper zk:2181 \ --create --topic customers \ --replication-factor 1 \ --partitions 1 \ --config cleanup.policy=compact
13 Data Retention - Log Compaction ensures that Kafka always retains at least the last known value for each message key within a single topic partition. Compaction is done in the background by periodically recopying log segments. Before: (K1,V1) (K2,V2) (K1,V3) (K1,V4) (K3,V5) (K2,V6) (K4,V7) (K5,V8) (K5,V9) (K2,V10) (K6,V11) After compaction: (K1,V4) (K3,V5) (K4,V7) (K5,V9) (K2,V10) (K6,V11)
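The compaction pass above can be sketched with a small Java simulation, a conceptual stand-in for the broker's log cleaner, using the same keys and values as the slide:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Conceptual sketch of log compaction: for every key, only the entry
 *  with the highest offset (the last known value) survives. */
public class LogCompaction {

    public static Map<String, String> compact(String[][] log) {
        // Remove-then-put moves a key to the end, mirroring that the
        // surviving entry keeps the position of the *latest* write.
        Map<String, String> compacted = new LinkedHashMap<>();
        for (String[] entry : log) {
            compacted.remove(entry[0]);
            compacted.put(entry[0], entry[1]);
        }
        return compacted;
    }

    public static void main(String[] args) {
        String[][] log = {
            {"K1", "V1"}, {"K2", "V2"}, {"K1", "V3"}, {"K1", "V4"},
            {"K3", "V5"}, {"K2", "V6"}, {"K4", "V7"}, {"K5", "V8"},
            {"K5", "V9"}, {"K2", "V10"}, {"K6", "V11"}
        };
        System.out.println(compact(log));
        // {K1=V4, K3=V5, K4=V7, K5=V9, K2=V10, K6=V11}
    }
}
```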
14 Topic Viewed as Event Stream or State Stream (Change Log) Event Stream / State Stream (Change Log Stream), sample records: T20:18:46,11,Normal,41.87, T20:18:55,11,Normal,40.38, T20:18:59,21,Normal,42.23, T20:19:01,21,Normal,41.71, T20:19:02,11,Normal,38.65, T20:19:23,21,Normal,41.71,-91.32
15 Demo (I) Truck-1 Truck-2 truck position console consumer Truck :39: Wichita to Little Rock Route2 Normal Testdata-Generator by Hortonworks
16 Demo (I) Create Kafka Topic $ kafka-topics --zookeeper zookeeper:2181 --create \ --topic truck_position --partitions 8 --replication-factor 1 $ kafka-topics --zookeeper zookeeper:2181 --list __consumer_offsets _confluent-metrics _schemas docker-connect-configs docker-connect-offsets docker-connect-status truck_position
17 Demo (I) Run Producer and Kafka-Console-Consumer
18 Demo (I) Java Producer to truck_position Constructing a Kafka Producer private Properties kafkaProps = new Properties(); kafkaProps.put("bootstrap.servers", "broker-1:9092"); kafkaProps.put("key.serializer", "...StringSerializer"); kafkaProps.put("value.serializer", "...StringSerializer"); producer = new KafkaProducer<String, String>(kafkaProps); ProducerRecord<String, String> record = new ProducerRecord<>("truck_position", driverId, eventData); try { metadata = producer.send(record).get(); } catch (Exception e) {}
19 Demo (II) devices send to MQTT instead of Kafka Truck-1 Truck-2 truck/nn/ position Truck :39: Wichita to Little Rock Route2 Normal
20 Demo (II) devices send to MQTT instead of Kafka
21 Demo (II) - devices send to MQTT instead of Kafka how to get the data into Kafka? Truck-1 Truck-2 truck/nn/ position? truck position raw Truck :39: Wichita to Little Rock Route2 Normal
22 Kafka Connect
23 Kafka Connect - Overview Source Connector Sink Connector
24 Kafka Connect Single Message Transforms (SMT) Simple transformations for a single message Defined as part of Kafka Connect Some useful transforms are provided out-of-the-box Easily implement your own Optionally deploy 1+ transforms with each connector Modify messages produced by source connectors Modify messages sent to sink connectors Makes it much easier to mix and match connectors Some of the currently available transforms: InsertField ReplaceField MaskField ValueToKey ExtractField TimestampRouter RegexRouter SetSchemaMetadata Flatten TimestampConverter
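The idea behind two of the listed transforms can be sketched as below. Real SMTs implement Kafka Connect's Transformation interface and operate on ConnectRecord objects; here a message is simplified to a plain key/value map, and the field names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

/** Simplified stand-ins for two Single Message Transforms,
 *  applied to a message modeled as a key/value map. */
public class SimpleTransforms {

    /** InsertField-style: add a static field to every message. */
    public static Map<String, String> insertField(Map<String, String> msg,
                                                  String field, String value) {
        Map<String, String> out = new HashMap<>(msg); // transforms never mutate input
        out.put(field, value);
        return out;
    }

    /** MaskField-style: replace a sensitive field's value. */
    public static Map<String, String> maskField(Map<String, String> msg, String field) {
        Map<String, String> out = new HashMap<>(msg);
        if (out.containsKey(field)) out.put(field, "****");
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> msg = new HashMap<>();
        msg.put("driverid", "11");
        msg.put("licence", "XY-1234");
        // 1+ transforms can be chained, just like in a connector config.
        Map<String, String> out = maskField(insertField(msg, "source", "mqtt"), "licence");
        System.out.println(out.get("source") + " " + out.get("licence")); // mqtt ****
    }
}
```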
25 Kafka Connect Many Connectors 60+ since first release (0.9+) 20+ from Confluent and Partners Certified Connectors Community Connectors Confluent supported Connectors Source:
26 Demo (III) Truck-1 Truck-2 truck/nn/ position mqtt to kafka truck_ position console consumer Truck :39: Wichita to Little Rock Route2 Normal
27 Demo (III) Create MQTT Connect through REST API #!/bin/bash curl -X "POST" " \ -H "Content-Type: application/json" \ -d $'{ "name": "mqtt-source", "config": { "connector.class": "com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector", "connect.mqtt.connection.timeout": "1000", "tasks.max": "1", "connect.mqtt.kcql": "INSERT INTO truck_position SELECT * FROM truck/+/position", "name": "MqttSourceConnector", "connect.mqtt.service.quality": "0", "connect.mqtt.client.id": "tm-mqtt-connect-01", "connect.mqtt.converter.throw.on.error": "true", "connect.mqtt.hosts": "tcp://mosquitto:1883" } }'
28 Demo (III) Call REST API and Kafka Console Consumer
29 Demo (III) Truck-1 what about some analytics? Truck-2 truck/nn/ position mqtt to kafka truck_ position console consumer Truck :39: Wichita to Little Rock Route2 Normal
30 Kafka Streams
31 Kafka Streams - Overview Designed as a simple and lightweight library in Apache Kafka No external dependencies on systems other than Apache Kafka Part of open source Apache Kafka, introduced in 0.10 Leverages Kafka as its internal messaging layer Supports fault-tolerant local state Event-at-a-time processing (not microbatch) with millisecond latency Windowing with out-of-order data using a Google Dataflow-like model
32 Kafka Streams DSL and Processor Topology KStream<Integer, String> stream1 = builder.stream("in-1"); KStream<Integer, String> stream2 = builder.stream("in-2"); KStream<Integer, String> joined = stream1.leftJoin(stream2, ...); KTable<...> aggregated = joined.groupBy(...).count("store"); aggregated.to("out-1");
34 Processor Topology Kafka Streams Cluster Kafka Cluster input input-2 lj a store (changelog) t State output
35 Processor Topology Kafka Streams 1 Kafka Cluster input-1 Partition 0 Partition 1 Partition 2 Partition 3 Kafka Streams 2 input-2 Partition 0 Partition 1 Partition 2 Partition 3
36 Processor Topology Kafka Streams 1 Kafka Streams 2 Kafka Cluster input-1 Partition 0 Partition 1 Partition 2 Partition 3 input-2 Partition 0 Kafka Streams 3 Kafka Streams 4 Partition 1 Partition 2 Partition 3
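How the input partitions are spread over the running Kafka Streams instances, as in the two topologies above, can be sketched with a simple round-robin stand-in. The real assignment is negotiated by Kafka's consumer group protocol; the names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Round-robin stand-in for assigning topic partitions to
 *  Kafka Streams instances (the real protocol is more involved). */
public class PartitionAssignment {

    public static Map<String, List<Integer>> assign(int partitions, List<String> instances) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String instance : instances) assignment.put(instance, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            // Partition p goes to instance p modulo the number of instances.
            assignment.get(instances.get(p % instances.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 4 partitions over 2 instances: each instance processes 2 partitions.
        System.out.println(assign(4, List.of("streams-1", "streams-2")));
        // 4 partitions over 4 instances: one partition each.
        System.out.println(assign(4, List.of("streams-1", "streams-2", "streams-3", "streams-4")));
    }
}
```

Adding instances up to the partition count spreads the load; beyond that, extra instances stay idle.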
37 KSQL
38 KSQL: a Streaming SQL Engine for Apache Kafka Enables stream processing with zero coding required The simplest way to process streams of data in real-time Powered by Kafka and Kafka Streams: scalable, distributed, mature All you need is Kafka, no complex deployments Available as developer preview! STREAM and TABLE as first-class citizens STREAM = data in motion TABLE = collected state of a stream You can join a STREAM and a TABLE
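The STREAM/TABLE duality above can be sketched in Java: replaying a stream of keyed events and keeping only the latest value per key yields the table. This is a conceptual simulation, not the KSQL engine:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the STREAM/TABLE duality: the TABLE is the
 *  collected (latest) state of a keyed event STREAM. */
public class StreamTableDuality {

    /** Each event is {key, value}; the table is the collected state. */
    public static Map<String, String> toTable(String[][] stream) {
        Map<String, String> table = new HashMap<>();
        for (String[] event : stream) {
            table.put(event[0], event[1]); // later events overwrite earlier state
        }
        return table;
    }

    public static void main(String[] args) {
        String[][] driverEvents = {
            {"11", "Normal"}, {"21", "Normal"}, {"11", "Overspeed"}
        };
        Map<String, String> table = toTable(driverEvents);
        System.out.println(table.get("11")); // Overspeed -- latest state wins
        System.out.println(table.get("21")); // Normal
    }
}
```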
39 KSQL Deployment Models Standalone Mode Cluster Mode Source: Confluent
40 Demo (IV) Truck-1 Truck-2 truck/nn/ position mqtt to kafka truck_ position_s detect_danger ous_driving dangerous_ driving console consumer Truck :39: Wichita to Little Rock Route2 Normal
41 Demo (IV) - Start Kafka KSQL $ docker-compose exec ksql-cli ksql-cli local --bootstrap-server broker-1:9092 [KSQL ASCII-art banner] Streaming SQL Engine for Kafka Copyright 2017 Confluent Inc. CLI v0.1, Server v0.1 Having trouble? Type 'help' (case-insensitive) for a rundown of how things work! ksql>
42 Demo (IV) - Create Stream ksql> CREATE STREAM truck_position_s \ (ts VARCHAR, \ truckid VARCHAR, \ driverid BIGINT, \ routeid BIGINT, \ routename VARCHAR, \ eventtype VARCHAR, \ latitude DOUBLE, \ longitude DOUBLE, \ correlationid VARCHAR) \ WITH (kafka_topic='truck_position', \ value_format='delimited'); Message Stream created
44 Demo (IV) - Create Stream ksql> describe truck_position_s; Field Type ROWTIME BIGINT ROWKEY VARCHAR(STRING) TS VARCHAR(STRING) TRUCKID VARCHAR(STRING) DRIVERID BIGINT ROUTEID BIGINT ROUTENAME VARCHAR(STRING) EVENTTYPE VARCHAR(STRING) LATITUDE DOUBLE LONGITUDE DOUBLE CORRELATIONID VARCHAR(STRING)
45 Demo (IV) - Create Stream ksql> SELECT * FROM truck_position_s; "truck/13/position0! t07:28: Memphis to Little Rock Normal "truck/16/position0! t07:28: Joplin to Kansas City Route 2 Normal "truck/30/position0! t07:28: Des Moines to Chicago Route 2 Normal "truck/23/position0! t07:28: Peoria to Ceder Rapids Route 2 Normal "truck/12/position0! t07:28: Saint Louis to Memphis Normal "truck/14/position0! t07:28: Springfield to KC Via Columbia Normal
46 Demo (IV) - Create Stream ksql> SELECT * FROM truck_position_s WHERE eventtype!= 'Normal'; "truck/11/position0! t07:31: Saint Louis to Tulsa Route2 Lane Departure "truck/11/position0! t07:31: Saint Louis to Tulsa Route2 Unsafe tail distance "truck/10/position0! t07:31: Joplin to Kansas City Unsafe following distance "truck/11/position0! t07:31: Saint Louis to Tulsa Route2 Unsafe following distance
47 Demo (IV) - Create Stream ksql> CREATE STREAM dangerous_driving_s \ WITH (kafka_topic='dangerous_driving_s', \ value_format='json') \ AS SELECT * FROM truck_position_s \ WHERE eventtype != 'Normal'; Message Stream created and running ksql> select * from dangerous_driving_s; "truck/11/position0! t07:40: Des Moines to Chicago Route 2 Overspeed "truck/11/position0! t07:41: Des Moines to Chicago Route 2 Overspeed
48 Demo (V) Truck Driver 27, Mark Lochbihler, :19:00 jdbc-source trucking_ driver {"id":10,"name":"george Vetticaden","last_update": } Truck-1 join_truck_ position_driver truck_position _driver Truck-2 truck/nn/ position mqttsource truck_ position Truck :39: Wichita to Little Rock Route2 Normal detect_danger ous_driving dangerous_ driving console consumer
49 Demo (V) Create JDBC Connect through REST API #!/bin/bash curl -X "POST" " \ -H "Content-Type: application/json" \ -d $'{ "name": "jdbc-driver-source", "config": { "connector.class": "JdbcSourceConnector", "connection.url":"jdbc:postgresql://db/sample?user=sample&password=sample", "mode": "timestamp", "timestamp.column.name":"last_update", "table.whitelist":"driver", "validate.non.null":"false", "topic.prefix":"trucking_", "key.converter":"org.apache.kafka.connect.json.JsonConverter", "key.converter.schemas.enable": "false", "value.converter":"org.apache.kafka.connect.json.JsonConverter", "value.converter.schemas.enable": "false", "name": "jdbc-driver-source", "transforms":"createKey,extractInt", "transforms.createKey.type":"org.apache.kafka.connect.transforms.ValueToKey", "transforms.createKey.fields":"id", "transforms.extractInt.type":"org.apache.kafka.connect.transforms.ExtractField$Key", "transforms.extractInt.field":"id" } }'
50 Demo (V) Create JDBC Connect through REST API
51 Demo (V) - Create Table with Driver State ksql> CREATE TABLE driver_t \ (id BIGINT, \ name VARCHAR) \ WITH (kafka_topic='trucking_driver', \ value_format='json'); Message Table created
52 Demo (V) - Create Table with Driver State ksql> CREATE STREAM truck_position_and_driver_s \ WITH (kafka_topic='truck_position_and_driver_s', \ value_format='json') \ AS SELECT driverid, name, truckid, routeid,routename, eventtype \ FROM truck_position_s \ LEFT JOIN driver_t \ ON truck_position_s.driverid = driver_t.id; Message Stream created and running ksql> select * from truck_position_and_driver_s; "truck/11/position0! t07:40: Des Moines to Chicago Route 2 Overspeed "truck/11/position0! t07:41: Des Moines to Chicago Route 2 Overspeed
53 Demo (V) - Create Table with Driver State ksql> CREATE STREAM truck_position_and_driver_s \ WITH (kafka_topic='truck_position_and_driver_s', \ value_format='json') \ AS SELECT driverid, name, truckid, routeid,routename, eventtype \ FROM truck_position_s \ LEFT JOIN driver_t \ ON truck_position_s.driverid = driver_t.id; Message Stream created and running ksql> select * from truck_position_and_driver_s; Jamie Engesser Saint Louis to Memphis Normal Jamie Engesser Saint Louis to Memphis Normal Jamie Engesser Saint Louis to Memphis Overspeed
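The stream/table LEFT JOIN above can be sketched as a lookup per event: each position event is enriched with the driver name from the table, and events without a match keep a null name, just like SQL LEFT JOIN semantics. A conceptual stand-in, not the KSQL engine:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of a stream/table LEFT JOIN: enrich each position
 *  event with the driver name looked up in the table. */
public class StreamTableJoin {

    public static String leftJoin(String[] positionEvent, Map<String, String> driverTable) {
        String driverId = positionEvent[0];
        String eventType = positionEvent[1];
        String name = driverTable.get(driverId); // null when no match, like LEFT JOIN
        return driverId + "," + name + "," + eventType;
    }

    public static void main(String[] args) {
        Map<String, String> driverTable = new HashMap<>();
        driverTable.put("14", "Jamie Engesser");

        System.out.println(leftJoin(new String[]{"14", "Overspeed"}, driverTable));
        // 14,Jamie Engesser,Overspeed
        System.out.println(leftJoin(new String[]{"99", "Normal"}, driverTable));
        // 99,null,Normal
    }
}
```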
54 Kafka and "Big Data" / "Fast Data" Ecosystem
55 Kafka and the Big Data / Fast Data ecosystem Kafka integrates with many popular products / frameworks Apache Spark Streaming Apache Flink Apache Storm Apache Apex Apache NiFi StreamSets Oracle Stream Analytics Oracle Service Bus Oracle GoldenGate Oracle Event Hub Cloud Service Debezium CDC Additional Info:
56 Kafka in Enterprise Architecture
57 Traditional Big Data Architecture Enterprise Data Warehouse Billing & Ordering Hadoop Cluster Big Data Cluster SQL BI Tools CRM / Profile Marketing Campaigns File Import / SQL Import Distributed Filesystem Parallel Batch Processing NoSQL Search Search / Explore Machine Learning Graph Algorithms Natural Language Processing Online & Mobile Apps
58 Event Hub handle event stream data Enterprise Data Warehouse Billing & Ordering CRM / Profile Marketing Campaigns Hadoop Cluster Big Data Cluster SQL BI Tools Parallel Batch Processing Search Search / Explore Location Social Click stream Mobile Apps Weather Data Call Center Data Flow Event Hub Distributed Filesystem NoSQL Machine Learning Graph Algorithms Natural Language Processing Online & Mobile Apps Sensor Data
59 Event Hub taking Velocity into account Enterprise Data Warehouse Billing & Ordering File Import / SQL Import Hadoop Cluster Big Data Cluster CRM / Profile Marketing Campaigns Event Hub Distributed Filesystem Parallel Batch Processing Results Batch Analytics SQL Search BI Tools Search / Explore Location Mobile Apps Streaming Analytics Social Weather Data Stream Analytics NoSQL Online & Mobile Apps Click stream Call Center Sensor Data Reference / Models Dashboard
60 Event Hub Asynchronous Microservice Architecture Billing & Ordering File Import / SQL Import Hadoop Cluster Big Data Cluster Enterprise Data Warehouse CRM / Profile Marketing Campaigns Location Mobile Apps Event Hub Distributed Filesystem Container Parallel Batch Processing { } SQL Search BI Tools Search / Explore Social Click stream Weather Data Call Center Microservice RDBMS API NoSQL Online & Mobile Apps Sensor Data
61 Guido Schmutz Technology guidoschmutz.wordpress.com
More informationLenses 2.1 Enterprise Features PRODUCT DATA SHEET
Lenses 2.1 Enterprise Features PRODUCT DATA SHEET 1 OVERVIEW DataOps is the art of progressing from data to value in seconds. For us, its all about making data operations as easy and fast as using the
More informationIdentifying Performance Problems in a Multitenant Environment
Identifying Performance Problems in a Multitenant Environment Christian Antognini @ChrisAntognini antognini.ch/blog BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENEVA HAMBURG COPENHAGEN LAUSANNE
More informationApache Kafka a system optimized for writing. Bernhard Hopfenmüller. 23. Oktober 2018
Apache Kafka...... a system optimized for writing Bernhard Hopfenmüller 23. Oktober 2018 whoami Bernhard Hopfenmüller IT Consultant @ ATIX AG IRC: Fobhep github.com/fobhep whoarewe The Linux & Open Source
More informationSQT03 Big Data and Hadoop with Azure HDInsight Andrew Brust. Senior Director, Technical Product Marketing and Evangelism
Big Data and Hadoop with Azure HDInsight Andrew Brust Senior Director, Technical Product Marketing and Evangelism Datameer Level: Intermediate Meet Andrew Senior Director, Technical Product Marketing and
More informationOracle Data Integrator 12c: Integration and Administration
Oracle University Contact Us: +34916267792 Oracle Data Integrator 12c: Integration and Administration Duration: 5 Days What you will learn Oracle Data Integrator is a comprehensive data integration platform
More informationBig Data Analytics using Apache Hadoop and Spark with Scala
Big Data Analytics using Apache Hadoop and Spark with Scala Training Highlights : 80% of the training is with Practical Demo (On Custom Cloudera and Ubuntu Machines) 20% Theory Portion will be important
More informationGet Groovy with ODI Trivadis
BASEL 1 BERN BRUGG LAUSANNE ZUERICH DUESSELDORF FRANKFURT A.M. FREIBURG I.BR. HAMBURG MUNICH STUTTGART VIENNA AGENDA 1 What is Groovy? 2 Groovy in ODI 3 What I want to reach 4 Live Demo 5 Helpful documentation
More informationBIG DATA COURSE CONTENT
BIG DATA COURSE CONTENT [I] Get Started with Big Data Microsoft Professional Orientation: Big Data Duration: 12 hrs Course Content: Introduction Course Introduction Data Fundamentals Introduction to Data
More informationDistributed systems for stream processing
Distributed systems for stream processing Apache Kafka and Spark Structured Streaming Alena Hall Alena Hall Large-scale data processing Distributed Systems Functional Programming Data Science & Machine
More informationUsing the SDACK Architecture to Build a Big Data Product. Yu-hsin Yeh (Evans Ye) Apache Big Data NA 2016 Vancouver
Using the SDACK Architecture to Build a Big Data Product Yu-hsin Yeh (Evans Ye) Apache Big Data NA 2016 Vancouver Outline A Threat Analytic Big Data product The SDACK Architecture Akka Streams and data
More informationSwimming in the Data Lake. Presented by Warner Chaves Moderated by Sander Stad
Swimming in the Data Lake Presented by Warner Chaves Moderated by Sander Stad Thank You microsoft.com hortonworks.com aws.amazon.com red-gate.com Empower users with new insights through familiar tools
More informationWHITEPAPER. MemSQL Enterprise Feature List
WHITEPAPER MemSQL Enterprise Feature List 2017 MemSQL Enterprise Feature List DEPLOYMENT Provision and deploy MemSQL anywhere according to your desired cluster configuration. On-Premises: Maximize infrastructure
More informationData Replication With Oracle GoldenGate Looking Behind The Scenes Robert Bialek Principal Consultant Partner
Data Replication With Oracle GoldenGate Looking Behind The Scenes Robert Bialek Principal Consultant Partner BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENEVA HAMBURG COPENHAGEN LAUSANNE
More informationInnovatus Technologies
HADOOP 2.X BIGDATA ANALYTICS 1. Java Overview of Java Classes and Objects Garbage Collection and Modifiers Inheritance, Aggregation, Polymorphism Command line argument Abstract class and Interfaces String
More informationData 101 Which DB, When Joe Yong Sr. Program Manager Microsoft Corp.
17-18 March, 2018 Beijing Data 101 Which DB, When Joe Yong Sr. Program Manager Microsoft Corp. The world is changing AI increased by 300% in 2017 Data will grow to 44 ZB in 2020 Today, 80% of organizations
More informationEvolution of an Apache Spark Architecture for Processing Game Data
Evolution of an Apache Spark Architecture for Processing Game Data Nick Afshartous WB Analytics Platform May 17 th 2017 May 17 th, 2017 About Me nafshartous@wbgames.com WB Analytics Core Platform Lead
More informationOracle Database New Performance Features
Oracle Database 12.1.0.2 New Performance Features DOAG 2014, Nürnberg (DE) Christian Antognini BASEL BERN BRUGG LAUSANNE ZUERICH DUESSELDORF FRANKFURT A.M. FREIBURG I.BR. HAMBURG MUNICH STUTTGART VIENNA
More information1 Big Data Hadoop. 1. Introduction About this Course About Big Data Course Logistics Introductions
Big Data Hadoop Architect Online Training (Big Data Hadoop + Apache Spark & Scala+ MongoDB Developer And Administrator + Apache Cassandra + Impala Training + Apache Kafka + Apache Storm) 1 Big Data Hadoop
More informationLecture 21 11/27/2017 Next Lecture: Quiz review & project meetings Streaming & Apache Kafka
Lecture 21 11/27/2017 Next Lecture: Quiz review & project meetings Streaming & Apache Kafka What problem does Kafka solve? Provides a way to deliver updates about changes in state from one service to another
More informationHow Apache Hadoop Complements Existing BI Systems. Dr. Amr Awadallah Founder, CTO Cloudera,
How Apache Hadoop Complements Existing BI Systems Dr. Amr Awadallah Founder, CTO Cloudera, Inc. Twitter: @awadallah, @cloudera 2 The Problems with Current Data Systems BI Reports + Interactive Apps RDBMS
More informationDeploying Applications on DC/OS
Mesosphere Datacenter Operating System Deploying Applications on DC/OS Keith McClellan - Technical Lead, Federal Programs keith.mcclellan@mesosphere.com V6 THE FUTURE IS ALREADY HERE IT S JUST NOT EVENLY
More informationIndex. Raul Estrada and Isaac Ruiz 2016 R. Estrada and I. Ruiz, Big Data SMACK, DOI /
Index A ACID, 251 Actor model Akka installation, 44 Akka logos, 41 OOP vs. actors, 42 43 thread-based concurrency, 42 Agents server, 140, 251 Aggregation techniques materialized views, 216 probabilistic
More informationTransformation-free Data Pipelines by combining the Power of Apache Kafka and the Flexibility of the ESB's
Building Agile and Resilient Schema Transformations using Apache Kafka and ESB's Transformation-free Data Pipelines by combining the Power of Apache Kafka and the Flexibility of the ESB's Ricardo Ferreira
More informationBuilding Event Driven Architectures using OpenEdge CDC Richard Banville, Fellow, OpenEdge Development Dan Mitchell, Principal Sales Engineer
Building Event Driven Architectures using OpenEdge CDC Richard Banville, Fellow, OpenEdge Development Dan Mitchell, Principal Sales Engineer October 26, 2018 Agenda Change Data Capture (CDC) Overview Configuring
More informationOracle Database Failover Cluster with Grid Infrastructure 11g Release 2
Oracle Database Failover Cluster with Grid Infrastructure 11g Release 2 DOAG Conference 2011 Robert Bialek Principal Consultant Trivadis GmbH BASEL BERN LAUSANNE ZÜRICH DÜSSELDORF FRANKFURT A.M. FREIBURG
More informationLambda Architecture for Batch and Stream Processing. October 2018
Lambda Architecture for Batch and Stream Processing October 2018 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only.
More informationIntegrating Oracle Databases with NoSQL Databases for Linux on IBM LinuxONE and z System Servers
Oracle zsig Conference IBM LinuxONE and z System Servers Integrating Oracle Databases with NoSQL Databases for Linux on IBM LinuxONE and z System Servers Sam Amsavelu Oracle on z Architect IBM Washington
More informationHortonworks and The Internet of Things
Hortonworks and The Internet of Things Dr. Bernhard Walter Solutions Engineer About Hortonworks Customer Momentum ~700 customers (as of November 4, 2015) 152 customers added in Q3 2015 Publicly traded
More informationDie Wundertüte DBMS_STATS: Überraschungen in der Praxis
Die Wundertüte DBMS_STATS: Überraschungen in der Praxis, 14. Mai 2018 Dani Schnider, Trivadis AG @dani_schnider danischnider.wordpress.com BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENEVA
More informationWHY AND HOW TO LEVERAGE THE POWER AND SIMPLICITY OF SQL ON APACHE FLINK - FABIAN HUESKE, SOFTWARE ENGINEER
WHY AND HOW TO LEVERAGE THE POWER AND SIMPLICITY OF SQL ON APACHE FLINK - FABIAN HUESKE, SOFTWARE ENGINEER ABOUT ME Apache Flink PMC member & ASF member Contributing since day 1 at TU Berlin Focusing on
More informationData Architectures in Azure for Analytics & Big Data
Data Architectures in for Analytics & Big Data October 20, 2018 Melissa Coates Solution Architect, BlueGranite Microsoft Data Platform MVP Blog: www.sqlchick.com Twitter: @sqlchick Data Architecture A
More informationPitfalls & Surprises with DBMS_STATS: How to Solve Them
Pitfalls & Surprises with DBMS_STATS: How to Solve Them Dani Schnider, Trivadis AG @dani_schnider danischnider.wordpress.com BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENEVA HAMBURG COPENHAGEN
More informationExadata Database Machine Resource Management teile und herrsche!
Exadata Database Machine Resource Management teile und herrsche! DOAG Conference 2011 Konrad Häfeli Senior Technology Manager Trivadis AG BASEL BERN LAUSANNE ZÜRICH DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR.
More informationBig Data Hadoop Developer Course Content. Big Data Hadoop Developer - The Complete Course Course Duration: 45 Hours
Big Data Hadoop Developer Course Content Who is the target audience? Big Data Hadoop Developer - The Complete Course Course Duration: 45 Hours Complete beginners who want to learn Big Data Hadoop Professionals
More informationJava Lounge. Integration Solutions madeeasy ComparisonofJava Integration Frameworks. Mario Goller
Java Lounge Integration Solutions madeeasy ComparisonofJava Integration Frameworks Mario Goller 28.05.2013 BASEL BERN LAUSANNE ZÜRICH DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. HAMBURG MÜNCHEN STUTTGART
More informationMicrosoft Big Data and Hadoop
Microsoft Big Data and Hadoop Lara Rubbelke @sqlgal Cindy Gross @sqlcindy 2 The world of data is changing The 4Vs of Big Data http://nosql.mypopescu.com/post/9621746531/a-definition-of-big-data 3 Common
More informationAdvanced Data Processing Techniques for Distributed Applications and Systems
DST Summer 2018 Advanced Data Processing Techniques for Distributed Applications and Systems Hong-Linh Truong Faculty of Informatics, TU Wien hong-linh.truong@tuwien.ac.at www.infosys.tuwien.ac.at/staff/truong
More informationOracle Access Management
Oracle Access Management Needful things to survive Michael Mühlbeyer, Trivadis GmbH BASEL BERN BRUGG DÜSSELDORF FRANKFURT A.M. FREIBURG I.BR. GENF HAMBURG KOPENHAGEN LAUSANNE MÜNCHEN STUTTGART WIEN ZÜRICH
More informationThe Hadoop Ecosystem. EECS 4415 Big Data Systems. Tilemachos Pechlivanoglou
The Hadoop Ecosystem EECS 4415 Big Data Systems Tilemachos Pechlivanoglou tipech@eecs.yorku.ca A lot of tools designed to work with Hadoop 2 HDFS, MapReduce Hadoop Distributed File System Core Hadoop component
More informationSpatial Analytics Built for Big Data Platforms
Spatial Analytics Built for Big Platforms Roberto Infante Software Development Manager, Spatial and Graph 1 Copyright 2011, Oracle and/or its affiliates. All rights Global Digital Growth The Internet of
More information