Scott Oaks, Oracle; Sunil Raghavan, Intel; Daniel Verkamp, Intel. 03-Oct-2017, 3:45 p.m. - 4:30 p.m., Moscone West - Room 3020


Big Data Talk: Exploring New SSD Usage Models to Accelerate Cloud Performance
03-Oct-2017, 3:45-4:30 p.m., Moscone West, Room 3020
1. 10 min - Scott: Oracle Big Data solution
2. 15 min - Daniel: NVMe and NVMe-oF, SPDK
3. 15 min - Sunil: Case study, Apache Spark and TeraSort
4. 5 min - Q&A

Best Practices for Big Data in the Cloud
03-Oct-2017, 4:45-5:30 p.m., Moscone West, Room 3020
1. 10 min - Sandeep: Oracle Big Data solution
2. 15 min - Siva: FPGA enables new storage use cases
3. 15 min - Colin: Case study, Apache Spark, Big Data analytics
4. 5 min - Q&A

Oracle Safe Harbor Statement: The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Intel Notices & Disclaimers: Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at intel.com. No computer system can be absolutely secure. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. 2017 Intel Corporation. Intel, the Intel logo, and Intel Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as property of others.


Oracle Cloud Platform: Develop & Deploy, Integrate & Extend, Publish & Engage, Analyze & Predict, Secure & Manage. Innovate with a comprehensive, open, integrated and hybrid cloud platform that is highly scalable, secure and globally available. Copyright 2017, Oracle and/or its affiliates. All rights reserved.

Oracle Cloud Platform: Comprehensive, Open, Integrated, Hybrid. Services: Data Management, Analytics and Big Data, Application Development, Content & Experience, Enterprise Integration, Data Integration, Identity & Security, Systems Management. Delivered as Oracle Public Cloud (Oracle Data Center) or Oracle Cloud at Customer (Your Data Center), built on high-performance Oracle Cloud Infrastructure.

Oracle Cloud Platform Momentum: 14,000+ Oracle Cloud Platform customers; 3,000+ apps in the Oracle Cloud Marketplace; $1.4 billion FY17 Oracle Cloud Platform revenue (60% YoY growth); 10 PaaS categories where Oracle is a Leader, according to industry analysts.

Oracle Big Data as a Service: Work with a stack you are familiar with. Maximum portability. Maximum performance: I/O, OS, and network all tuned for Oracle Cloud.

Oracle Cloud Platform does the grunt work. Elastic and scalable: Big Data clusters are elastic on demand; storage is scaled independently; choose appropriate compute shapes for your workload. Managed: automated lifecycle management; service monitoring via dashboards or REST APIs.

Big Data Cloud Service Management


The Problem: Software is becoming the bottleneck

Media               I/O performance   Latency
HDD                 <500 IO/s         >2 ms
SATA NAND SSD       >25,000 IO/s      <100 µs
NVMe* NAND SSD      >400,000 IO/s     <100 µs
Intel Optane SSD    >500,000 IO/s     <10 µs

The Opportunity: Intel software unlocks new media's potential
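The slide's argument can be made concrete with a little arithmetic: a fixed per-I/O software cost that is negligible against HDD latencies comes to dominate once the device answers in microseconds. The sketch below is illustrative only; the 5 µs software overhead is an assumed round number, not a figure from the slide.

```python
# Illustrative sketch (not a benchmark): as device latency falls,
# a fixed per-I/O software overhead becomes the bottleneck.
SOFTWARE_OVERHEAD_US = 5.0  # assumed fixed software cost per I/O

# Rough device latencies from the slide, in microseconds.
devices = {
    "HDD": 2000.0,             # >2 ms
    "SATA NAND SSD": 100.0,    # <100 us
    "NVMe NAND SSD": 100.0,    # <100 us
    "Intel Optane SSD": 10.0,  # <10 us
}

for name, dev_us in devices.items():
    total = dev_us + SOFTWARE_OVERHEAD_US
    share = SOFTWARE_OVERHEAD_US / total * 100
    print(f"{name:17s} device={dev_us:7.1f} us  software share={share:4.1f}%")
```

With these assumed numbers, software is about 0.2% of an HDD round trip but a third of an Optane round trip, which is the opportunity the rest of the deck addresses.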

NVM Express - Key Takeaways
- Support for many independent queues, each with large queue depths: I/O scalability without locks for highly parallel systems
- PCIe-attached devices in several form factors (add-in card, M.2, U.2): an industry-standard bus with wide support and continued innovation
- Standard programming interface for all compliant devices: no vendor-specific drivers necessary
- Enterprise storage features: T10 DIF, dual-port controllers, reservations

NVMe over Fabrics - Key Takeaways
- Extends the benefits of the multi-queue NVMe protocol over the network: queues are mapped to network connections
- Allows larger-scale storage systems: a routable network protocol allows more flexible system architectures
- Supports multiple fabrics: mappings for RDMA (RoCE/iWARP) and Fibre Channel are defined today

Storage Performance Development Kit
- Intel platform storage reference software, optimized for Intel platform characteristics
- Open-source building blocks (BSD licensed)
- Minimizes average and tail latencies
- Scalable and efficient software ingredients: user-space, lockless, polled-mode components
- Up to millions of IOPS per core
- Designed for faster media, such as Intel Optane technology latencies

Benefits of using SPDK: SPDK delivers more performance from Intel CPUs, non-volatile media, and networking.
- Up to 10x more IOPS/core for NVMe-oF* vs. the Linux kernel
- Up to 8x more IOPS/core for NVMe vs. the Linux kernel
- Up to 50% better tail latency for RocksDB workloads
- Faster time to market and fewer resources than developing components from scratch
- Provides future-proofing as NVM technologies increase in performance

SPDK Host + Target vs. Kernel Host + Target. [Chart: avg. I/O round-trip time (µs), kernel vs. SPDK NVMe-oF stacks, Intel Optane P4800X, perf, QD=1; bars compare kernel target/initiator vs. SPDK target/initiator for 4K random read and 4K random write, broken down into local, fabric, and software components.] SPDK reduces Optane NVMe-oF read latency by 44% and write latency by 32%. System configuration: 2x Intel Xeon E5-2695 v4 (HT on, Intel SpeedStep enabled, Intel Turbo Boost Technology enabled), 64 GB DDR4 memory (8x 8 GB DDR4-2400 MT/s), Ubuntu 16.04.1, Linux kernel 4.10.1, 1x 25GbE Mellanox 2P CX-4 (CX-4 FW 14.16.1020, mlx5_core 3.0-1 driver), 1 ColdStream connected to socket 0. 4 KB random I/O, 1 initiator connected to NVMe-oF subsystems using 2P 25GbE Mellanox. Performance measured by Intel using the SPDK perf tool: 4 KB random I/O, queue depth 1 per NVMe-oF subsystem, numjobs=1, 300 s runtime, direct=1, norandommap=1, FIO-2.12, SPDK commit 42eade49.

Database Use Case - SPDK. NVMe devices offer low latency and high throughput, and databases can fully leverage these devices using SPDK: a completely user-space model for issuing database I/Os; asynchronous, lockless, and zero-copy; avoids kernel context switches and interrupt-handling overhead; supports quality of service natively. Ideal for OLTP databases that have extremely low latency requirements.
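The user-space, asynchronous, polled-mode model described above can be sketched in a few lines. This is a toy illustration, not SPDK itself: SPDK's real API is C (completions are reaped with calls like spdk_nvme_qpair_process_completions), and the PolledDevice class and its methods here are invented for the sketch.

```python
from collections import deque

class PolledDevice:
    """Toy model of a polled-mode block device: submissions are queued,
    and the application reaps completions itself instead of sleeping on
    an interrupt."""

    def __init__(self):
        self._inflight = deque()

    def submit_read(self, lba, callback):
        # Non-blocking submit: record the request and return immediately.
        self._inflight.append((lba, callback))

    def poll_completions(self, max_completions=32):
        # Called from the application's own thread: no context switch,
        # no interrupt handler, no locks shared with other threads.
        done = 0
        while self._inflight and done < max_completions:
            lba, callback = self._inflight.popleft()
            callback(lba, b"x" * 512)  # pretend the read finished
            done += 1
        return done

results = []
dev = PolledDevice()
for lba in (0, 8, 16):
    dev.submit_read(lba, lambda lba, data: results.append(lba))
dev.poll_completions()
print(results)  # -> [0, 8, 16]
```

The design point the slide makes is visible even in the toy: submission and completion never block or take a lock, so one core can drive the device at full rate.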

Database Use Case - NVMe over Fabrics
- Provides the shared storage capability required for Oracle RAC deployments: the ability to scale up from one node for high performance and availability
- Clusters can be created where each node contains local NVMe devices; NVMe over Fabrics lets nodes access remote devices efficiently
- High availability can be achieved by replicating writes across nodes
- Reads can be satisfied very quickly by directly accessing local devices; data can also be read from a remote NVMe device using zero-copy RDMA


Spark TeraSort Workload. [Stack diagram: Spark TeraSort runs on the Apache Spark Core data-processing engine (alongside Spark SQL, Spark Streaming, MLlib, GraphX, and Hadoop components such as MapReduce, Hive, Tez, and HBase), with YARN as the resource manager and HDFS as the distributed file system.] Spark TeraSort measures the time to sort one terabyte of randomly distributed data using Apache Spark.

How TeraSort works in Spark
1. Sample: sample the data and define the key range for each partition (e.g., sampling 1, 2, 5, 6, 8, 9 yields partitions par0, par1, par2 with boundaries 1, 5, 8).
2. Map: shuffle the data into partitions; each map task reads the input HDFS file and writes intermediate data sorted by partition (the shuffle phase).
3. Reduce: merge-sort each partition and write the output HDFS file.
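The three steps above can be modeled in a miniature, single-process Python sketch (the function names are my own, and the "partitions" are plain lists rather than data shuffled across machines). Because the sampled cut points put each partition in a disjoint, ascending key range, concatenating the independently sorted partitions yields a globally sorted result.

```python
import random

def sample_cut_points(data, num_partitions, sample_size=6):
    """Step 1: sample the data and pick partition boundaries."""
    sample = sorted(random.sample(data, min(sample_size, len(data))))
    step = max(1, len(sample) // num_partitions)
    return [sample[i] for i in range(step - 1, len(sample), step)][:num_partitions - 1]

def map_phase(data, cuts):
    """Step 2: shuffle each record into its range partition."""
    parts = [[] for _ in range(len(cuts) + 1)]
    for x in data:
        i = sum(1 for c in cuts if x > c)  # route by key range
        parts[i].append(x)
    return parts

def reduce_phase(parts):
    """Step 3: sort each partition; concatenation is globally sorted."""
    out = []
    for p in parts:
        out.extend(sorted(p))
    return out

data = [9, 2, 1, 8, 1, 5, 6, 5, 2]
cuts = sample_cut_points(data, 3)
print(reduce_phase(map_phase(data, cuts)))  # -> [1, 1, 2, 2, 5, 5, 6, 8, 9]
```

In real Spark the equivalent of `sample_cut_points` is a RangePartitioner, and the map output travels over disk and network during the shuffle, which is why the next slide's stage breakdown matters.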

Key activities at each stage
Stage 1 - Sample: Disk - read HDFS (uncompressed); CPU - sample.
Stage 2 - Map: Disk - read HDFS (uncompressed), write Spark tmp (compressed); CPU - partition, sort.
Stage 3 - Reduce: Disk - read Spark tmp (compressed), write HDFS (compressed, replicated); CPU - merge; Network - shuffle.
TeraSort tends to be a disk-intensive workload across all stages, using HDFS and Spark tmp heavily.

Testing TeraSort: three 4-server clusters, each behind a ToR switch.
- Broadwell + HDD: 2S Intel Xeon E5-2699 v4, 768 GB DRAM, 10 Gbps Ethernet; 7x hard drives (2 TB, 7200 RPM); CentOS 7.2 / KVM; Hortonworks Data Platform 2.4; 18 datanode VMs (16 vCPUs, 120 GB memory) and 2 namenode VMs (4 vCPUs, 30 GB memory); HDFS replication factor = 2.
- Broadwell + NVMe: same platform and software, with 2x Intel NAND NVMe SSDs in place of the hard drives; same VM layout; HDFS replication factor = 2.
- Skylake + NVMe: 2S Intel Xeon Platinum 8170, 768 GB DRAM, 10 Gbps Ethernet; 2x Intel 3D NAND NVMe SSDs; same software; 22 datanode VMs (16 vCPUs, 120 GB memory) and 2 namenode VMs (4 vCPUs, 30 GB memory); HDFS replication factor = 2.

Results. [Chart: time taken across TeraSort stages (minutes), Map and Reduce bars for Broadwell + HDD, Broadwell + NVMe, and Skylake + NVMe; x-axis 0-80 minutes.]
- Overall, the time taken by TeraSort dropped ~8x from the Broadwell + HDD cluster to the Skylake + NVMe cluster.
- The Reduce phase runs 10x faster on Skylake + NVMe compared to Broadwell + HDD.
- TeraSort spends most of its time (>50%) in the Reduce phase; about 80% of the time is spent there when using hard drives.
- Performance of the Map phase improves by ~3x with the Skylake + NVMe cluster.

Closer look at I/O performance. [Charts: average I/O read and write bandwidth (MB/s) by phase and cluster. Read, Map: 30.3 (Broadwell + HDD), 88.6 (Broadwell + NVMe), 89.5 (Skylake + NVMe); Read, Reduce: 3.8, 28.3, 31.1. Write, Reduce: 28.0 (Broadwell + HDD), 198.7 (Broadwell + NVMe), 253.0 (Skylake + NVMe); Write, Map: 81.2, 59.3, 36.1.]
- Average I/O write bandwidth in the Reduce phase increases 9x from Broadwell + HDD to Skylake + NVMe, and 7x from Broadwell + HDD to Broadwell + NVMe.
- Average I/O read bandwidth in the Map phase increases ~3x from Broadwell + HDD to the NVMe clusters.
Note: I/O is measured on the disks used for both HDFS storage and Spark temp storage.
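The multipliers quoted on this slide follow directly from the chart figures. The quick check below uses the bandwidth values as read off the chart residue (the assignment of values to clusters is partly inferred), so treat it as a sanity check on the slide's arithmetic, not new data.

```python
# Bandwidth figures (MB/s) as read off the slide's charts.
reduce_write = {"Broadwell + HDD": 28.0, "Broadwell + NVMe": 198.7, "Skylake + NVMe": 253.0}
map_read = {"Broadwell + HDD": 30.3, "Broadwell + NVMe": 88.6, "Skylake + NVMe": 89.5}

# "Reduce-phase write bandwidth increases 9x from Broadwell + HDD to Skylake + NVMe"
assert round(reduce_write["Skylake + NVMe"] / reduce_write["Broadwell + HDD"]) == 9
# "...and 7x from Broadwell + HDD to Broadwell + NVMe"
assert round(reduce_write["Broadwell + NVMe"] / reduce_write["Broadwell + HDD"]) == 7
# "Map-phase read bandwidth increases ~3x from Broadwell + HDD"
assert round(map_read["Skylake + NVMe"] / map_read["Broadwell + HDD"]) == 3
print("ratios consistent with the slide's claims")
```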

Closer look at I/O performance, continued. [Charts: average I/O request wait time (ms) and average I/O transactions per second by phase and cluster. Wait time: Map - 121 (Broadwell + HDD) vs. single digits on the NVMe clusters; Reduce - 545 (Broadwell + HDD) vs. 8 and 7 on the NVMe clusters. Transactions per second: Map - 210 (Broadwell + HDD) vs. roughly 5,000-5,800 on the NVMe clusters; Reduce - 638 (Broadwell + HDD) vs. roughly 10,500-13,300 on the NVMe clusters.]
- With Intel NVMe devices, the average I/O wait time for disk requests dropped to just a few milliseconds.
- Intel NVMe devices sustain significantly higher I/O transactions per second than traditional hard drives, which especially helped performance in the Reduce phase.
Note: I/O is measured on the disks used for both HDFS storage and Spark temp storage.
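Back-of-envelope ratios from the figures above make the Reduce-phase effect concrete. The mapping of chart values to clusters is partly inferred from the residue of the original figure, so these are illustrative magnitudes rather than exact results.

```python
# Reduce-phase figures as read off the slide's charts (mapping inferred).
reduce_wait_ms = {"Broadwell + HDD": 545, "NVMe clusters": 8}
reduce_tps = {"Broadwell + HDD": 638, "NVMe clusters (best)": 13321}

wait_improvement = reduce_wait_ms["Broadwell + HDD"] / reduce_wait_ms["NVMe clusters"]
tps_improvement = reduce_tps["NVMe clusters (best)"] / reduce_tps["Broadwell + HDD"]
print(f"Reduce-phase wait time improves ~{wait_improvement:.0f}x, TPS ~{tps_improvement:.0f}x")
```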

Summary
- 8x improvement in the performance of a Big Data workload, as measured by the TeraSort workload, using 4 Intel Xeon Platinum 8170 servers, each with 2 Intel 3D NAND SSDs (Intel SSD DC P4600, 1.6 TB), compared to 4 Intel Xeon E5-2699 v4 servers, each with 7 hard drives (2 TB, 7200 RPM).
- 4x faster using Intel NAND SSDs, compared to a cluster of the same configuration but using hard drives.
- More Hadoop data nodes: a bigger cluster on the same number of Xeon servers, enabled by the additional cores of the Intel Xeon Platinum 8170 processor.
[Chart: normalized TeraSort performance - 1x (Broadwell + HDD), 4x (Broadwell + NVMe), 8x (Skylake + NVMe).]


Oracle and Intel. Customers are adopting both Hadoop and cloud services at a rapid rate, and the Oracle and Intel cloud partnership has been extended to optimize Big Data. You can do it yourself, but Oracle has made Big Data simple: up to 3.5x faster performance on Oracle BDCSCE (Oracle Big Data Cloud Service Compute Edition) over DIY Hadoop with HDD. [Chart: 3.5x on TeraSort, 2x on an analytics workload.] Both clusters utilized 18 data node VMs (16 vCPUs, 120 GB RAM) and 2 name node VMs (4 vCPUs, 30 GB RAM), on 4 x 2S Intel Xeon E5-2699 v4, running either Oracle Big Data Cloud Service Compute Edition or DIY Hadoop (Hortonworks Data Platform 2.4 on CentOS 7.2 with KVM, with 7x 2.0 TB 7200 RPM drives per system).

Intel Xeon Scalable Processor: optimized cache architecture. Increased cloud application capacity, higher VM density, better isolation.
- 2.3x more cloud users (total cloud users, higher is better): 46,670 on E5-2699 v4 (Broadwell) vs. 107,445 on Platinum 8180 (Skylake).
- The Intel Xeon Scalable processor features a new, optimized cache hierarchy, increasing cloud performance for memory-bandwidth-intensive applications.
- A larger, non-inclusive mid-level cache (MLC/L2), a 4x increase vs. the prior generation, gives applications more dedicated resources and reduces the impact of other applications on the shared last-level cache (LLC/L3).
- A larger total combined cache (MLC + LLC) is available on Xeon Platinum 8180 vs. Xeon E5-2699 v4 (prior generation).
Configurations: Xeon E5-2699 v4 setup: 16 Oracle WebLogic applications and 4x Intel Memory Latency Checker v3.4 instances (1 HW thread per instance) on 2-socket Intel Xeon E5-2699 v4 processors (22 cores per socket). Xeon Platinum 8180 setup: 22 Oracle WebLogic applications and 4x Intel Memory Latency Checker v3.4 instances (1 HW thread per instance) on 2-socket Intel Xeon Platinum 8180 processors (28 cores per socket). Both used Oracle WebLogic Server 12.2.1.0.0, Oracle Java HotSpot 64-bit Server VM on Linux version 1.8.0_102, Red Hat Enterprise Linux Server release 7.3, and 768 GB of DDR4-2133 memory. An optimized cache hierarchy delivers improved application performance in the cloud.
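As a final sanity check, the "2.3x more cloud users" headline matches the two user counts quoted on the slide:

```python
# User counts from the slide's "Total Cloud Users" comparison.
broadwell_users = 46_670   # 2S Xeon E5-2699 v4
skylake_users = 107_445    # 2S Xeon Platinum 8180

ratio = skylake_users / broadwell_users
print(f"{ratio:.1f}x more cloud users")  # -> 2.3x more cloud users
```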