Oracle Coherence + Oracle Exalogic Elastic Cloud


An Oracle Business White Paper, September 2013

Elastic Cloud: Engineered Performance and Scalability for Next-Generation Enterprise Applications

Disclaimer

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Table of Contents

Executive Overview
What is Oracle Coherence?
What is Oracle Exalogic?
Exalogic Elastic Cloud Software (EECS)
Exalogic Elastic Cloud Hardware: Converged Fabric
Exalogic Exabus: Performance and Scalability
Coherence, InfiniBand, and Exabus: A Powerful Combination
Coherence Message Bus: Application Portability, Optimized Network Performance
Exalogic: Reliability, Availability, and Serviceability
Coherence on Exalogic: Availability Through Faster Cluster Rebalancing
Exalogic Elastic Cloud Hardware Compute Nodes
Coherence on Exalogic: Larger, More Efficient Data Grids
Why Run Coherence on Exalogic?
Case in Point: a Major Cable Provider
Option A: Commodity Hardware
Option B: Oracle Exalogic
Coherence on Exalogic: Improved Performance, Lower Cost

Executive Overview

Oracle Coherence is a distributed in-memory data grid that enables organizations to predictably scale mission-critical applications by providing fast and reliable access to frequently used data. As a component of Oracle Fusion Middleware and Oracle Cloud Application Foundation, Coherence provides data caching, data replication, real-time data processing, and distributed computing services to all types of business applications.

As a component of Oracle Fusion Middleware 12c, Coherence is tightly integrated with the Oracle middleware stack and Oracle Exalogic Elastic Cloud. Several optimizations have been incorporated into Coherence to leverage the Exalogic Exabus I/O fabric and the Exalogic Solid State Disks (SSDs). With these optimizations, Coherence on Exalogic facilitates extreme performance, high availability, and massive scalability for demanding business requirements.

Customers that use Coherence in custom-developed applications or in conjunction with other Oracle products such as Oracle WebLogic Server, Oracle Event Processing, Oracle Service Bus, and Oracle ATG Commerce will enjoy the benefits described in this white paper without having to make any changes to their application code.

What is Oracle Coherence?

Oracle Coherence is a highly scalable, fault-tolerant distributed cache engine. It uses many inexpensive computers to create a cluster that can be seamlessly expanded to add more memory and more processing power to business applications. By automatically and dynamically partitioning data in memory across multiple servers, Coherence enables continuous data availability and transactional integrity, even in the event of a server failure. As a shared infrastructure, Coherence combines data locality with virtually unlimited processing power to perform real-time data analysis, in-memory grid computations, and parallel transaction and event processing.

Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. It has no single points of failure, and it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails back services to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and soft-restart capabilities that enable servers to heal themselves following an outage.

Figure 1: Oracle Coherence In-Memory Data Grid.
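Concretely, the partitioned (distributed) caching service described above is usually enabled declaratively rather than in application code. The fragment below is an illustrative sketch of a Coherence cache configuration defining a partitioned cache with one synchronous backup per partition; the cache and scheme names are hypothetical, and element details should be verified against the Coherence documentation for your release:

```xml
<!-- Illustrative sketch (hypothetical names): a partitioned cache with one backup copy. -->
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-cache</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <!-- One backup of each partition, held on a different cluster member. -->
      <backup-count>1</backup-count>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

With backup-count set to 1, each partition's backup is held on a different server than its primary, which is what allows the cluster to tolerate a single server failure without losing data.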

What is Oracle Exalogic?

Oracle Exalogic Elastic Cloud is an Engineered System consisting of software, firmware, and hardware on which enterprises can deploy Oracle business applications, Oracle Fusion Middleware, or software products provided by Oracle partners. Oracle Exalogic is designed to meet the highest standards of reliability, availability, scalability, and performance for a wide range of time-sensitive, mission-critical workloads. It dramatically improves application performance without requiring code changes, and it reduces costs across the application lifecycle, from initial set-up to ongoing maintenance, as compared to conventional IT platforms assembled from disparate components sourced from multiple vendors.

As an open system, Oracle Exalogic has been assembled from Oracle's best-of-breed portfolio of standards-based products and technologies. Oracle has made unique optimizations and enhancements to Oracle Exalogic components to create an unparalleled platform for running Oracle Fusion Middleware and Oracle Applications. Exalogic customers can support application workloads with less hardware, less power, less heat, less data center space, and less software. Since each Exalogic system is pre-integrated by Oracle, the accompanying IT environment is easier to provision, manage, and maintain, accelerating time-to-value and lowering total cost of ownership (TCO) for new projects.

Exalogic is designed for high availability and zero-downtime maintenance. It can be scaled linearly from a single-rack configuration to an eight-rack configuration without any service disruption. Oracle Exalogic customers can configure these environments to support logical or physical server virtualization for improved manageability and ease of deployment, making Exalogic an ideal platform for Infrastructure-as-a-Service (IaaS) clouds.

Each Oracle Exalogic system consists of two major elements:

- Exalogic Hardware: Assembled by Oracle, the Exalogic hardware platform integrates storage and server resources using a high-performance I/O subsystem called Exabus, which is built on Oracle's Quad Data Rate (QDR) InfiniBand networking fabric.
- Exalogic Elastic Cloud Software (EECS): This essential package of Exalogic-specific software includes device drivers and firmware that are pre-integrated with the Oracle Linux and/or Oracle Solaris operating systems. The software ensures advanced performance with support for IaaS capabilities such as server and network virtualization, expandable storage, and cloud management utilities.

Exalogic Elastic Cloud Software (EECS)

Figure 2: Key Components of Exalogic Elastic Cloud Software.

Exalogic Elastic Cloud Hardware: Converged Fabric

Oracle Exalogic is based on an ultra-high-performance converged I/O backplane that has been specially developed using Oracle's quad data rate (QDR) InfiniBand hardware and software. Each Exalogic hardware configuration contains multiple QDR InfiniBand switches, which serve as gateways to a data center's Ethernet network and also connect all of the components inside the Exalogic system. This converged fabric directly accesses the shared storage environment. It also serves as the physical platform for the creation of virtual Ethernet networks that allow cloud applications to connect to any other application accessible over an Ethernet network. The Exalogic fabric offers extremely low latency (typically 10x faster than Ethernet), 40Gb/s throughput, full redundancy, integrated end-point security, and massive scalability encompassing thousands of virtual and physical servers.

Exalogic Exabus: Performance and Scalability

Many of today's highly distributed systems are constrained by the communication channels that connect the system components. Exalogic eliminates I/O bottlenecks with a networking hardware and software subsystem called Exabus. Exabus not only makes business applications run faster, it also makes them more efficient, even in extremely large deployments with thousands of processor cores and terabytes of memory. This communication fabric ties all of the system components together and provides the basis for Exalogic's reliability, availability, scalability, and performance.

Figure 3: Key Components of Exalogic Elastic Cloud Hardware and Software Stacks.

QDR InfiniBand was selected as the foundation for Exabus for several reasons:

- InfiniBand provides the greatest available bandwidth per physical port (40Gb/s) and the lowest latency (~1.07µsec) of any standard interconnect technology available today.[1] This architecture allows applications to reclaim compute capacity otherwise wasted waiting on slow communication links.
- InfiniBand provides reliable delivery, security, and quality of service at the physical layer of the network stack, and it natively supports kernel-bypass operations, eliminating much of the inefficiency of using the system CPU and main memory for I/O.
- InfiniBand supports upper-stack protocols such as IP (IPoIB) and Ethernet (EoIB), enabling existing applications to run without modification while still benefiting from enhanced performance.

Coherence, InfiniBand, and Exabus: A Powerful Combination

Applications that access state or data from an in-memory data grid perform only as well as the supporting network communications layer allows: a data grid's latency is determined largely by its ability to utilize the available network bandwidth.

[1] 3.7x the throughput and 1/5 the latency of 10Gb Ethernet, according to http://www.hpcadvisorycouncil.com/pdf/ib_and_10gige_in_hpc.pdf

Oracle Coherence leverages the Exabus technology exclusive to Oracle Exalogic Elastic Cloud to dramatically increase scalability and lower the latency of data grid operations. Exabus includes a low-level Java API for the network communications layer that has been optimized for Exalogic. InfiniBand Message Bus (IMB) is a native InfiniBand Exabus implementation that uses Remote Direct Memory Access (RDMA), zero-copy data transfer, and other technologies for low-latency communications. Combined with Exabus technology, Oracle Coherence provides massively scalable, low-latency data- and state-management capabilities for middleware applications.

Figure 4: Running Coherence using the InfiniBand Message Bus optimizations of Exabus reduces buffer copies, transport processing, and context switches.

Coherence Message Bus: Application Portability, Optimized Network Performance

To provide a high-performance implementation across a variety of network architectures, Oracle created the Coherence Message Bus, a transport-provider-based interface that allows applications to specify the transport, such as TCP or InfiniBand, suitable for the production environment.

Figure 5: Coherence services leverage the Message Bus API, which allows an optimal transport to be configured for the deployment platform.

Programmers have several options when porting applications to run on InfiniBand fabrics, most of which involve tradeoffs between portability and performance:

- The simplest choice is to use the standard Java sockets API to run TCP/IP over InfiniBand. While this is an easy solution, it does not take advantage of the low-level features of InfiniBand, resulting in higher latencies and sub-optimal bandwidth utilization.
- Another alternative is to use the Sockets Direct Protocol (SDP), which does take advantage of low-level InfiniBand features through the socket interface. Because SDP is supported in JDK 7 on Solaris and Linux, programmers can continue to use a standard socket-based interface and still realize many of the benefits of InfiniBand. The combination of portability and performance offered by SDP is an appropriate tradeoff for most applications and middleware products.
- The third choice is to leverage an optimized InfiniBand interface unconstrained by the socket interface. This is the most suitable option for Coherence, because a data grid relies directly on the network for scalability and performance. For this reason, Coherence introduced IMB. By using IMB to talk directly to the InfiniBand drivers rather than through a socket interface, Coherence further reduces the latency of data grid operations.

Coherence applications can enable the IMB transport via a simple configuration setting. IMB and other Exabus optimizations enable extremely fast access to state and application data for Exalogic/Coherence configurations. Because these messaging optimizations are shielded from application code, the performance gains apply both to custom-developed Coherence solutions and to integrated solutions that include other Oracle products such as Oracle WebLogic Server, Oracle ATG Commerce, Oracle Service Bus, and Oracle Event Processing.

Figure 6: Exabus optimizations provide up to 6x lower latency than commodity systems.

Customers running Coherence on Exalogic can run their data grids directly on physical hardware for the ultimate in performance and density, or run virtually for improved manageability.

Exalogic: Reliability, Availability, and Serviceability

Exalogic is designed for mission-critical, highly available applications.
Achieving five-nines (99.999%) availability requires a system that is fault tolerant and that supports zero-downtime maintenance and administration. Exalogic eliminates single points of failure through hardware redundancy and automated failover for every major component, including power, I/O, and cooling (fans). When running scale-out applications, every component in the system can be taken out of service, repaired or maintained, and returned to service with no disruption to applications or users. Exalogic supports block-level storage replication, backup-to-disk, and Automated Service Requests (ASR). ASR allows Oracle to monitor Exalogic X3-2 systems for actual or impending component failure and to proactively dispatch replacement parts and service personnel, minimizing service disruptions.

Coherence on Exalogic: Availability Through Faster Cluster Rebalancing

Coherence ensures continuous availability through a variety of features. Synchronous backups ensure that Coherence can tolerate the loss of a node, server, rack, or even an entire data center without losing data. GoldenGate HotCache keeps Coherence caches in sync with underlying databases: data changed in an underlying database table is pushed into Coherence. When a node is added to or removed from a cluster, Coherence automatically redistributes the load and re-establishes backups to guard against the further loss of cluster members. Partitions are reshuffled so that every cluster member is responsible for its share of primary and backup partitions. Clients requesting data during this rebalancing may have to wait while a partition is moved (all operations will complete successfully, but the client may notice a temporary spike in latency).

Figure 7: Availability is increased through quicker cluster rebalancing when a node joins or leaves the cluster.

Thanks to the Exalogic Exabus and the InfiniBand Message Bus, partition movement and cluster rebalancing occur up to 16x faster than on commodity hardware.

Exalogic Elastic Cloud Hardware Compute Nodes

Exalogic compute nodes are small, self-contained servers powered by Intel Xeon CPUs, high-speed DIMM memory, redundant InfiniBand Host Channel Adapters (HCAs), and redundant Solid State Disks (SSDs). Each compute node is capable of running a single instance of Oracle Linux, Oracle Solaris, or the Oracle VM Server hypervisor software. Compute nodes can be added to or removed from Exalogic configurations without any downtime.

Figure 8: Standard Exalogic Cloud X3-2 Hardware Configurations.

Coherence on Exalogic: Larger, More Efficient Data Grids

Using Coherence with Exalogic allows developers to build larger, more efficient data grids that simplify operations. Because of the throughput and latency available via the InfiniBand Message Bus, more data can be stored on a single compute node than on equivalently sized commodity servers, while still meeting latency and throughput targets. Optimizations such as off-heap byte buffers and scalable message buses allow developers to deploy fewer, larger Coherence JVMs rather than a large number of smaller JVMs, without causing excessive garbage collection (GC) latency jitter. It is much more efficient to manage a terabyte of data on an Exalogic system than it is to manage the same amount of data on commodity hardware, greatly reducing IT overhead.

Figure 9: Elastic Data allows Coherence to manage more data cost-effectively.

The Elastic Data feature of Oracle Coherence allows applications to utilize both memory and block-storage devices. Elastic Data uses Exalogic SSD storage or an SSD-backed SAN to support large data sets. Each Coherence cache server can manage several times the amount of data that would fit in RAM alone. System administrators can expand the capacity of Coherence clusters and simplify the overall IT configuration by reducing the number of required cache servers. Elastic Data moves, or journals, data from RAM to SSD when RAM is full. Journaling occurs asynchronously to improve response times and is transparent to the application. Compressed keys stored on-heap keep heap requirements to a minimum, while values are stored off-heap in journal form. No coding changes are needed to use Elastic Data.

In summary, Oracle Coherence Elastic Data optimizations provide more efficient memory- and disk-based storage and improve data-access performance, giving customers greater flexibility, expanded capacity, and a reduced management footprint.

Why Run Coherence on Exalogic?

- Increased throughput and lower response times: Coherence on Exalogic delivers up to 4x better throughput and 6x better response times compared to commodity hardware on a 10Gb Ethernet network.
- Elastic scalability: Coherence on Exalogic provides elastic scalability while meeting stringent performance SLAs, with the capacity to scale from an eighth-rack to a full-rack configuration. Use of Elastic Data and a large amount of RAM per compute node makes managing large data grids both easy and cost-effective.
- Increased availability and resiliency: Coherence cluster rebalancing is 16x faster on Exalogic systems, thanks to Exabus optimizations that increase availability when adding or removing capacity in a Coherence cluster.
- Simplified scalability and operations: Coherence on Exalogic makes it easier to scale business applications, since there is no need to reconfigure the network or to provision new servers and storage devices. Having an engineered system with pre-configured, fully optimized network and compute nodes simplifies deployment while providing predictable performance and elastic scalability.
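The Elastic Data behavior described above (RAM-first storage that journals overflow values to SSD) is likewise enabled through cache configuration rather than application code. Below is a hedged sketch with hypothetical scheme names; the journal-scheme elements should be verified against the Coherence documentation for your release:

```xml
<!-- Illustrative Elastic Data sketch: a partitioned cache whose backing map
     journals values off-heap, spilling from RAM to SSD-backed flash journal files. -->
<caching-schemes>
  <distributed-scheme>
    <scheme-name>example-elastic</scheme-name>
    <backup-count>1</backup-count>
    <backing-map-scheme>
      <flashjournal-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>
```

Because only the backing-map scheme changes, application code that reads and writes the cache is unaffected, which matches the "no coding changes" claim above.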
Case in Point: a Major Cable Provider

Consider a cable provider that needs to scale its business to provide fast access to user data, including:

- Authentication and authorization of new services
- Account and balance information
- History data used to make targeted offers and deliver personalized content to customers

This cable provider has several requirements:

- Elasticity: Although the system must initially support 1TB of data, it must be able to accommodate growth as new services are added, new customer subscriptions accelerate, and acquisitions are folded into the system.
- Real-Time Response: Recommendations for new services and content must be available as quickly as possible to capture new revenue. Customer frustration grows with any glitch or delay in website performance.
- Availability: As a corollary to the prior requirement, data must always be available as capacity is added or if any component of the system experiences a scheduled or unscheduled outage.
- Transaction Rates: More customers are expected to interact with the system as new services are added. Thus the system must have headroom to support revenue growth per customer, which translates into more transactions per customer.

This cable provider is considering two potential deployment platforms: Exalogic, or commodity hardware connected via a standard 10Gb Ethernet network. The Exalogic compute nodes and the commodity servers are configured similarly with 256GB of RAM. The Exalogic compute node is configured with SSDs; the commodity server has an option to add SSDs. Let's evaluate the pros and cons of each deployment option.

Option A: Commodity Hardware

A total of 3TB is needed for this data grid: 1TB for primary data, 1TB for backup (to provide high availability in the event of a Coherence node or physical server failure), and 1TB of working space in the JVMs to avoid service availability disruptions due to excessive GCs. At 256GB per server, this implies a minimum of 12 servers. Will twelve servers be enough to meet the above requirements?

- Elasticity: This system may be elastic, but it will require ongoing administration. In order to add capacity, new systems and switches must be provisioned, and rewiring may be necessary.
- Real-Time Response: The cable provider expects to be able to meet customer demand with its 10Gb network, although network congestion may impact response times during the evening hours. The cable provider may need to add servers to meet these peak loads.
- Availability: When using the full 256GB per system, losing a system or adding a new one implies a period of several minutes during which data is available, but at a lower SLA. While not devastating, the company sees the value in reducing these service-degradation periods.
- Transaction Rates: This system has no room for growth as customer transaction rates increase. More servers will be needed to accommodate additional transactions, incurring all the provisioning and rewiring costs mentioned above.
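The baseline Option A arithmetic can be checked with a short sketch (all figures are taken directly from the text):

```python
import math

# Capacity targets from the text: 1 TB primary + 1 TB backup + 1 TB JVM working space.
required_gb = (1 + 1 + 1) * 1024
ram_per_server_gb = 256  # commodity server RAM, per the text

min_servers = math.ceil(required_gb / ram_per_server_gb)
print(min_servers)  # 12, matching the minimum server count in the text
```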

After considering these factors, the cable company decides to deploy sixteen servers rather than twelve to support peak transaction volume, allow room for growth, and lessen the impact of server loss.

Option B: Oracle Exalogic

The Oracle Exalogic deployment starts with the same basic set of assumptions. The cable provider determines it will need twelve compute nodes for this deployment, and then looks at the basic set of requirements:

- Elasticity: Though the RAM on the compute nodes is fully utilized, adding capacity to this system simply involves adding another compute node. The Exalogic network fabric has plenty of room for growth, and no rewiring or reconfiguration is required.
- Real-Time Response: With several times the capacity of 10Gb Ethernet, InfiniBand has more than enough headroom to support peak transaction rates. Thanks to Exabus optimizations, response times are six times faster than on the commodity hardware. The additional bandwidth provided by InfiniBand also increases density, because Elastic Data can move additional customer data to local SSD. The provider conservatively considers adding 64GB more data to each compute node, to be stored on SSD, for a total of 320GB per compute node.
- Availability: Although Coherence on Exalogic is now managing 320GB per server, this system provides higher availability than the commodity deployment. Coherence can quickly rebalance the cluster across the InfiniBand fabric as nodes are added to or removed from the cluster. Faster rebalancing more than makes up for the added data now being managed per compute node.
- Transaction Rates: Because of the lower latency of Coherence operations on Exalogic, as well as the increased bandwidth, each Coherence node can tolerate increased transaction rates while delivering better response times than the commodity alternative.
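The Option B node count can be sketched the same way. The 64GB of additional SSD-resident data per node comes from the text; note that simple ceiling arithmetic yields ten nodes, and the exact count in practice depends on how much JVM working space must still be reserved in RAM once values are journaled off-heap:

```python
import math

required_gb = 3 * 1024   # same 3 TB total as in Option A
per_node_gb = 256 + 64   # 256 GB of RAM plus 64 GB of Elastic Data on SSD

nodes = math.ceil(required_gb / per_node_gb)
print(nodes)  # 10 compute nodes, versus the sixteen commodity servers of Option A
```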
Because of these inherent capabilities of Oracle Exalogic, this 3TB deployment, with 320GB per compute node, can be achieved with roughly ten servers, as compared to at least sixteen with the commodity solution. Having fewer Coherence servers to manage simplifies operations, since there are fewer components that can fail, which reinforces the availability benefits.

Coherence on Exalogic: Improved Performance, Lower Cost

Running Oracle Coherence on Oracle Exalogic systems allows applications to take advantage of Exabus optimizations to increase throughput and lower response times. When combined with Elastic Data and Exalogic's tremendous memory capacity, business applications can manage more data with less hardware than on commodity systems, while meeting strict service-level agreements.

September 2013
Author: Craig Blitz
Contributing Authors: David Baum, Jens Eckels

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2013, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0913