Oracle Database 12c: JMS Sharded Queues
For High-Performance, Scalable Advanced Queuing
ORACLE WHITE PAPER | MARCH 2015
Table of Contents
Introduction
Architecture
PERFORMANCE OF AQ-JMS QUEUES
PERFORMANCE OF AQ-JMS TOPICS
SYSTEM CONFIGURATION
WHEN NOT TO USE JMS SHARDED QUEUES
CONVERTING TO JMS SHARDED QUEUES
CONCLUSION
Introduction

JMS Sharded Queues are an enhancement to Oracle Advanced Queuing (AQ) in Oracle Database 12c Release 1. They are high-performance, scalable, database-backed queues and are the preferred JMS configuration for Oracle Advanced Queuing with:

» JMS queues that have enqueuers or dequeuers on multiple Oracle RAC instances
» High-throughput JMS queues
» Unsharded JMS queues that consume too many system resources (unsharded JMS queues have been a feature of the Oracle Database since Release 9.2)
» JMS topics with a large number of subscribers

AQ allows businesses to use Oracle Database 12c for their enterprise messaging needs without high-end, message-oriented middleware products. AQ is ideal for three-tier architectures that store application state in a database. Because AQ is integrated with the database, enqueues and dequeues can be committed atomically together with other database operations, without the overhead of two-phase commit or XA interfaces. Persistent AQ messages can be failed over to a standby database, and backed up or recovered using standard Oracle Database functionality.

Java Message Service 1.1 (JMS) is a popular standard API for accessing enterprise messaging systems from Java programs. Although AQ has supported a JMS interface since Oracle Database Release 9.2, JMS Sharded Queues provide significant improvements in performance, manageability, and scalability in Oracle Database 12c, especially for high-concurrency architectures such as Oracle Exadata Database Machines and Oracle RAC.

This technical white paper discusses the architecture and performance of JMS Sharded Queues and provides guidelines on when and how to use them. For an overview of Advanced Queuing in general, refer to Oracle Database 12c: Advanced Queuing.
Architecture

A sharded queue is a single logical queue that is transparently divided into multiple independent, physical queues through system partitioning. Figure 1 depicts a single logical queue: messages are typically enqueued into one side of the queue and dequeued from the other.

FIGURE 1. Diagram of a single logical queue.

If the FIFO (first-in, first-out) ordering of messages in a queue is maintained by an IOT (index-organized table) or a global index, the two ends of the queue can become hot spots in high-concurrency systems, resulting in excessive inter-instance block transfers on Oracle RAC. If the AQ IOT or global index experiences high contention for the same block due to JMS enqueues or dequeues, the sharded queue architecture is more appropriate. JMS Sharded Queues replace the global index of the queue with a partitioned table and an in-memory message cache to eliminate these performance issues.

Figure 2 illustrates the JMS Sharded Queue architecture. While logically presenting a single queue to the JMS developer, the queue is transparently divided into shards. One or more shards are allocated to each instance of a RAC cluster. Each enqueuing session is automatically assigned a shard, so that the shard maintains the enqueue-time ordering of the messages within each session. Dequeues have an affinity for locally enqueued messages, minimizing cross-instance activity.

Underneath the sharded queue abstraction, a system-partitioned table is used. Oracle automatically manages the creation, allocation, population, and truncation of partitions (labeled "Part" in Figure 2) within the sharded queue table. JMS Sharded Queues use bulk operations, such as partition truncation, when appropriate to optimize performance.

FIGURE 2. JMS Sharded Queue architecture. Instead of maintaining a global index or IOT, a JMS Sharded Queue uses a partitioned table and maintains an in-memory message cache.
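The session-to-shard mapping described above can be sketched as follows. This is a toy model with invented names and a round-robin policy; Oracle's actual shard assignment is internal to the database. The point it illustrates is that each enqueuing session is pinned to one shard owned by its local instance, so per-session enqueue order is preserved within a single shard without any global index.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of a sharded queue: shards are divided among RAC
// instances, each enqueuing session is pinned to one local shard, and each
// shard preserves the enqueue order of the sessions that feed it.
public class ShardedQueueModel {
    private final int shardsPerInstance;
    private final Map<Integer, List<String>> shardContents = new HashMap<>();
    private final Map<String, Integer> sessionToShard = new HashMap<>();
    private final Map<Integer, Integer> nextLocalShard = new HashMap<>();

    public ShardedQueueModel(int shardsPerInstance) {
        this.shardsPerInstance = shardsPerInstance;
    }

    // Assign a session to a shard on its own instance (round-robin here;
    // the real assignment policy is internal to Oracle).
    public int shardFor(String sessionId, int instanceId) {
        return sessionToShard.computeIfAbsent(sessionId, s -> {
            int next = nextLocalShard.merge(instanceId, 1, Integer::sum) - 1;
            return instanceId * shardsPerInstance + (next % shardsPerInstance);
        });
    }

    public void enqueue(String sessionId, int instanceId, String message) {
        int shard = shardFor(sessionId, instanceId);
        shardContents.computeIfAbsent(shard, k -> new ArrayList<>()).add(message);
    }

    // Messages of one session come back from its shard in enqueue order.
    public List<String> shard(int shardId) {
        return shardContents.getOrDefault(shardId, List.of());
    }
}
```

Because a session's messages land on one shard in FIFO order, dequeuers can prefer shards owned by their own instance, which is what keeps cross-instance block transfers low.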
Local or partial indexes may be defined on a sharded queue table. When the dequeuers are keeping up with the enqueuers, the in-memory message cache provides shard-specific ordering without requiring SQL to select from a shard's partition. The message cache optimizes performance and reduces the disk and CPU overhead of AQ-JMS enqueues and dequeues. The message cache is also used to store nonpersistent messages.

Factors such as queue backlogs, message size, and number of subscribers affect message throughput on all queuing systems. The architecture of Oracle's sharded queues is well suited to all of these dimensions. Because Oracle JMS Sharded Queues are database-backed, they can handle huge backlogs of large messages. Furthermore, to make better use of the SGA (System Global Area), the message cache does not cache the full payload of larger messages, but it still caches their metadata. Queue
metadata is maintained in memory to avoid disk operations, to reduce the amount of SQL used to maintain the queues, and to handle JMS nondurable subscribers. Creation of a JMS durable subscriber is a relatively low-overhead, DML-like operation. To support a large number of subscribers efficiently, bitmaps are used to track the subscribers to each message.

JMS Sharded Queues support a streaming interface to reduce the memory requirement when dealing with large messages. Large message payloads are divided into small chunks rather than sent or received as one large, contiguous array of bytes. You can use DBMS_AQADM procedures such as SET_MIN_STREAMS_POOL and SET_MAX_STREAMS_POOL to fine-tune the allocation of SGA for the message cache.

JMS Sharded Queues perform cross-instance communication but avoid simultaneous writes to the same block across RAC instances. Typically, dequeues occur on a shard that is local to a message's enqueuing instance, but in certain situations Oracle will efficiently forward messages across instances for dequeuing on another instance. For example, if a JMS Sharded Queue has a single enqueuing session on one Oracle RAC instance and a single dequeuing session on another instance, then messages are forwarded between the Oracle RAC instances. The forwarding is done asynchronously to the enqueuing transaction to improve performance. Dequeuers may get an ORA error if they are connected to an instance whose shards have no messages.

The optimizations described above allow JMS Sharded Queues to significantly improve the performance of queuing on a RAC database as well as on a single-instance database. Furthermore, since a JMS Sharded Queue is logically a single queue, there is no need to build your own multi-queue solution for instance affinity on RAC.

PERFORMANCE OF AQ-JMS QUEUES

A JMS Queue is a single-consumer queue typically used for point-to-point applications.
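The bitmap-based subscriber tracking and size-aware caching described above can be made concrete with a small sketch. The class and threshold below are invented for illustration and are not Oracle's internal structures; the sketch shows why one bit per subscriber, plus caching full payloads only for small messages, keeps per-message bookkeeping cheap even with many subscribers.

```java
import java.util.BitSet;

// Hypothetical cache entry for one message in a multi-consumer (topic) queue.
// A bit is set for each subscriber that still has to dequeue the message;
// the payload itself is cached only when it fits the SGA budget.
public class CachedMessage {
    static final int MAX_CACHED_PAYLOAD = 4096; // illustrative threshold only

    private final BitSet pending;       // one bit per subscriber
    private final byte[] cachedPayload; // null when only metadata is cached

    public CachedMessage(int subscriberCount, byte[] payload) {
        pending = new BitSet(subscriberCount);
        pending.set(0, subscriberCount);  // all subscribers start pending
        cachedPayload = payload.length <= MAX_CACHED_PAYLOAD ? payload : null;
    }

    public boolean payloadCached() { return cachedPayload != null; }

    // A subscriber dequeues: clear its bit. The message becomes eligible
    // for bulk cleanup (such as a partition truncate) once no bits remain.
    public boolean dequeue(int subscriberId) {
        pending.clear(subscriberId);
        return pending.isEmpty(); // true when every subscriber has the message
    }
}
```

Tracking N subscribers costs one bit each, so a topic with a large subscriber population adds almost no per-message overhead.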
The throughput and scalability of JMS Queues using AQ-JMS Sharded Queues demonstrate improved performance over unsharded queues, especially on a RAC database. To illustrate this improvement, we ran a simple benchmark in which the numbers of enqueuer threads and dequeuer threads were incrementally increased for a JMS Sharded Queue on an Oracle Exadata X4-2 Database Machine until performance tailed off. Each message was persistent and had a 512-byte payload. Each transaction contained ten enqueue or dequeue operations.

As seen in the two log-log graphs below, sharded queues scale linearly with no regression as more instances are added, up to 32 concurrent enqueuer and dequeuer processes. In this Oracle Database release, the bottleneck for sharded queues was a combination of transaction commit overhead and partition maintenance overhead. For a sharded queue, the dequeuers were able to keep up with the equivalent number of enqueuers, up to 76K dequeues per second.

In contrast, unsharded queues did not show a large increase in dequeue throughput as more threads or instances were added. An unsharded queue performs best on a single RAC node, with a throughput below 8K dequeues per second. Multiple RAC nodes experienced global index contention, as indicated by Automatic Database Diagnostic Monitor (ADDM) events such as buffer busy waits and global cache waits. Within a single node, CPU was the resource bottleneck for the unsharded queue. When possible, unsharded queues should leverage instance affinity on RAC and distribute the load across RAC nodes by using multiple unsharded queues.
Figure 3. Dequeue throughput of a JMS Queue with the same number of enqueuer and dequeuer threads, committing after every 10 operations (log-log scale; dequeues/second versus dequeuer threads, for unsharded and sharded queues on 1, 2, 4, and 8 nodes). As the number of enqueuer and dequeuer threads is increased, the dequeue throughput of a sharded queue outperforms an unsharded queue.

In the same test, enqueue throughput on the JMS Queue behaved similarly to the dequeue counterparts. A sharded queue scales well across RAC nodes. In contrast, an unsharded queue performs best on a single node, and dequeues did not keep up with the enqueue rate when the numbers of enqueue and dequeue threads for the unsharded queue were kept the same.

Figure 4. Enqueue throughput on the same test as in Figure 3 (log-log scale; enqueues/second versus enqueuer threads). The enqueue throughput of a sharded queue scales well across RAC nodes.

Another interesting result of the test is that sharded queues could dequeue over four times as many messages per CPU as the unsharded queue at higher loads. In addition, sharded queues spread the workload well between instances, as shown in Figure 5. Although not illustrated, the latency of dequeues in the sharded queue was typically less than a millisecond.
Figure 5. Dequeues per CPU on the same JMS Queue test (dequeues per CPU versus dequeuer threads). Sharded queues could support over four times the dequeue operations per CPU of an unsharded queue at higher loads.

In realistic application workloads, the commit overhead (indicated by the log file sync wait event in AWR reports) is typically not an issue, because AQ enqueues and dequeues are often part of transactions that are sufficiently complex. The Exadata X4-2 Database Machine can handle plenty of additional non-AQ load without reducing messaging throughput for a sharded queue. To emulate the removal of the commit overhead as a bottleneck, we next modified the tests to enqueue and dequeue messages in much larger commit batches. As a result, sharded queues performed even better, with a limit near 100K dequeues per second. Dequeue processes again kept up with enqueue processes. Without the logarithmic scales, the benefits of adding more RAC instances are more clearly illustrated. In this test, the bottleneck for a single sharded queue was the partition management of its underlying queue table.

Figure 6. Dequeue throughput with large batch sizes (dequeues/second versus dequeuer threads; same number of enqueuer and dequeuer threads, committing after every 1,000 operations, on 1, 2, 4, and 7 nodes). A sharded queue supports over 100K dequeues per second with large commit batch sizes.
PERFORMANCE OF AQ-JMS TOPICS

A JMS Topic is a multi-consumer queue typically used for publish-subscribe applications. In this variation of the simple benchmark, we demonstrate the throughput of an AQ-JMS Topic by configuring 10 subscribers for each 512-byte message. The database connections of the enqueuer and dequeuer threads, and their associated sessions, are spread evenly over multiple instances. Similarly, dequeuer threads are spread evenly across subscribers. For example, with 4 instances, 4 enqueuers, and 10 subscribers, one enqueuer is connected to each RAC node, and each node has 10 dequeuer threads, each serving a different subscriber.

With 10 subscribers for each enqueued message, sharded queues achieved approximately 100K dequeues per second. Again, the bottleneck was a combination of commit overhead and partition management overhead. Unsharded queue throughput did not improve significantly with more threads and did not scale well on RAC due to global index contention, as indicated by statistics such as buffer busy waits. Sharded queues used less than half the CPU consumed by unsharded queues.

Figure 7. Dequeue throughput with a JMS Topic (log scale; dequeues/second versus dequeuer threads; 10 subscribers per topic, 10 dequeuer threads for every enqueuer thread, 10 operations per commit). With 10 subscribers for each message, sharded queues achieved approximately 100K dequeues per second.

In Figure 8, the concurrent enqueue throughput for topics behaved similarly to the dequeue counterparts in Figure 7. Sharded queue performance did not degrade as more RAC nodes were added.
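The even spreading of enqueuers and dequeuers used in this topic benchmark can be sketched as a small placement helper. The class below is invented for illustration of the test layout only: one enqueuer per node, and on every node one dequeuer thread per subscriber.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the topic benchmark layout: one enqueuer per node, and on every
// node one dequeuer thread per subscriber, so each subscriber is serviced
// from every instance.
public class TopicBenchmarkLayout {

    // Each int[] is {node, subscriber} for one dequeuer thread.
    public static List<int[]> dequeuerPlacement(int nodes, int subscribers) {
        List<int[]> placement = new ArrayList<>();
        for (int node = 0; node < nodes; node++) {
            for (int sub = 0; sub < subscribers; sub++) {
                placement.add(new int[] {node, sub});
            }
        }
        return placement;
    }

    // One enqueuer connected to each RAC node.
    public static int enqueuerCount(int nodes) {
        return nodes;
    }
}
```

With 4 nodes and 10 subscribers this yields 4 enqueuers and 40 dequeuer threads, matching the example in the text.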
Figure 8. Enqueue throughput of a JMS Topic (log scale; enqueues/second versus enqueuer threads; 10 subscribers per topic, 10 dequeuer threads for every enqueuer thread, 10 operations per commit). Sharded queue enqueue performance did not degrade as more nodes were added.

With larger commit batch sizes, eliminating the commit overhead bottleneck that is not experienced in typical application environments, sharded queues performed better still, with approximately 150K combined enqueue and dequeue operations per second, or over half a billion operations an hour, on a single JMS Topic.

Figure 9. Combined throughput of a JMS Topic with large batch sizes (combined enqueues and dequeues per second versus combined enqueuer and dequeuer threads; 10 subscribers per topic, 10 dequeuer threads for every enqueuer thread, on 1, 2, 4, 7, and 8 nodes). Sharded queues supported approximately 150K combined enqueue and dequeue operations per second, or over half a billion operations an hour, on a single JMS Topic.

As with JMS Queues, the bottleneck for JMS Topics was largely the partition management of a single sharded queue, so system resources were not being stressed. As nodes were added, more resources were available for other processing and other queues. For example, the CPU utilization on each of the 8 nodes of the X4-2 was less than 10% for the sharded queue on a single JMS Topic.
Figure 10. Average CPU utilization at each RAC node versus the number of enqueuer and dequeuer threads (10 subscribers per topic, 10 dequeuer threads for every enqueuer thread, large commit batches, on 1, 2, 4, 7, and 8 nodes). The CPU utilization on each of the 8 nodes of the X4-2 was less than 10% for the sharded queue.

In this same JMS Topic test, the average response time for a dequeue operation stayed relatively flat for sharded queues, whereas the response time for unsharded queues increased with more RAC instances or with more dequeuers, as seen in Figure 11. With sharded queues, dequeuers kept up with enqueuers.

Figure 11. Average response time per dequeue in milliseconds versus dequeuer threads (10 subscribers per topic, 10 dequeuer threads for every enqueuer thread, large commit batches). The response time for unsharded queues increased with more instances or with more dequeuers.

SYSTEM CONFIGURATION

The performance tests above were run on an Oracle Exadata X4-2 Database Machine. The Exadata X4-2 has 8 compute nodes and 14 storage cells. Each node has two 2.70 GHz Xeon E v2 processors; each processor has 12 cores with 24 threads. The nodes each have 30 MB caches and 256 GB of memory. The Exadata cells have two 2.60 GHz Xeon E v2 processors; each processor has 6 cores with 12 threads. The cells each have 15 MB caches and 94 GB of memory.
The tests ran Oracle Database 12c Release 1. The configuration included up to eight RAC instances. JMS clients typically ran on one of the non-database compute nodes of the Exadata X4-2. STREAMS_POOL_SIZE was set to 40 GB, so the message cache was more than sufficient to hold the message backlog.

WHEN NOT TO USE JMS SHARDED QUEUES

JMS Sharded Queues provide many advantages over unsharded AQ queues for high-end users of JMS. However, unsharded queues are a mature, popular feature of the Oracle Database. If unsharded queues satisfy your requirements, there is no compelling need to migrate to sharded queues.

Furthermore, JMS Sharded Queues in Oracle Database Release 12.1 have several restrictions that do not apply to unsharded AQ queues. JMS Sharded Queues are aimed at the JMS standard and do not support PL/SQL interfaces for enqueue, dequeue, or notifications. Oracle-specific JMS extensions, such as array enqueue and dequeue, are not supported. JMS Sharded Queues must use the thin JDBC driver. Only unsharded queues support AQ features that are not part of the JMS standard, including transactional grouping, exception queues, the Messaging Gateway, propagation, non-JMS ordering, and recipient lists. In 12.1, JMS applications that depend heavily on dequeue by message ID should evaluate unsharded AQ queues.

CONVERTING TO JMS SHARDED QUEUES

Existing applications built on the JMS standard should not require changes to use JMS Sharded Queues beyond using the 12.1 JMS and JDBC drivers. You simply need to recreate the queue when it is empty. The first step in converting an unsharded JMS queue to a sharded JMS queue is to stop the unsharded queue for enqueue operations using the DBMS_AQADM.STOP_QUEUE procedure. Then drain the queue by dequeuing all the messages in the unsharded AQ queue. Next, stop the queue for dequeues, verify that the queue is empty, and drop the empty unsharded queue. Then set the STREAMS_POOL_SIZE parameter.
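The stop-drain-stop-drop sequence above can be sketched as a helper that produces the DBMS_AQADM calls in order. This is a sketch only: the queue name is hypothetical, the parameter names follow the documented DBMS_AQADM interface, the drain itself is performed by your JMS consumers between the two stops, and you should verify the queue is empty before dropping it.

```java
import java.util.List;

// Builds, in order, the statements for retiring an empty unsharded queue,
// following the stop-drain-stop-drop sequence in the text. Execute each
// block over JDBC (for example, with CallableStatement); drain the queue
// with your JMS consumers after step 1, and verify it is empty before
// step 3. Adapt privileges and error handling to your site.
public class UnshardedQueueRetirement {
    public static List<String> steps(String queueName) {
        return List.of(
            // 1. Stop enqueues only; remaining messages can still be dequeued.
            "BEGIN DBMS_AQADM.STOP_QUEUE(queue_name => '" + queueName
                + "', enqueue => TRUE, dequeue => FALSE); END;",
            // 2. After draining, stop dequeues as well.
            "BEGIN DBMS_AQADM.STOP_QUEUE(queue_name => '" + queueName
                + "', enqueue => TRUE, dequeue => TRUE); END;",
            // 3. Drop the empty unsharded queue.
            "BEGIN DBMS_AQADM.DROP_QUEUE(queue_name => '" + queueName
                + "'); END;");
    }
}
```

Generating the statements as an ordered list keeps the sequencing explicit and easy to review before anything is run against the database.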
Finally, use DBMS_AQADM.CREATE_SHARDED_QUEUE() to create a sharded queue of the same name, start the queue to enable enqueues and dequeues, and grant any required privileges on the queue.

Sharded Queues introduce new or modified monitoring views and tables. Monitoring scripts based on the views or tables for unsharded AQ queues will require changes.

CONCLUSION

An AQ-JMS sharded queue provides impressive performance that can exceed half a billion enqueues and dequeues per hour for a single JMS Topic while consuming less than 10% of the CPU on an Oracle Exadata X4-2 Database Machine. Sharded queues are database-backed, so enqueues and dequeues can be committed in the same transaction as other database operations without requiring two-phase commit. Sharded queues perform well on a single-instance database as well as on RAC. Sharded queues have backup and recovery functionality like other database tables and support high availability via Oracle Data Guard. JMS Sharded Queues have set a new bar for database-backed messaging systems.
Oracle Corporation, World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065, USA

CONNECT WITH US
blogs.oracle.com/oracle
facebook.com/oracle
twitter.com/oracle
oracle.com

Copyright © 2015, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

Oracle Database 12c: JMS Sharded Queues
March 2015
An Oracle White Paper September, 2011 Oracle Real User Experience Insight Server Requirements Executive Overview Oracle Enterprise Manager is Oracle s integrated enterprise IT management product line and
Performance Tuning the Oracle ZFS Storage Appliance for Microsoft Exchange 2013 ORACLE WHITE PAPER MARCH 2016 Table of Contents Introduction 2 Performance Tuning 3 Tuning the File System 4 Tuning the Oracle
ERP CLOUD Subledger Accounting ing Journals s Oracle Financials for EMEA Table of Contents 1. Purpose of the document 3 2. Assumptions and Prerequisites 3 3. Feature Specific Setup 4 Implementation 4 Assign
An Oracle White Paper September 2009 Methods for Upgrading to Oracle Database 11g Release 2 Introduction... 1 Database Upgrade Methods... 2 Database Upgrade Assistant (DBUA)... 2 Manual Upgrade... 3 Export
Technical White Paper August 2010 Recovering from Catastrophic Failures Using Data Replicator Software for Data Replication. Recovering from Catastrophic Failures Using Data Replicator Software for Data
An Oracle White Paper September 2010 Upgrade Methods for Upgrading to Oracle Database 11g Release 2 Introduction... 1 Database Upgrade Methods... 2 Database Upgrade Assistant (DBUA)... 2 Manual Upgrade...
Improve Data Integration with Changed Data Capture An Oracle Data Integrator Technical Brief Updated December 2006 Improve Data Integration with Changed Data Capture: An Oracle Data Integrator Technical
An Oracle Technical White Paper September 2012 Detecting and Resolving Oracle Solaris LUN Alignment Problems Overview... 1 LUN Alignment Challenges with Advanced Storage Devices... 2 Detecting and Resolving
An Oracle Technical White Paper December 2013 Configuring a Single Oracle ZFS Storage Appliance into an InfiniBand Fabric with Multiple Oracle Exadata Machines A configuration best practice guide for implementing
An Oracle White Paper October 2012 Release Notes - V220.127.116.11.0 Oracle Utilities Application Framework Introduction... 2 Disclaimer... 2 Deprecation of Functionality... 2 New or Changed Features... 4 Native
Oracle Service Cloud Agent Browser UI November 2017 What s New TABLE OF CONTENTS REVISION HISTORY... 3 OVERVIEW... 3 WORKSPACES... 3 Rowspan Workspace Designer Configuration Option... 3 Best Answer Incident
StorageTek ACSLS Manager Software Management of distributed tape libraries is both time-consuming and costly involving multiple libraries, multiple backup applications, multiple administrators, and poor
To succeed in today s competitive environment, you need real-time information. This requires a platform that can unite information from disparate systems across your enterprise without compromising availability