Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2


An Oracle White Paper, February 2010

Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2

Server hardware sponsored by

Copyright 2010 NS Solutions and Oracle. All Rights Reserved.

Introduction

NS Solutions Corporation (NSSOL) has been a strong partner in businesses built on Oracle products ever since the strategic partnership between Oracle Corporation and Nippon Steel Corporation (the parent company of NSSOL) was concluded in 1991. As part of this continued strategic collaboration, in November 2006 Oracle Japan established a collaborative framework with NSSOL and other grid-strategy partners and opened the Oracle GRID Center (1) with the goal of building next-generation grid-based business solutions that optimize corporate system platforms. Lending its full support to the establishment of the Oracle GRID Center, NSSOL has been active in joint technical validation experiments with Oracle Japan based on its servers and storage products. This white paper has been prepared with technological, hardware, and software support from Dell Japan Inc., Intel Corporation, and Cisco Systems, Inc., all of which endorse the goals of the Oracle GRID Center. We are grateful for the strong support provided by the participating companies and engineers.

(1) The Oracle GRID Center publishes many technical reports: http://www.oracle.co.jp/solutions/grid_center/index.html

Contents

Executive Overview
In-Memory Parallel Query
Oracle Real Application Clusters
Verification Environment
In-Memory PQ Execution Using Server Pools
Summary

Executive Overview

In-Memory Parallel Execution (In-Memory PQ) is introduced in Oracle Database 11g Release 2 (Oracle Database 11g R2) and enhances its parallel execution features. In areas such as daily or monthly batch processing and large data warehouses (DWH) for business intelligence, where large volumes of data must be processed at high speed, conventional parallel execution of SQL gains performance by maximizing the CPU resources of the servers and the disks of the storage systems. In today's rapidly developing and changing business environment, the ability to analyze a company's large data assets faster and more effectively is a key factor in making timely business judgments and improvements. With the remarkable advances in CPU performance and memory capacity, even a single server today has an enormous capacity for data processing. On the other hand, as the capacity of a single disk has grown, it has become common to design storage systems with a relatively small number of disks rather than with many disks arranged to improve read/write performance, and the resulting limit on storage performance can become a bottleneck for the entire system when executing SQL. In order to leverage all hardware resources effectively, Oracle Database 11g R2 has extended its features to simplify and accelerate SQL execution. In-Memory PQ is one of these new features: it loads a large amount of data into the database cache in the physical memory of the server(s) and processes it using multiple CPU resources, which makes it well suited to addressing the business challenge described above. This paper introduces the performance improvement achieved with In-Memory PQ in Oracle Database 11g R2, based on the results of the technical validation experiments that Oracle Japan and NSSOL jointly carried out at the Oracle GRID Center.

In-Memory Parallel Query

Traditional parallel query (PQ) execution uses Direct Path Read to load data, bypassing the database buffer cache (buffer cache) and reading directly from disk. Reading data from memory naturally has lower latency than reading it from disk. In practice, however, it is very difficult to cache a large amount of data in the buffer cache, because provisioning a sufficient number of high-capacity memory modules is expensive and a single server is often not capable of hosting that much physical memory. If a large amount of data is loaded through the buffer cache in the normal fashion, the cache provides little benefit, because each block is purged one after another by the blocks loaded after it. That is why conventional parallel execution in Oracle Database reads data directly from disk. With the latest hardware, however, even mid-range servers can host enough physical memory to hold a large amount of data in the buffer cache, and the computing environment is now ready to take advantage of In-Memory Parallel Execution in Oracle Database 11g R2. For example, the Dell PowerEdge R710, which came onto the market in September 2009 with the Intel Xeon Processor E5540 (2.53 GHz), supports up to 72 GB of memory per CPU socket.

Figure 1: Direct Path Read and In-Memory PQ

In-Memory PQ uses the data of database segments (tables, indexes, and so on) cached in the buffer cache, and thereby offers a high-speed SQL execution environment that eliminates the storage performance issue. If the target data is not yet cached, the response time is slower than with the direct path read method, because there is an overhead for loading the data from disk into the buffer cache. If a parallel query touches more data than the buffer cache can hold, this is detected automatically while the SQL execution plan is generated and direct path read is used instead. In-Memory PQ is enabled by setting the initialization parameter PARALLEL_DEGREE_POLICY to AUTO (the default is MANUAL), without any change to the application. In an Oracle Real Application Clusters (Oracle RAC) environment, the buffer caches of the individual database instances are aggregated into one virtual database buffer cache, and the data (database segments such as tables and indexes) is distributed among them, so a larger amount of data can be cached than with a single database instance simply by adding nodes to the Oracle RAC database and thereby expanding its buffer cache.
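As noted above, In-Memory PQ is controlled by the PARALLEL_DEGREE_POLICY initialization parameter. The commands below are a minimal sketch of enabling it for all instances from SQL*Plus; the SCOPE and SID clauses are the standard ALTER SYSTEM options rather than settings taken from this verification.

$ sqlplus / as sysdba
SQL> -- AUTO enables automatic degree of parallelism, parallel statement
SQL> -- queuing and In-Memory Parallel Execution for every instance
SQL> ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE=BOTH SID='*';
SQL> -- revert to the default behavior if required
SQL> ALTER SYSTEM SET parallel_degree_policy = MANUAL SCOPE=BOTH SID='*';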

Figure 2: Oracle RAC and In-Memory PQ

Oracle Real Application Clusters

Oracle Real Application Clusters is Oracle's own technology for binding multiple servers and storage systems together so that they operate virtually as a single system, offering high scalability, high availability, and easy administration, and it has kept evolving through every release since Oracle9i Database R1. Oracle RAC 11g R2 is used together with Oracle Grid Infrastructure. Oracle Grid Infrastructure refers to the combination of Oracle Clusterware, which administers the cluster configuration, and Oracle Automatic Storage Management, which manages storage. It offers wider-ranging and more advanced optimization of hardware resources than the previous release. Oracle Grid Infrastructure provides a virtualization technology called server pools, which removes the fixed binding between RAC databases and physical servers and enables resource optimization across the entire set of servers.

Figure 3: An example of server resource optimization using server pool technology

In Figure 3, the number of servers used by each Oracle RAC database is tied to the number of servers assigned to its server pool. Server pool configuration can be changed very flexibly, either with a command line tool (srvctl) or from a web browser (the Oracle Enterprise Manager console), making it easy to move servers between pools. When shrinking a server pool, the necessary number of database instances must be stopped beforehand, but the change can be made without stopping the other database instances. Using this server pool technology in Oracle RAC 11g R2, we can temporarily allocate the required server resources and increase the size of the buffer cache so that all the data can be loaded into it and In-Memory PQ can be executed. Figure 3 shows an example scenario: two server pools are defined in Oracle Grid Infrastructure. Srvpool1 is assigned to CustomerDB, which serves online transactions, and srvpool2 to AccountDB. At night, when AccountDB starts its batch processing but does not have enough server resources to execute In-Memory PQ, some nodes of srvpool1 are temporarily relocated to srvpool2 until the batch processing completes. A sketch of how such policy-managed server pools could be registered is shown below.
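The following srvctl commands are an illustrative sketch, not taken from the verification itself, of how the two policy-managed server pools and their databases could be registered in Oracle RAC 11g R2; the pool sizes, importance values, Oracle home path, and the database names customerdb and accountdb are hypothetical and follow only the Figure 3 scenario.

$ ### Define the two server pools (minimum/maximum sizes and importance are illustrative)
$ srvctl add srvpool -g srvpool1 -l 2 -u 2 -i 10
$ srvctl add srvpool -g srvpool2 -l 2 -u 2 -i 5
$ ### Register each policy-managed database against its pool
$ srvctl add database -d customerdb -o /u01/app/oracle/product/11.2.0/dbhome_1 -g srvpool1
$ srvctl add database -d accountdb -o /u01/app/oracle/product/11.2.0/dbhome_1 -g srvpool2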

Verification Environment

Described below is the verification environment established at the Oracle GRID Center for the evaluations discussed in this document. Oracle Database 11g R2 was installed on PowerEdge R710 servers with the Intel Xeon processor E5540 (2.53 GHz), and the database was created on an EMC CX4-240 connected over four 4 Gbps Fibre Channel (FC) links.

Database Server: Dell PowerEdge R710 x 4
  CPU: Intel Xeon processor E5540 (2.53 GHz) [4 cores] x 2
  Memory: 36 GB
  OS: Oracle Enterprise Linux 5 Update 3 x86-64
  Oracle Database: Oracle Database 11g Enterprise Edition Release 11.2.0.1

Storage: EMC CX4-240
  Controllers: 2 (Intel Xeon processor x 1 [2 cores] per controller)
  Physical memory: 8 GB (4 GB per controller)
  Max. cache memory: 2.5 GB (1.26 GB per controller)
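Whether In-Memory PQ can be used depends on the target data fitting into the aggregate buffer cache of the RAC database, so it is useful to check the cache size currently allocated on each instance. The query below is a minimal sketch of such a check using the standard GV$SGAINFO view; it is not a script from the original verification.

$ sqlplus / as sysdba
SQL> -- Buffer cache currently allocated on every instance; the sum across
SQL> -- instances approximates the cache available to In-Memory PQ in RAC
SQL> SELECT inst_id, ROUND(bytes/1024/1024) AS buffer_cache_mb
       FROM gv$sgainfo
      WHERE name = 'Buffer Cache Size';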

In-Memory PQ Execution Using Server Pools

Assume the system environment shown in Figure 4, in which two nodes are assigned to srvpool1 for online transactions during the daytime. The buffer cache of those two nodes is not large enough to use In-Memory PQ against a large amount of data when batch processing is executed at night. We therefore verify the scenario in which one node is temporarily added to srvpool1, increasing the pool size to three nodes for the duration of the batch processing, and the batch processing is executed with In-Memory PQ. First, we create an integrated database environment in which two Oracle RAC databases (two nodes each) are built on a 4-node Grid Infrastructure environment.

Figure 4: Executing In-Memory PQ by changing the server pool configuration

To change the configuration of each server pool, you can use the srvctl command or Oracle Enterprise Manager. The following is an example using srvctl.

$ ### Reducing the number of nodes in srvpool2 (2 -> 1)
$ srvctl stop instance -d dbname -n nodename
$ srvctl modify srvpool -g srvpool2 -u 1
$ srvctl status srvpool -g srvpool2
Server Pool Name: srvpool2
Active Servers: 1
$ ### Expanding the number of nodes in srvpool1 (2 -> 3)
$ srvctl modify srvpool -g srvpool1 -u 3
$ srvctl status srvpool -g srvpool1

When the srvctl stop instance command is issued, the instance is shut down in immediate mode by default. If you want to guarantee that in-flight transactions complete, you can use transactional mode by adding the "-o transactional" option.
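After the reconfiguration, the new pool size and the instances of the batch database can be confirmed with srvctl; this is a small illustrative addition, and 'dbname' remains a placeholder for the database unique name as in the commands above.

$ ### Show the configured size of srvpool1 and the running instances of the database
$ srvctl config srvpool -g srvpool1
$ srvctl status database -d dbname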

When you use Oracle Enterprise Manager to change the server pool configuration, do the following: After logging in to Oracle Enterprise Manager, click the "Cluster" tab at the top right. Choose "Manage Server Pools" under the "Administration" menu. Select the server pool to shrink and click the "Edit" button.

Select "Specify Maximum Size" and set the number of size after changing(in this case set '1'). Then press "Submit" button. Select the server pool to expand and press "edit" button Select "Specify Maximum Size" and set the number of size after changing(in this case set '3'). Then press "Submit" button. 10

Figure 5 shows the relative response times of table full scan queries executed with Direct Path Read PQ on 2 nodes (left column) and with In-Memory PQ on 3 nodes (right column).

Figure 5: Effect of In-Memory PQ

We confirmed that query performance becomes 10 times faster than with conventional Direct Path Read PQ by increasing the number of nodes in server pool srvpool1 and thereby enabling In-Memory PQ. We also confirmed that In-Memory PQ is executed once the total buffer cache of the target Oracle RAC database has been increased and every database segment needed by the query is cached across the buffer caches of the individual database instances. Because In-Memory PQ assumes that all the target data is already cached in the buffer cache, the result above was obtained after reconfiguring the server pool (from 2 nodes to 3 nodes) and loading all the data into the buffer cache. If the query is executed before the target data has been loaded into the buffer cache, the response time is slower than with Direct Path Read because of the overhead of caching the data.
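As the paragraph above notes, the benefit appears only once the target segments are fully cached across the instances. The query below is a minimal sketch of how this can be checked from SQL*Plus; the owner TESTUSER and the table name SALES_FACT are hypothetical placeholders standing in for the table used in the verification.

$ sqlplus / as sysdba
SQL> -- Count the cached blocks of the target table on each instance and
SQL> -- compare the total with the number of blocks in the segment
SQL> SELECT bh.inst_id, COUNT(*) AS cached_blocks
       FROM gv$bh bh
      WHERE bh.status <> 'free'
        AND bh.objd = (SELECT data_object_id
                         FROM dba_objects
                        WHERE owner = 'TESTUSER'
                          AND object_name = 'SALES_FACT'
                          AND object_type = 'TABLE')
      GROUP BY bh.inst_id
      ORDER BY bh.inst_id;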

Summary

Through this verification test, we demonstrated effective resource utilization by In-Memory Parallel Query in an Oracle Real Application Clusters environment. The combination of server pool technology and the aggregation of multiple database systems with Oracle RAC makes it easy to construct large RAC systems (8 nodes, 16 nodes, and more). Server pool technology makes it easy to expand or shrink the scale of individual databases within the cluster, enabling resources to be allocated according to the traffic and resource requirements of each database. It therefore becomes possible to build systems with fewer server resources than with the conventional approach, in which the number of nodes for each database was sized for its individual peak load. With server pool technology, a database can also be expanded temporarily to increase the size of the buffer cache of the Oracle RAC database. This makes it possible to use In-Memory PQ against large data segments, which would normally require more permanent resource allocation. By using In-Memory PQ instead of Direct Path Read, we can maximize CPU utilization and minimize disk I/O on the storage, which is often the performance bottleneck of parallel queries over a large amount of data. Because In-Memory PQ works on data cached in the buffer cache, it is particularly suitable for processing the same segments repeatedly (e.g. creating multiple materialized views from one fact table), where it promises outstanding performance. In this verification, we were able to prove the effectiveness of In-Memory Parallel Query on Oracle Real Application Clusters.

Contributing Authors: Satoru Yagi, Ryoichi Tanaka

Copyright 2010 NS Solutions Corporation. All Rights Reserved.
NS Solutions Corporation, 2-20-15 Shinkawa, Chuo-ku, Tokyo

Dell Japan Inc., Solid Square East Tower 20F, 580 Horikawa-cho, Saiwai-ku, Kawasaki, 212-8589 Japan
Dell and the DELL logo are trademarks of Dell Inc. Other company names and product names are the trademarks or registered trademarks of their respective companies.

Intel, the Intel logo, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Intel Corporation, Kokusai Bldg. 5F, 3-1-1 Marunouchi, Chiyoda-ku, 100-0005 Tokyo, Japan

Effective resource utilization by In-Memory Parallel Query on Oracle Real Application Clusters
February 2010
Author: Tsukasa Shibata
Contributing Authors: Nobuaki Tanigawa, Akira Kusakabe
Oracle Corporation Japan, Oracle Aoyama Center, 2-5-8 Kita Aoyama, Minato-ku, 107-0061 Tokyo, Japan

Copyright 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.