Resource Consumption Management in Oracle WebLogic Server Multitenant (MT)


Oracle Fusion Middleware 12c (12.2.1)
Flexibility and Control Over Resource Usage in Consolidated Environments
v1.0 [20151027]

Introduction

Traditionally, enterprise application deployments have been separately provisioned, deployed and managed in distinct environments, with siloed physical hardware/virtual-machine, database and middleware infrastructure. This separation was usually aligned with departments or lines of business in the organization. Such deployments typically utilize hardware resources inefficiently, with concomitant wasteful operational and capital expenditure. To address these challenges, organizations have started moving to private enterprise clouds to get simplified operational management and streamlined, uniform provisioning and lifecycle management, while delivering significant cost benefits by consolidating their deployments.

WebLogic Server Multitenant enables the consolidation of previously disparate WebLogic Domains into a single shared application server or domain through Domain Partitions. A Multitenant consolidated deployment enables greater efficiencies in meeting your business goals while significantly enhancing density and hardware utilization (reduced capital expenditure through reduced hardware requirements) and reducing administrative and operational costs (reduced operational expenditure through lowered use of power and space, and easy, quick and streamlined administration and rollout of patches and upgrades throughout a Domain and Domain Partitions).

WebLogic Server Multitenant also helps a system administrator ensure fair allocation of shared resources to collocated Domain Partitions, and tries to limit the effect that one Domain Partition can have on another. The Resource Consumption Management feature in WebLogic Server Multitenant 12.2.1 enables a system administrator to reach these goals by allowing the setting of thresholds and policies that govern the use of shared resources by collocated Domain Partitions.

Isolation Requirements in Shared Environments

Isolation along various dimensions (operational, fault, configuration, security, performance, resource use) may be required when consolidating multiple Domain Partitions in a single WebLogic Server Multitenant Domain. Typically, inter-tenant isolation requirements in a multitenant system are non-negotiable based on business requirements, and ultimately determine the degree of sharing that can be designed and realized in a given system. For example, isolation requirements of an enterprise deployment may relate to non-negotiable governmental compliance, regulatory mandates or business requirements. They may also include requirements around operating system or application server version restrictions.

The ideas of sharing and isolation are, in general, conflicting in nature. Isolation can be achieved through physical and/or logical means, and can be considered in the following areas: fault, security, resource, and operational. Though sharing and isolation are at odds, there exists a continuum of deployment options between separate-infrastructure, completely isolated execution environments and shared-infrastructure, non-isolated execution environments. The general recommendation when considering the deployment of WebLogic Server Multitenant is to isolate only as much as needed to meet the tenant isolation requirements of the specific deployment(s).
While WebLogic Server Multitenant provides the flexibility for hosting multiple previously separate WebLogic Domains as collocated Domain Partitions in a Multitenant Domain, it also provides capabilities to control how the collocated Domain Partitions are isolated from each other. This allows enterprise deployers and system administrators to enjoy the benefits of shared environments (listed above) while still providing a degree of isolation between collocated Domain Partitions.

Resource Consumption Management

When applications that are deployed to multiple collocated Domain Partitions access shared resources (low-level resources such as CPU, heap, and file I/O), two key problems are likely to be faced:

Contention and unfairness during allocation: Multiple requests for a shared resource result in contention and interference. Abnormal resource consumption requests may happen for benign reasons (high traffic, genuine or DDoS) or because of unexpected behavior due to bugs or errors in applications. These requests could overload the capacity of a shared resource, thereby preventing another consumer's access to the resource.

Variable performance leading to potential Service Level Agreement (SLA) violations: From a cloud operations perspective, predictable and uniform runtime performance for different collocated consumers is desired to avoid SLA violations.

It is therefore critical to manage and isolate access to shared resources in the WebLogic application server by Domain Partitions, to ensure fairness in allocation, prevent contention and interference in access to shared resources, and provide consistent performance for multiple co-resident tenants. The Resource Consumption Management (RCM) feature in WebLogic Server Multitenant allows WebLogic system administrators to specify resource consumption management policies (that is, constraints, recourse actions and notifications) on shared resources (namely CPU, Heap and Files).
Limited isolation and resource management for co-resident applications was already available in past releases of WebLogic through ClassLoader-based isolation (classes not belonging to shared libraries of applications are isolated from each other) and the WebLogic Work Manager feature (which lets an administrator configure how an application prioritizes the execution of its work through a set of scheduling guidelines; this enables the WebLogic Work Manager to manage the threads allocated to an application, manage the scheduling of Work instances to those threads, and help maintain service-level agreements). The Resource Consumption Management feature builds on these features by providing a flexible, dynamic mechanism to specify policies on resources, and recourse actions to be taken when those policies are violated.

Note: The Resource Consumption Management feature in WebLogic Server 12.2.1 is built on top of the resource management support in Oracle JDK 8u40. WebLogic RCM requires Oracle JDK 8u40 or later and the G1 Garbage Collector. For Oracle JDK 8u60 and beyond, the G1 Garbage Collector is a required JVM argument only if support for the Heap Retained resource is required. In WebLogic Server Multitenant, you would need to pass the following additional JVM arguments to enable WebLogic RCM:

-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC

Supported Resources

In WebLogic Server Multitenant, access by Domain Partitions to the following shared resources can be managed through resource consumption management policies:

Open File Descriptors (file-open): Tracks the number of open files. This includes files opened through FileInputStream, FileOutputStream, RandomAccessFile and NIO file channels.

Heap Retained (heap-retained): Tracks the amount of heap retained, or in use, by a Domain Partition.

CPU Utilization (cpu-utilization): Tracks the percentage of CPU time utilized by a Partition with respect to the CPU time available to the WebLogic process.
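Conceptually, the file-open resource is a per-Partition counter that the server increments and decrements as files are opened and closed. The following toy sketch (plain Python, written for this paper; it is not WebLogic's implementation and the class name is made up) illustrates the idea:

```python
# Toy per-partition open-file accounting (illustrative only; not WebLogic code).
class FileOpenTracker:
    def __init__(self):
        self.open_count = {}  # partition name -> number of currently open files

    def on_open(self, partition):
        # Called whenever a partition opens a file (stream, channel, etc.).
        self.open_count[partition] = self.open_count.get(partition, 0) + 1

    def on_close(self, partition):
        # Called whenever a partition closes a file.
        self.open_count[partition] -= 1

tracker = FileOpenTracker()
tracker.on_open("P1")
tracker.on_open("P1")
tracker.on_close("P1")
print(tracker.open_count["P1"])  # 1
```

A trigger policy on file-open simply compares this counter against a configured threshold.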

Policy Model: Triggers, Fair Share, and Recourse Actions

A system administrator specifies resource consumption management policies on shared resources on a per-Domain-Partition basis through a Resource Manager. A Resource Manager may consist of multiple resource management policies for multiple resources. The resource consumption management model in WebLogic consists of one or more policies/constraints for a resource, and a recourse action to take when those constraints are breached. Two kinds of policies are supported in WebLogic Server Multitenant for the resources discussed in the section above:

1. Trigger: A trigger is a static upper threshold on the usage of a resource. When the consumption of that resource crosses the specified threshold, the specified recourse action is performed. This policy type is best suited for environments where the resource usage by Domain Partitions is predictable. As an example, a system administrator may limit a Partition, say P1, to no more than 100 open files by setting a Trigger of 100 units on the file-open resource in the resource manager for the P1 Partition. System administrators may use the resource consumption metrics reported by the PartitionResourceMetricsRuntimeMBean (during peak and average traffic loads to the Domain Partition; see the section titled "Partition-scoped Resource Consumption Monitoring" for more information on PartitionResourceMetricsRuntimeMBean) as inputs while determining the thresholds for trigger policies.

2. Fair Share: Similar to the Fair Share Request Class support in the WebLogic Work Manager (see Work Manager Request Classes in the WebLogic Server 12.2.1 documentation), an RCM fair-share policy provides the following assurance to the system administrator (in line with the Work Manager's definition of fair share): when there is contention for a resource, and there is uniform load from two resource domains over a period of time, the share of resources allocated to the two domains is roughly as per the fair share configured by the system administrator for those two domains. Contention for a resource occurs when:

(current resource usage per Partition) + (new requests for resource consumption per Partition) > (maximum limit of resource usage available for all Partitions)

A fair share policy is typically used by a system administrator to ensure that a bounded-size shared resource is shared effectively (yet fairly) by competing consumers. A fair share policy may also be employed when the exact usage of a resource by a Partition cannot be accurately determined in advance, and the system administrator would like efficient utilization of resources while ensuring fair allocation of shared resources to co-resident Partitions.

A system administrator allocates a 'share' to a Partition by specifying an integer value that tells the fair share policy the share of resources that must be allocated to the Partition during contention over time. The sum of all Partition shares need not equal 100. The fair share policy is supported for the cpu-utilization and heap-retained resource types in WebLogic Server 12.2.1.

The fairness in resource allocation is calculated over a period of time, to account for variation in resource consumption requests from individual Domain Partitions. (Since fairness is computed and enforced over time, fairness does not necessarily imply equality of distribution, even when fair shares are equal, within a particular window of time.) On realizing that a particular Partition has not used its share of resources in the window of time, the fair share policy implementation may transiently allocate a share of resources greater than the specified fair share for that consumer. However, over time, allocations are adjusted so that they align with the fair shares specified by the system administrator. Behind the scenes, the fair share policy uses the Partition Work Manager's fair share to control the amount of computing resources available to the Domain Partition.
The fair share policy increases or decreases the Partition Work Manager's fair share, as applicable, so that the Domain Partition's resource usage is constrained to meet the configured fair share values. A fair share policy is especially useful when complementary workloads (with differing peak traffic times) are consolidated in the same WebLogic Server Multitenant domain, as a Domain Partition is allowed to temporarily 'steal' resources when there is no contention for the resource from collocated Domain Partitions. As an example, a system administrator may specify a fair-share value of 60 for Partition P1 and 40 for Partition P2, so that allocations to P1 and P2 fall in the ratio 3:2 over time.
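The 60/40 example can be illustrated with a small standalone simulation (plain Python, written for this paper; the partition names, budget and request flags are made up, and this is not the WebLogic scheduler). It shows both behaviors described above: the configured ratio holds under contention, and an idle partition's share is temporarily stolen:

```python
# Toy fair-share allocator (illustrative only; not WebLogic code).
# Each tick, a fixed budget of resource units is divided among the partitions
# that are actually requesting, weighted by their configured shares. A partition
# that is idle contributes no weight, so its share is transiently "stolen".

def allocate(requests, shares, budget=100):
    """Split `budget` among partitions with pending requests, weighted by share."""
    active = {p: shares[p] for p, wants in requests.items() if wants > 0}
    total = sum(active.values())
    if total == 0:
        return {p: 0 for p in requests}
    return {p: (budget * active.get(p, 0)) // total for p in requests}

shares = {"P1": 60, "P2": 40}

# Under contention (both partitions requesting), allocation follows 3:2.
both = allocate({"P1": 1, "P2": 1}, shares)
# When P2 is idle, P1 may transiently take the whole budget.
p1_only = allocate({"P1": 1, "P2": 0}, shares)

print(both)     # {'P1': 60, 'P2': 40}
print(p1_only)  # {'P1': 100, 'P2': 0}
```

The real implementation enforces this over a time window rather than per tick, which is why short-term allocations can deviate from the configured shares.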

Since the fair share policy is only applicable during periods of contention, and over a period of time, there may be interim periods when a Partition, say P1, is allocated ('steals') more than its configured share of the resource (say 80% instead of the configured 60% in this case). This could happen when P2 is not requesting the resource at that time.

Note: This behavior works well for resources such as CPU Utilization, where the resource can be 'revoked' and reassigned by the application server. For bounded-size, non-revokable resources such as Heap Retained, it is recommended to establish worst-case upper limits of usage (also referred to as a 'Circuit Breaker') for Partitions by explicitly specifying a Trigger with an upper-bound value.

This particular attribute of the fair-share policy affords great density benefits (and related cost savings) to complementary workload deployments. For instance, with well-behaved and well-designed applications deployed to different Domain Partitions whose traffic does not overlap in time, it is now possible to reuse the same (reduced) infrastructure for multiple collocated Partitions. What would have taken 'n' siloed hardware and software infrastructures can now be effectively hosted in one collocated Multitenant runtime.

When a Trigger value is breached, a system administrator may instruct that a recourse action be automatically performed by the WebLogic Resource Consumption Management infrastructure. Recourse actions are classified into the following types in WebLogic Server 12.2.1:

Notify: A notification is provided to system administrators as an informational update that the trigger has been breached. The notification message follows a standard notification scheme (including, as part of the message, information about the current and previous usage of the resource by the Partition, information on the policy whose trigger was breached, and so on). The system administrator may also use existing WLDF facilities (such as the WLDF Harvester and Watch Rules) to configure a watch rule that listens for the standard log message, and use the existing notification functionality in the WebLogic Watch and Notifications framework to send advanced notifications.

Slow: Throttles (typically slows down) the rate at which the resource is consumed. The rate at which a resource is consumed is controlled indirectly by tweaking the Work Manager assigned to the Domain Partition, which reduces the Domain Partition's ability to consume resources. When a Slow recourse action is triggered, WebLogic constrains the number of threads allocated to the Domain Partition by reducing the Partition Work Manager's fair-share value. The relationship between the number of threads allocated to the Domain Partition and its consumption of a contended resource may not be exactly proportional; therefore, the ability to throttle a Partition's consumption of resources may not be commensurate with the reduced number of threads allocated to that Domain Partition. The extent to which a Slow action succeeds is largely dependent on the application's profile (rate, size, number) of requests for resources.

Fail: Fails subsequent resource consumption requests after the usage reaches the configured upper threshold. This action is only supported for the file-open resource type in WebLogic Server 12.2.1.

Shutdown: Attempts to stop resource consumption by initiating the shutdown sequence of the Domain Partition while allowing cleanup. This recourse action is useful when the Domain Partition has crossed expected peak-time usage patterns and reasonable buffer values, and may cause adverse effects on collocated Partitions. The Domain Partition is only shut down in the Managed Server where the policy violation has occurred.

These recourse actions may happen in the same thread where the resource consumption request was made (synchronous), or may be performed in a different thread (asynchronous). For instance, the Fail recourse action is synchronous for the file-open resource: when a specified trigger is breached, the request to open a file fails synchronously in the same thread that requested the file open. Other actions (such as a Slow recourse action configured for the Heap Retained resource) happen asynchronously.
The combination of triggers and actions discussed above helps a system administrator shape, control and limit the usage of a resource by a Partition. When a resource management policy is not explicitly set on a Domain Partition, that Domain Partition's usage of shared resources is unconstrained.
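The trigger half of this model can be sketched as a small evaluator (plain Python, written for this paper; the thresholds and action names mirror the file-open example used later in this paper, but the code is illustrative and not WebLogic's implementation):

```python
# Illustrative trigger evaluation (not WebLogic internals): given a partition's
# current usage of a resource, return the action of the highest breached trigger.

TRIGGERS = [            # (threshold, action) pairs for one resource
    (2000, "shutdown"),
    (1700, "slow"),
    (1500, "notify"),
]

def evaluate(usage, triggers=TRIGGERS):
    """Return the action for the highest threshold that `usage` has crossed."""
    for threshold, action in sorted(triggers, reverse=True):
        if usage >= threshold:
            return action
    return None  # below the lowest trigger, usage is unconstrained

print(evaluate(1600))  # notify
print(evaluate(1800))  # slow
print(evaluate(2500))  # shutdown
print(evaluate(100))   # None
```

Ordering the thresholds so that milder actions (notify) sit below harsher ones (slow, shutdown) gives the escalating behavior described above.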

CPU Utilization

The CPU Utilization percentage indicates the percentage of CPU time utilized by a Partition with respect to the total CPU time available to the WebLogic Server process. The metric considers the process load that the WebLogic process exerts on the system, and the system load factor, to provide an easily interpreted CPU Utilization percentage. CPU Utilization is computed periodically by sampling the threads that are active for a Domain Partition, and hence updates may be delayed due to sampling. CPU Utilization is an excellent metric to track contention for CPU by collocated Domain Partitions, and is especially useful in fair share policies for CPU-bound workloads.

Heap Retained

The Retained Heap value for a Domain Partition tracks the amount of heap retained, or in use, by that Domain Partition, and is available after a G1 garbage collection cycle. Since GC cycles are not periodic, the retained heap information may not be accurate and timely. The WebLogic resource consumption management infrastructure uses heuristics to periodically track retained heap values for a Domain Partition based on configured values, and takes recourse actions as it detects policy breaches.

The Slow recourse action (as explained above) is performed by tweaking Work Manager settings. This reduces the fair share of the Domain Partition's Work Manager, thus potentially reducing subsequent object allocations by that Partition, which in turn indirectly reduces heap consumption by that Partition. However, the amount of reduction depends on the traffic pattern of that Partition with respect to other Partitions, as well as on how heap memory has been allocated or de-allocated by various entities (for example, deployed applications) in that Partition. It is important to note that a Slow recourse action may also inadvertently result in increased heap retained usage. As an example, reducing the fair share of a Domain Partition's Work Manager may result in slower consumption of JMS messages by JMS message listeners, which could result in more memory being retained to hold the messages.

Discrimination of heap usage for objects in static fields, and singleton objects of classes loaded from system and shared classloaders are problematic and may not be accurately represented in the final accounting values. If an instance of a class that is loaded from system and shared classloaders is created by a Partition, the instance's use of heap is accounted against that partition. GCs are also not isolated to specific Domain Partitions in WebLogic Server 12.2.1/Oracle JDK 8u40. Configuration Resource consumption management policies are configured through resourcemanagers. A resource-manager can be created once by a system administrator at the domain level and reused as the policy for a new Partition while the Partition is being created. A resource-manager may also be created within a Partition if the policy is specific to that Partition and is not expected to be reused across Partitions. All changes to a resource management policy are dynamically applied to all Domain Partitions that use that policy. An illustrative example of a resource consumption management policy configuration is shown below. In this example, a system administrator within an enterprise would like to define policies for two kinds of Domain Partitions an Approved kind of Domain Partition, for Partitions that have probably gone through organizational approval, and should be given higher resource usage thresholds, and a Trial kind of Domain Partitions, that are for users who are trying out Domain Partitions, and have lower resource usage thresholds. To achieve this resource management strategy, the system administrator defines An Approved resource-manager in the Domain (representing the set of resource consumption management polices the system administrator would like to establish for all Approved Domain Partitions in the Domain) The Approved resource-manager has policies for various resources. For the file-open resource type, three triggers are specified. An 10

Approved2000 trigger ensures that the Domain Partition must be shutdown when the Partition's usage of open file descriptors crosses 2000. An Approved1700 trigger specifies that when the number of open file descriptors cross 1700, the Domain Partition must be slowed down. An Approved1500 trigger specifies a notify action when the number of open file descriptors crosses 1500. For the heap-retained resource type, an Approved2GB trigger is created to ensure that when the Domain Partition's retained heap value reaches 2GB, the Domain Partition must be shutdown. A combination of policies may also be set for a Resource for a Partition, and in this case, a Fair Share value of 60 is assigned to the Approved Partition. This ensures that during contention, an Approved Partition would be provided a fair-share of '60' of the total available heap in that Managed Server instance over time. A Trial resource-manager defines a different (reduced) set of policies for Silver ties of Partitions. A Partition may then be associated with a resource-manager during Partition creation. In this example, the Partition-0 Partition has been assigned the Approved resource-manager, and therefore has all the policies specified in the Approved resource-manager applicable to it. <domain>... <!--Define RCM Configuration --> <resource-management> <resource-manager> <name>approved</name> <file-open> <trigger> <name>approved2000</name> <value>2000</value><!-- in units--> <action>shutdown</action> </trigger> <trigger> <name>approved1700</name> <value>1700</value> <action>slow</action> </trigger> <trigger> <name>approved1500</name> <value>1500</value> 11

<action>notify</action> </trigger> </file-open> <heap-retained> <trigger> <name>approved2gb</name> <value>2097152</value> <action>shutdown</action> </trigger> <fair-share-constraint> <name>fs-approvedshare</name> <value>60</value> </fair-share-constraint> </heap-retained> </resource-manager> <resource-manager> <name>trial</name> <file-open> <trigger> <name>trial1000</name> <value>1000</value><!-- in units--> <action>shutdown</action> </trigger> <trigger> <name>trial700</name> <value>700</value> <action>slow</action> </trigger> <trigger> <name>trial500</name> <value>500</value> <action>notify</action> </trigger> </file-open>... </resource-manager> </resource-management> <partition> <name>partition-0</name> <resource-group> <name>resourcetemplate-0_group</name> <resource-group-template>resourcetemplate-0</resource-grouptemplate> </resource-group>... <partition-id>1741ad19-8ca7-4339-b6d3-78e56d8a5858</partition-id> 12

<!-- RCM Managers are then targetted to Partitions during partition creation time or later by system administrators --> <resource-manager-ref>approved</resource-manager-ref>... </partition>.. </domain> Using WLST Resource Consumption Management policies could be created and established through WLST. Configuration and Runtime MBeans are available for configuring policies and tracking when policies are executed respectively. The policy discussed above could be created through WLST using the following script (assuming the domain's name is available through the domainname variable and the Domain Partition is partition-1 : startedit() cd('/resourcemanagement') cd(domainname) # create an Approved ResourceManager rm=cmo.createresourcemanager('approved') fo=rm.createfileopen('approved-fo') fo.createtrigger('approved2000',2000,'shutdown') fo.createtrigger('approved1700',1700,'slow') fo.createtrigger('approved1500',1500,'notify') hr=rm.createheapretained('approved-hr') hr.createtrigger('approved2gb',2097152,'shutdown') hr.createfairshareconstraint('fs-approvedshare', 60) # create a Trial ResourceManager cd('/resourcemanagement') cd(domainname) rm=cmo.createresourcemanager('trial') fo=rm.createfileopen('trial-fo') fo.createtrigger('trial1000',1000,'shutdown') fo.createtrigger('trial700',700,'slow') fo.createtrigger('trial500',500,'notify') 13

save()
activate()

startEdit()
# Assign the approved resource manager to partition-0
cd('/Partitions')
cd('partition-0')
cmo.setResourceManagerRef(getMBean('/ResourceManagement/' + domainname + '/ResourceManager/approved'))
save()
activate()

Using Fusion Middleware Control

Oracle Enterprise Manager Fusion Middleware Control 12c serves as the central integration point for all configuration and manageability aspects of the Fusion Middleware product line, including WebLogic Server Multitenant 12.2.1. It delivers comprehensive functionality for WebLogic system administrators to manage multitenant WebLogic environments, including support for easily creating and managing resource consumption management policies and assigning them to Domain Partitions. A domain-level resource manager can be created by navigating to the Environment > Resource Consumption Managers entry in the WebLogic Domain drop-down menu, as shown in the image below.

The same configuration discussed in the previous two sections can be recreated through Fusion Middleware Control as follows. Create the approved resource manager by clicking Add Resource Manager. Choose File Open as the resource type and specify the values corresponding to the approved resource manager. Add the Heap Retained fair share and trigger policy to the approved resource manager by selecting it and then clicking Add Policy. In the Add Policy dialog, specify the fair share value and the shutdown trigger value for the Heap Retained resource. Repeat the same steps to create the trial resource manager.

To associate a resource manager with a Domain Partition (say, partition-0), go to the Domain Partition screen for partition-0 and click Resource Sharing, available under Domain Partition > Administration > Resource Sharing.


From the available resource managers, choose the appropriate one for partition-0. A partition-scoped resource manager may also be created in the same screen.

Partition-scoped Resource Consumption Monitoring

Resource consumption metrics for shared resources are provided on a per-partition basis through the PartitionResourceMetricsRuntimeMBean. Detailed usage metrics are available through this monitoring MBean, and system administrators may use these metrics for tracking, sizing analysis, monitoring, and configuring business-specific WLDF Watch and Harvester rules.

Sizing and Policy Guidance

General

Recourse actions must be selected carefully by a system administrator, because many resources interact in complex ways. For instance, slowing down CPU utilization (resulting in fewer threads allocated to the Domain Partition) may increase heap residency, thereby affecting retained heap usage.

Complementary Workloads

To obtain maximum density savings in your consolidation exercise, it is important to house complementary workloads in the same WebLogic Server Multitenant server instance. Complementary workloads have different peak usage times; try to ensure that the sum of their averages does not exceed their maximum peak value. Antagonistic workloads, on the other hand, have overlapping peak usage times, and the sum of their averages goes beyond their maximum peak value. Mixing non-complementary, or antagonistic, workloads in a consolidated environment can lead to poor performance, missed SLAs, and outages. While evaluating workloads for inclusion in a consolidated environment, use WebLogic Server 12.2.1's partition-scoped resource consumption metrics to obtain average and peak resource usage, and apply those figures judiciously when establishing resource consumption management policies.
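The complementary-versus-antagonistic distinction above can be sketched as a simple check: consolidation works well when the combined trace's peak stays close to the individual peaks and the sum of the average utilizations stays within the maximum individual peak. A minimal sketch follows; the utilization traces and the 1.2 overlap tolerance are hypothetical illustrations, not values from this paper:

```python
# Hedged sketch: classify two workload utilization traces (e.g. hourly CPU
# percentages, aligned in time) as complementary or antagonistic, per the
# guidance above. Sample data and the 1.2 tolerance are assumptions.

def classify(workload_a, workload_b):
    """'complementary' if consolidating the two traces adds little to the
    worst-case peak and the sum of averages stays within the larger
    individual peak; otherwise 'antagonistic'."""
    avg_a = sum(workload_a) / float(len(workload_a))
    avg_b = sum(workload_b) / float(len(workload_b))
    combined_peak = max(a + b for a, b in zip(workload_a, workload_b))
    individual_peak = max(max(workload_a), max(workload_b))
    if (combined_peak <= individual_peak * 1.2
            and avg_a + avg_b <= individual_peak):
        return 'complementary'
    return 'antagonistic'

# Day-shift vs. night-shift traffic: peaks do not overlap.
day_shift   = [60, 70, 65, 5, 5, 5]
night_shift = [5, 5, 5, 60, 70, 65]
# Two day-shift workloads: peaks coincide.
day_shift_2 = [55, 75, 60, 10, 5, 5]

print(classify(day_shift, night_shift))  # complementary
print(classify(day_shift, day_shift_2))  # antagonistic
```

In practice, the traces would come from the partition-scoped metrics described above, collected over a representative period.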

CPU

When consolidating workloads, ensure that the consolidated workloads' peak CPU utilization does not greatly exceed their average CPU utilization. Keep the gap between peak and average to a minimum, so that the CPUs are utilized as fully as possible. Initial CPU sizing will depend on which applications will be housed in the collocated Domain Partitions. It is recommended to allow an additional 10% of capacity for operational tasks such as backups and other administrative or scheduled tasks, and 15% to account for cluster failover. Establishing resource management policies so that nodes in a cloud pool operate at 75% CPU capacity provides a good balance between general usage and headroom.

Memory

Ensure that you do not overcommit memory. Leave enough headroom for the Global Partition and any other system work. While evaluating workloads for inclusion in a consolidated environment and crafting resource consumption management policies, study the low, average, steady-state, and peak retained-heap usage values for a Domain Partition's representative workload.

Conclusion

The WebLogic Multitenant architecture delivers the highest consolidation density while giving a system administrator the ability to finely tune resource management policies so that shared resources are fairly allocated to collocated Domain Partitions. Though it is difficult to achieve perfect sharing and isolation at the same time, the Oracle WebLogic Server 12.2.1 Resource Consumption Management feature enables a system administrator to determine, manage, isolate, and monitor access to shared resources in the WebLogic runtime, ensuring fairness in allocation and preventing contention and interference, so that multiple co-resident tenants see consistent performance.
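The CPU headroom figures in the sizing guidance above (10% reserved for operational tasks, 15% for cluster failover, a ~75% steady-state utilization target) can be sketched as simple sizing arithmetic. The node size and per-partition core demands below are hypothetical:

```python
# Hedged sketch of the CPU sizing arithmetic from the guidance above.
# The reserves and target come from the text; the workloads are invented.

OPERATIONAL_RESERVE = 0.10   # backups, administrative/scheduled tasks
FAILOVER_RESERVE    = 0.15   # capacity absorbed during cluster failover
TARGET_UTILIZATION  = 0.75   # steady-state target for a cloud-pool node

def usable_cpu_capacity(total_cores):
    """Cores left for partition workloads after both reserves."""
    return total_cores * (1 - OPERATIONAL_RESERVE - FAILOVER_RESERVE)

def fits(total_cores, workload_core_demands):
    """True if the summed partition demand stays within the 75%
    utilization target applied to the post-reserve capacity."""
    budget = usable_cpu_capacity(total_cores) * TARGET_UTILIZATION
    return sum(workload_core_demands) <= budget

# A hypothetical 16-core node hosting three partition workloads:
print(usable_cpu_capacity(16))    # 12.0 cores remain after reserves
print(fits(16, [3.0, 4.0, 1.5]))  # True  (8.5 <= 9.0)
print(fits(16, [5.0, 4.0, 2.0]))  # False (11.0 > 9.0)
```

A demand that fails this check is a candidate for a slow or fair-share recourse policy, or for placement on a different node in the pool.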

Author: Sivakumar Thyagarajan

Copyright 2015, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.