Database Services at CERN with Oracle 10g RAC and ASM on Commodity HW


Database Services at CERN with Oracle 10g RAC and ASM on Commodity HW
UKOUG RAC SIG Meeting, London, October 24th, 2006
Luca Canali, CERN IT, CH-1211 Genève 23

Outline
- Oracle at CERN
- Architecture of CERN Physics DB Services
- Sharing experience from CERN's Oracle production, with a focus on:
  - Oracle on commodity HW
  - 10g RAC and ASM on Linux

Why DBs at CERN
- The latest-generation accelerator (LHC) will start next year
- Ultimate goal: solving open questions in particle physics and cosmology
- A staggering amount of data will be produced for analysis
- Long-running collaboration with Oracle
[Figure: one year of LHC data on a stack of CDs (~20 km), compared with a balloon at 30 km, Concorde at 15 km and Mt. Blanc at 4.8 km]

Physics Services Support
- Run database services to meet the requirements of the Physics experiments
- Central repository for many LHC applications
- Mission-critical (SLAs must be respected)
- Requirements:
  - High availability
  - High performance and scalability
  - Consolidation
  - A cost-effective solution

Architecture: Database Clusters
- An implementation of grid computing for the database tier, for HA and load balancing
- Hardware: many interchangeable pieces of cost-effective HW
- Software:
  - Cluster database (Oracle RAC)
  - Cluster volume manager and filesystem (ASM)

Oracle 10g RAC and ASM
- Enterprise-class HW (SMP, scale up) vs. commodity HW with RAC and ASM (grid-like, scale out)

Growth on Demand
- Clusters are expanded to meet growth, adding servers and SAN storage as needed

Storage Configuration - HW
- SAN at low cost (not an oxymoron)
- FC storage arrays: Infortrend (A08F-G2221), SATA disks, dual-ported FC controller with cache, 8 disks per array
- FC switches: QLogic SANbox 5600
- QLogic HBAs: dual-ported QL2462
- Redundant fiber connections (multipathing)

Storage Configuration - SW
- Oracle ASM 10g R2
  - Implements SAME (stripe size tuned for Oracle)
  - Mirroring across storage arrays
  - Comes from Oracle and works with RAC
  - Takes storage configuration and tuning closer to DBAs
- Device name persistency
  - ASMLib: a simple way to mark disks for persistency; also reduces the number of open file descriptors
  - Devlabel (RHEL 3)
- Multipathing with load balancing: QLogic driver for Linux
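As a concrete illustration of the ASMLib workflow described above, stamping a disk once gives it a stable name on every cluster node. This is only a sketch; the partition paths and volume names are hypothetical:

```shell
# Hypothetical partition names; run as root.
# Stamp the disks on ONE node; the label survives reboots and
# device renumbering, which is what provides name persistency:
/etc/init.d/oracleasm createdisk VOL1 /dev/sda1
/etc/init.d/oracleasm createdisk VOL2 /dev/sdb1

# On the OTHER cluster nodes, pick up the labels:
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks   # should list VOL1 and VOL2
```

ASM then addresses the disks as 'ORCL:VOL1' etc., independently of the kernel's /dev/sdX ordering.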

Disk Configuration with ASM
- Follow the ideas of S.A.M.E. as much as possible (J. Loaiza, 1999)
- Two diskgroups per DB (data, flash recovery area)
- Destroking: an old trick of the trade for performance, even more important for SATA disks (high capacity, low IOPS)
[Figure: two diskgroups (DiskGrp1, DiskGrp2), each striped across the disks and mirrored across arrays]
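The two-diskgroup layout with mirroring across arrays can be sketched in ASM DDL. Diskgroup, failgroup and disk names here are hypothetical; the key point is that NORMAL redundancy with one failure group per storage array forces ASM to place the two mirror copies on different arrays:

```shell
# Run against the ASM instance (sketch; all names are hypothetical)
export ORACLE_SID=+ASM1
sqlplus / as sysdba <<'EOF'
-- Data diskgroup: striped over all member disks,
-- mirrored across the two storage arrays
CREATE DISKGROUP data1 NORMAL REDUNDANCY
  FAILGROUP array1 DISK 'ORCL:VOL1', 'ORCL:VOL2'
  FAILGROUP array2 DISK 'ORCL:VOL3', 'ORCL:VOL4';

-- Separate diskgroup for the flash recovery area
CREATE DISKGROUP recovery1 NORMAL REDUNDANCY
  FAILGROUP array3 DISK 'ORCL:VOL5', 'ORCL:VOL6'
  FAILGROUP array4 DISK 'ORCL:VOL7', 'ORCL:VOL8';
EOF
```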

ASM Configuration
- Coupled configuration: production (DB1) shares storage arrays with a low-IO DB (DB2)
[Figure: two RAC databases over shared ASM disk groups and storage arrays; DB1's data diskgroup (Data DG-1) sits on the same arrays as DB2's recovery diskgroup (Recovery DG-2), and vice versa]

Storage Performance
- Orion used for performance measurements
  http://otn.oracle.com/deploy/availability/htdocs/lowcoststorage.html
[Figure: IOPS measurements (small random 8 KB IO) for 4, 8 and 16 disks, plotted against the number of outstanding asynchronous IOs (1 to 95); IOPS grow roughly with the number of disks, y-axis up to 1600]
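An Orion run of the kind summarized above might look like the following sketch. Orion reads the list of LUNs from <testname>.lun; the device paths below are hypothetical:

```shell
# List the LUNs to exercise (hypothetical device paths)
cat > mytest.lun <<'EOF'
/dev/sda1
/dev/sdb1
EOF

# 'simple' run: small random IOs and large sequential IOs,
# each swept over increasing concurrency levels
./orion -run simple -testname mytest -num_disks 8

# Results (IOPS, MB/s, latency per concurrency level) are written
# to mytest_* output files for plotting
```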

ASM Performance
- IOPS and scalability measured from the DB (synthetic test)
- Small random I/O (8 KB block, 32 GB probe table), 64 SATA HDs (4 arrays, 1 RAC node)
- Peak measured: 8675 IOPS
[Figure: small random read IOPS as a function of workload (number of Oracle sessions, 2 to 350)]

ASM Wrap-up
- Performance
  - ASM and ASMLib scale out; tested with 64 HDs
  - Small random IO: ~100 IOPS per disk
  - Sequential IO: tested 800 MB/s on 4 arrays (the FC limit)
  - With ASM: uniform utilization of the HDs (for HA and performance), high capacity per unit cost
- Administration
  - We experienced a few stability issues with 10.1; much better in 10.2
  - We deploy a custom ASM config using JBOD (HA and performance in return for the added complexity)
  - ASM lets DBAs streamline operations, but requires additional storage admin skills
  - Single disk failures require DBA time; so far the failure rate is as expected (MTBF ~60 years per disk)

RAC Configuration
- Cost-effective HW: most nodes are dual Xeon with 4 GB of RAM, mid-range PCs with dual power supply and HBA
- Linux RHEL 3, 32 bit
- Gigabit Ethernet: 2 private interconnects + public network(s)
- Cluster size: 4 nodes for most production, larger clusters for scalability tests; planned upgrade from 4 to 8 nodes
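The split between public network and private interconnects is registered with the clusterware via oifcfg; a minimal sketch, with hypothetical interface names and subnets:

```shell
# Tell the clusterware which network is which (hypothetical values)
oifcfg setif -global eth0/137.138.0.0:public
oifcfg setif -global eth1/192.168.1.0:cluster_interconnect
oifcfg setif -global eth2/192.168.2.0:cluster_interconnect

# Verify the registered interfaces
oifcfg getif
```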

Economies of Scale
- Homogeneous HW configuration
  - Clusters can be easily built and grown
  - A pool of servers, storage arrays and network devices is used as standard building blocks
  - Hardware provisioning and installation are simplified
- Software configuration
  - Same OS and database version on all nodes (e.g. Red Hat Linux and Oracle 10g R2)
  - Simplifies installation, administration and troubleshooting

Operations
- Commodity HW simplifies troubleshooting
  - Node replacement and node reinstallation are often offloaded to a junior sysadmin
- DBAs have the root password, so they can:
  - Install Oracle independently
  - Configure storage at the OS level (ASMLib etc.)
  - Look at system logs
- FC storage: DBAs can log on and perform basic operations on storage arrays and fiber channel controllers

10g for Consolidation
- Our model is that of an application service provider
- We assign dedicated DB clusters to each customer
- Each application is assigned a service
  - Services are run in load-balanced mode if possible (easier to manage in case of failure)
  - Some applications don't scale with RAC; they are assigned to one preferred node
  - Some RAC nodes are deployed only to provide redundancy in case of failover (available nodes)
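The service placement policies above can be sketched with srvctl; the database, service and instance names here are hypothetical:

```shell
# Load-balanced service, preferred on all four instances
srvctl add service -d DB1 -s app_lb -r db11,db12,db13,db14

# Application that does not scale with RAC: one preferred instance,
# the others only 'available' (used on failover)
srvctl add service -d DB1 -s app_one -r db11 -a db12,db13,db14

srvctl start service -d DB1 -s app_lb
srvctl start service -d DB1 -s app_one
```

Clients connect via the service name, so the placement policy can change without touching the applications.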

10g Backup and Recovery
- Technology
  - RMAN (Oracle's primary solution for HA)
  - Media manager (Tivoli)
- On-disk backups
  - Our disk layout allows for large recovery areas
  - Low overhead with 10g incremental recovery
- Incremental backups
  - With block change tracking we reduce performance impact and tape usage
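Block change tracking and the incremental strategy can be sketched as follows; the tracking-file path is hypothetical:

```shell
# Enable block change tracking once (hypothetical ASM path);
# RMAN then reads only the blocks changed since the parent backup
sqlplus / as sysdba <<'EOF'
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '+DATA1/db1/bct.f';
EOF

# Periodic level 0 baseline, then daily differential level 1 backups
rman target / <<'EOF'
BACKUP INCREMENTAL LEVEL 0 DATABASE;
BACKUP INCREMENTAL LEVEL 1 DATABASE;
EOF
```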

10g Backup Performance
- Measurements from a production DB with a mixed, mainly read-only workload
- Full backup to tape: 3.5 TB in 16 hours
- Incremental backups (with block change tracking), average daily activity for August 2006:
  - Block change tracking file: 420 MB
  - Daily incremental level 1 (differential): 9 minutes
  - Backup size to tape: 16 GB
  - Corresponding daily archived redo log volume: 58 GB of redo, 230 files
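A quick sanity check on the average tape throughput implied by those full-backup numbers (3.5 TB in 16 hours):

```shell
# Average full-backup throughput: 3.5 TB over 16 hours, in MB/s
awk 'BEGIN {
  mb  = 3.5 * 1024 * 1024   # 3.5 TB expressed in MB
  sec = 16 * 3600           # 16 hours in seconds
  printf "%.0f MB/s\n", mb / sec
}'
# prints "64 MB/s"
```

The same numbers also show why the incremental strategy pays off: a 16 GB daily differential backup replaces another 3.5 TB full, and is even smaller than the 58 GB of daily archived redo.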

Distributed Databases
- Streams replication: DB changes are captured at the source, propagated to the destination and then applied
- Data transfer inside CERN (LAN); streams to other laboratories around the world (WAN)
- Experience
  - 10gR2 improved stability and performance
  - Functionality successfully tested
  - Challenging to tune the 3-component architecture:
    - The apply process is often the bottleneck on the LAN
    - Propagation is strongly affected by network latency on the WAN
    - Capture is CPU intensive (downstream capture under investigation)

Conclusions
- Physics Database Services at CERN run production Oracle 10g services
- Positive experience after 1 year of production with 10gR2 RAC and ASM on commodity HW
- Currently 100 CPUs, 200 TB of raw data
- Ramping up: ongoing deployment of new applications and expansion of the RAC clusters
- Links:
  http://www.cern.ch/phydb
  http://www.cern.ch/canali