TransLattice Technology Overview


WHITE PAPER

ABSTRACT

By combining the necessary computing elements into easily managed appliances, the TransLattice Application Platform (TAP) offers enterprises a fundamentally different way to provision their applications and data. This paper explains how Lattice Computing, a unique approach that combines globally distributed computing resources into a cohesive system, not only simplifies IT infrastructure and deployment but also delivers exceptional system resilience and data control, while significantly reducing costs.

Table of Contents

- Introduction
- E Pluribus Unum
- Distributed Enterprise Applications
- Distributed Relational Data
- Data Distribution and Policy Controls
- Cluster Hierarchy
- Inherently Resilient Architecture
- Sophisticated Redundancy Model
- Scalability
- Network Services
- Management & Administration
- Summary

Introduction

Today's typical application infrastructure is an overly complex beast. Most deployments rely on numerous components, including database servers, application runtimes, load balancers, storage area networks, and WAN optimizers, all of which are provided by a multitude of vendors. The resulting application stack demands considerable integration, which can drive management costs up and availability levels down. And the inherently centralized structure often results in a poor user experience for those not located near the data center. Ultimately, this type of infrastructure is inefficient, rigid, unstable, and costly.

TransLattice believes there is a better approach. Lattice Computing is a resilient, distributed application architecture comprised of a cluster of identical nodes that cooperate to provide data storage and run applications without any master node or centralized points of failure (Fig. 1). Utilizing Lattice Computing, the TransLattice Application Platform anticipates workers' needs, delivering applications and data when and where they are needed. Furthermore, the ability to easily add nodes (for deployment on-premises, in the cloud, or through a combination of both) increases both capacity and redundancy.

This unique, new approach to scalable application computing simplifies IT infrastructure by combining the various necessary computing elements into easily managed appliances. This paper outlines the concepts behind Lattice Computing and explains how our use of intelligent distributed systems helps enterprises boost resilience and data control, while significantly reducing costs, management burdens, and deployment complexity.

Figure 1. Unlike conventional deployments, Lattice Computing decentralizes all aspects of an application. This distributed infrastructure ensures business continuity and provides users with local access to information.

E Pluribus Unum

Out of many, one. This idea of distributed strength is the basic premise behind the TransLattice solution. TAP coalesces computing resources throughout an organization so that administrators and users see one unified system. With TAP, nodes may be dispersed across multiple sites and cloud providers, but they work together to form a highly resilient application and data platform (Fig. 2). Applications residing throughout the network can pull from the distributed resources, efficiently delivering the performance of a local application while, in reality, using the world's first truly geographically distributed relational database.

Distributed Enterprise Applications

TAP puts e pluribus unum into action by distributing and decentralizing the application server so that it runs multiple application containers yet appears as a single application runtime and database. Standard J2EE applications execute seamlessly across the entire computing environment, while actually boosting resilience, scalability, and user performance. Adapting an application to run on TAP requires minimal effort. In fact, the majority of time invested centers on testing the application with the TransLattice platform as part of the application's release or deployment process.

Distributed Relational Data

The TransLattice platform further employs the concept of e pluribus unum through a geographically distributed relational database that delivers high performance, global redundancy, and cost-efficient data storage. Existing applications that transact, access, and transform relational data using SQL can use this storage without any modification. Database tables are automatically partitioned into groups of rows based on attributes, and these partitions are redundantly stored across the computing infrastructure. The database provides full ACID semantics, ensuring reliable processing of database transactions.
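The row-grouping idea above can be sketched in a few lines. This is an illustrative toy, not TransLattice's actual partitioning algorithm: the partition key, the hash scheme, and the partition count here are all assumptions.

```python
from collections import defaultdict

def partition_rows(rows, key, num_partitions):
    """Group rows into partitions by hashing a chosen attribute.

    `key` and `num_partitions` are illustrative; the real system
    derives partitions from observed attributes and access patterns.
    """
    partitions = defaultdict(list)
    for row in rows:
        pid = hash(row[key]) % num_partitions
        partitions[pid].append(row)
    return dict(partitions)

# Hypothetical table: each row carries the attribute we partition on.
orders = [
    {"id": 1, "region": "EMEA", "total": 120},
    {"id": 2, "region": "NAM",  "total": 80},
    {"id": 3, "region": "EMEA", "total": 45},
]
parts = partition_rows(orders, key="region", num_partitions=4)
# Every row lands in exactly one partition, and rows sharing the
# partitioning attribute land together.
assert sum(len(p) for p in parts.values()) == len(orders)
```

Each resulting group of rows would then be stored redundantly on several nodes, per the policy rules discussed in the next section.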
Data Distribution and Policy Controls

TAP anticipates future access patterns and then automatically and intelligently distributes data according to those patterns, thereby minimizing impact on the network and improving end-user performance. When an object or database partition is created, TAP combines historical access patterns and automatically gathered network topology information to determine the relative costs of different storage strategies. These costs represent both the resources consumed by a storage strategy and the anticipated amount of time it will take users to interactively retrieve the information in the future.

Figure 2. Each node contains processing, storage, and network resources and works cooperatively with the rest of the cluster to provide key application services and unified management.
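A cost model of this shape, combining resource consumption with access-weighted retrieval latency, might look like the following. The weighting, the data structures, and the specific numbers are assumptions for illustration; the paper does not specify TransLattice's actual cost function.

```python
def plan_cost(plan, access_freq, latency, storage_cost):
    """Relative cost of one storage plan (a set of node names).

    access_freq:  historical share of accesses per site, e.g. {"NYC": 0.7}
    latency:      latency[site][node] in ms, from gathered topology info
    storage_cost: per-node cost of holding a copy
    All inputs here are illustrative assumptions.
    """
    resource = sum(storage_cost[n] for n in plan)
    # Assume each site fetches from its nearest replica in the plan.
    retrieval = sum(freq * min(latency[site][n] for n in plan)
                    for site, freq in access_freq.items())
    return resource + retrieval

latency = {"NYC": {"n1": 5, "n2": 90}, "MUC": {"n1": 95, "n2": 8}}
freq = {"NYC": 0.7, "MUC": 0.3}
cost = {"n1": 10, "n2": 10}
# With users on both continents, a replica near each is cheaper
# overall than a single distant copy:
assert plan_cost({"n1", "n2"}, freq, latency, cost) < \
       plan_cost({"n2"}, freq, latency, cost)
```

The same function that ranks candidate plans for storage can be reused to locate data for access, which is the property the text relies on.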

Policy also plays an important role in how TAP distributes and stores data, while giving administrators a high level of control. For example, by establishing location policy rules, administrators can specify that certain tables or portions of tables must, or must not, be stored in various locations. If critical data must be stored only on a particular continent, or may not be stored at locations with inferior physical security, administrators can pinpoint these restrictions with ease. Similarly, they can use redundancy policy rules to specify how many copies of each type of information must be stored, so organizations can meet business continuity and availability goals. A redundancy policy rule might specify that all transaction data must be stored on at least two different continents, ensuring that the data is preserved even if all computing resources on a given continent fall offline.

Within the constraints specified by policy rules, the system generally turns to the most efficient calculated storage plan for each object or database partition. Because of this, the same procedure that calculates where to store information can also be used to locate information within the system for access. In some cases, the system may not be able to place information in the most ideal locations because of capacity constraints or an outage; in that case, the system notes the location of the information in a globally distributed exception index. Additionally, usage patterns and the optimal positioning of data may change, resulting in other exception index entries. At times when the network is not fully utilized, the system leverages spare capacity to move items in the exception index to their preferred locations.

Cluster Hierarchy

To simplify the specification of policy rules and apply further control over the infrastructure, administrators may also define groupings between nodes. These groupings form a cluster hierarchy, which is maintained as a balanced tree (Fig. 3).

Cluster hierarchy is useful because it enables administrators to align the infrastructure more closely with business policy. For instance, an administrator may group resources by geographic region to meet business continuity use cases, and then further group them by country to meet compliance goals. The hierarchy need not correspond to actual network topology; instead, it is a grouping that allows the ready specification of policy.

Figure 3. Typical cluster hierarchy. Grouping nodes in this way allows administrators to easily meet business and disaster recovery use cases through the intelligent placement of information.
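The location and redundancy policy rules described above can be sketched as a check against a region/country/city hierarchy like the one in Figure 3. The rule format and node names below are illustrative assumptions, not TransLattice's policy language.

```python
# Node -> (region, country, city), mirroring the Figure 3 grouping.
NODES = {
    "node1": ("NAM",  "USA",     "San Francisco"),
    "node3": ("NAM",  "USA",     "New York"),
    "node6": ("NAM",  "Canada",  "Quebec"),
    "node5": ("EMEA", "France",  "Paris"),
    "node9": ("EMEA", "Germany", "Munich"),
}

def satisfies_policy(plan, min_copies, min_regions, forbidden_countries=()):
    """Check a storage plan (a set of nodes) against redundancy rules
    (copy count, regional spread) and a location rule (countries where
    the data must not be stored)."""
    if len(plan) < min_copies:
        return False                                  # not enough copies
    regions = {NODES[n][0] for n in plan}
    if len(regions) < min_regions:
        return False                                  # not spread enough
    countries = {NODES[n][1] for n in plan}
    return not (countries & set(forbidden_countries)) # location rule

# "At least 3 copies on at least 2 continents":
assert satisfies_policy({"node1", "node6", "node5"}, 3, 2)
# Same number of copies, but all in one region, so it is rejected:
assert not satisfies_policy({"node1", "node3", "node6"}, 3, 2)
```

Within the set of plans that pass such a check, the system would then pick the cheapest plan, falling back to the exception index when capacity or outages force a non-ideal placement.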

Administrators manage these groupings through the TransLattice Cluster Hierarchy Wheel, which provides a convenient interface for exploring the current status of a cluster and its associated nodes. For quick and cohesive viewing, cluster information is represented in a circular configuration rather than in a tree (Fig. 4). Policy levels, which correspond to a concept or type of grouping, are represented as rings on the wheel, with the innermost ring representing the broadest grouping (such as a region). The sectors within policy levels are policy domains. A policy domain is a logical grouping of system resources in a cluster (which might correspond, for example, to a specific city or continent within a region). The outermost ring on the wheel ultimately drills down to the node level, and each sector corresponds to a specific node. When a specific policy domain or node is selected, the wheel rotates and zooms in to provide an enhanced view of the status of associated resources. Policy levels and their names are shown in the legend on the left side of the hierarchy wheel. The ability to define levels provides considerable flexibility in the policy specification process.

Inherently Resilient Architecture

Inherent resilience is another aspect that sets the TransLattice platform apart from traditional infrastructures, which tend to rely on complex replication to allow for disaster recovery. In fact, most traditional frameworks require duplication of the entire application stack at some secondary location, and then use storage area network snapshot replication or database log shipping to periodically move changes from the primary location to the secondary one. In the TransLattice system, however, resilience against facility failure is a fundamental trait of the distributed architecture. Because data is stored redundantly across the nodes based on policy, the system can continue processing if a node or location fails, while automatically rebuilding redundancy.

Moreover, because all nodes provide the required application services in a resilient fashion, organizations no longer need to set up and maintain dedicated failover sites. No resources are dedicated purely to disaster recovery; instead, surplus resources also satisfy increases in application demand and improve performance for application users.

Figure 4. Cluster Hierarchy Wheel. The hierarchy shown corresponds to that of Figure 3: nodes are grouped first by region, next by country, and finally by city.

How much difference does this inherent resilience make when determining how well the system responds to failures? Consider the concepts of RPO (Recovery Point Objective) and RTO (Recovery Time Objective). The RPO of a system is the specified amount of data that may be lost in the event of a failure, while the RTO is the amount of time it will take to bring the system back online after a failure. In snapshot replication systems, the RPO may be more than a day's worth of changes, and the RTO generally requires manual intervention and may run to several hours. In the TransLattice system, the majority of users are not affected by a failure. For the users who are affected, the RTO is generally less than a minute, which represents the amount of time required for their client to reconnect to a functioning node. Furthermore, TAP allows applications to choose their RPO on a transaction-by-transaction basis: critical transactions can be made instantly durable (with an RPO of effectively zero, preserving the transaction once success has been returned), while larger and less critical transactions can be streamed out as resources allow. In other words, TAP provides a framework for maximum resilience with minimum effort, enabling enterprises to feel secure in their ability to preserve business continuity and prevent data loss.

Sophisticated Redundancy Model

Similarly, TAP helps companies avoid the pitfalls of conventional storage architectures, which tightly couple storage components to provide redundancy. For instance, in a RAID-5 or RAID-6 array, a group of drives is combined into an array that maintains parity to cope with drive failure. However, rebuilding after a failure requires all data on the array to be read, a lengthy process that is likely to degrade performance and leave data vulnerable to loss in the event of any additional failures.

The TAP architecture, by contrast, loosely couples all data storage locations and uses different combinations of storage elements to store each object. In the event that a node or storage element fails, only a relatively small amount of work is required to restore redundancy, and this workload is fairly distributed throughout the system (Fig. 5).

Figure 5. TransLattice's redundancy model combines business continuity and storage redundancy into a cohesive architecture, while ensuring that redundancy can be quickly restored after failure.

Scalability

When analyzing the reliability of a system, we typically look at two key industry-standard metrics: MTBF (Mean Time Between Failures), which specifies the rate at which infrastructure component failures are expected to occur, and MTTR (Mean Time To Repair), which is the anticipated amount of time required before a failure is repaired. Conventional redundancy architectures often have substantial MTTRs, during which any subsequent failure may cause loss of data or application availability. In fact, many conventional business continuity architectures have no redundancy left while a failover site is active, and deactivating the failover site can be a complicated procedure requiring the manual replication of data from the failover site back to the primary site.

The TransLattice system improves MTTR by automatically managing redundancy in both normal and failure cases. Furthermore, the way data placement occurs allows all the nodes to fairly amortize the work of restoring redundancy in the event of a failure, thereby speeding recovery. For example, if an organization has an eight-node cluster spread across four locations, with a policy specifying that each piece of information must be stored on at least three nodes and in at least two physical locations, there are 56 different ways that each piece of data can be stored. The system selects between these storage plans individually for each object. The large number of plans ensures that only a small proportion of data objects lose redundancy if two nodes fail, while also ensuring that all nodes share the small amount of work required to restore compliance with the redundancy policy. Because redundancy is automatically managed by the distributed system, failures of individual disks or processing nodes do not require administrator intervention. Administrators can simply replace the damaged resources whenever it is convenient.
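The 56-plan figure from the example above can be verified directly. Assuming the eight nodes sit two per location across the four locations (the layout itself is an assumption; the text only gives the counts), enumerating the minimal three-node plans reproduces the number, and also shows why a two-node failure degrades only a small fraction of plans:

```python
from itertools import combinations

# Eight nodes, two per location across four locations (assumed layout).
LOCATION = {"n1": "A", "n2": "A", "n3": "B", "n4": "B",
            "n5": "C", "n6": "C", "n7": "D", "n8": "D"}

# Minimal storage plans: 3 nodes spanning at least 2 locations.
plans = [set(c) for c in combinations(LOCATION, 3)
         if len({LOCATION[n] for n in c}) >= 2]
assert len(plans) == 56          # the figure cited in the text

# If two specific nodes fail, only the plans containing BOTH of them
# drop to a single surviving copy; that is a small share of the 56:
at_risk = [p for p in plans if {"n1", "n2"} <= p]
assert len(at_risk) == 6         # roughly 11% of plans
```

Note that with at most two nodes per location, every three-node subset already spans two locations, so all C(8,3) = 56 combinations qualify; rebuilding those six degraded plans is then spread across the six surviving nodes.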
Additionally, no resources are dedicated purely to redundancy or spares, so all parts of the system can be used to satisfy user requests.

The TransLattice Application Platform also vastly simplifies and reduces many traditional challenges and expenses associated with scaling, including component or vendor limitations, inaccurate planning, and changing business requirements. Traditional architectures often require manual federation of data, complicated partitioning schemes, or careful balancing of components to meet performance and scalability goals. To illustrate, imagine a current infrastructure composed of a multitude of interdependent and complicated tiers of components, each of which is carefully aligned with respect to another. If one component in the infrastructure does not scale well, other components might be subsequently stuck. In some cases, this precarious framework demands a forklift upgrade, where the existing application stack must be completely replaced with another of greater scale.

Obviously, this level of complexity in traditional deployments makes capacity planning tremendously important when anticipating future demand. Overbuilding an application is expensive in both operating expenses and capital costs, but infrastructure that is originally provisioned too small may need to be replaced prematurely and may not be able to meet business needs. Due to TAP's scale-out capabilities, however, organizations can easily expand capacity and storage by adding nodes to the cluster or by leveraging utility computing services in the cloud. This ease of scalability frees organizations from the need to overprovision in the face of uncertain or intermittent future demand, and instead allows them to accommodate business needs with agility.

Network Services

TransLattice nodes are designed to simplify management because they are self-administering and require minimal local configuration. Additionally, TransLattice nodes are typically deployed on dynamic (DHCP) addressing, which eliminates the need for reconfiguration when computing resources are moved or network renumbering occurs. All communications between nodes occur over an SSL-based secure overlay network, and nodes automatically maintain connectivity with other nodes to provide cluster services.

When users access the application, they do so through a Service Entry Point (SEP). The administrator specifies a number of these entry points where users connect into the cluster to obtain services. Nodes arbitrate with each other for ownership of SEP addresses; as long as one node remains functional on the subnet where the SEP resides, services remain available through that SEP. When users connect, they are directed to the node that can most effectively handle their requests, taking into account loading, data location, and the user's location on the network. This provides linear scalability of load-balancing performance. In the event of node failure, another node takes over its address and continues servicing requests. The end result is that connectivity between elements of the system, and end users' ability to reach the system, remains consistent even in the event of failure.

Management & Administration

Because TAP's significantly different architecture unites many components of the framework into a single, cohesive system, the platform automates many operations that have traditionally required careful tuning and configuration by personnel. No longer bound by tedious and tactical chores, administrators gain back valuable time to apply toward strategy and fundamental business needs.
Instead of worrying about the minute details of network and resource utilization, for example, personnel can focus on delivering new functionality in the application, addressing new business cases, and planning infrastructure. Any remaining administrative actions are tied directly to business requirements. IT staff can carefully determine what types of policy should be in place for data storage and redundancy, the structure of the underlying computing resources, and the types and methods of access provided to end users. While administrators are no longer required to micromanage information storage, they can always determine the location of data through reports. Reports provide data access locations, which can be useful when analyzing how the system is used and where capacity might need to be added.

TransLattice nodes maintain global connectivity on the overlay network through a variety of mechanisms. Nodes attempt to open connections to nodes with which they are not directly connected, using a predetermined static address, an address found using DNS Service Discovery, or an address provided by an adjacent node that has direct connectivity to the desired node. In short, this means that if two nodes have any connectivity path and can connect to any common node, they can find each other for direct communication. As a result, minimal administration is required, and the system maintains connectivity when network changes occur.
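The discovery guarantee just stated, that any chain of connected nodes lets two endpoints find each other, is essentially graph reachability. A minimal sketch (the adjacency map and node names are hypothetical; the real mechanisms involve static addresses, DNS Service Discovery, and peer introductions):

```python
from collections import deque

def can_discover(a, b, adj):
    """Return True if node `a` can reach node `b` through any chain of
    already-connected nodes, each hop able to introduce the next.
    `adj` maps each node to the peers it currently knows."""
    seen, frontier = {a}, deque([a])
    while frontier:
        n = frontier.popleft()
        if n == b:
            return True
        for peer in adj.get(n, ()):
            if peer not in seen:
                seen.add(peer)
                frontier.append(peer)
    return False

# n1 and n3 are not directly connected, but n2 can introduce them;
# n4 has no connectivity path at all.
adj = {"n1": {"n2"}, "n2": {"n1", "n3"}, "n3": {"n2"}, "n4": set()}
assert can_discover("n1", "n3", adj)
assert not can_discover("n1", "n4", adj)
```

This is why the overlay needs so little administration: as long as some path exists after a network change, nodes re-establish direct communication on their own.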

Summary

TransLattice offers significant advantages in the deployment and operation of relational applications. Unlike conventional infrastructures, the TransLattice Application Platform offers a geographically distributed architecture, including a decentralized application server and the first truly distributed relational database. Unified, simplified management provides administrators with greater control over policy and data location, while the TransLattice platform intelligently automates the process of distributing data and maintaining redundancy. As a result, enterprise applications become highly resilient against disasters and data loss, while scalability and performance limitations are eliminated.

Ultimately, TransLattice is redefining application infrastructure to align more closely with the overall objectives of the business. With the TransLattice system in place, organizations save time, significantly reduce costs and complexity, and become better positioned to focus on value-generating business concerns. This is a dramatic change, indeed, but one that offers dramatic rewards.

Corporate Headquarters: TransLattice, Inc., 2900 Gordon Avenue, Santa Clara, CA. Phone: (408) 749-8478. Email: info@translattice.com. Web: translattice.com

2011 TransLattice, Inc. All Rights Reserved. TransLattice and the TransLattice logo are property of TransLattice, Inc. in the United States and other countries. Part # 9800-0001-03