Load Balancing Microservices-Based Applications

WHITE PAPER

SUMMARY

Applications have evolved from client-server to SOA-based to the microservices-based architecture now used in most modern web apps. This evolution has greatly impacted how applications are created, scaled, secured, and managed. Unfortunately, Application Delivery Controllers (ADCs), or load balancers, in the industry today still lag behind. An application that previously needed one service now requires ten or more different services. Unless modern ADCs are microservices aware and have the right hooks (via APIs, automation, and orchestration frameworks) to provision, configure, and manage each microservice, IT teams will struggle to keep up with the lifecycle management of applications. This paper describes how application architecture has evolved (see Figure 1) and how the Avi Vantage Platform with Distributed Microservices™ can dramatically reduce the operational impact of a microservices-based application architecture.

ABOUT THIS DOCUMENT

This white paper describes the evolution of application architecture and the benefits of orchestration platforms for microservices architecture. Avi Vantage natively integrates with microservices apps, offering elastic scale, data plane isolation for tenants and applications, application affinity, programmability, N-way active redundancy, REST API communication, and more.

Figure 1: Evolution of application architectures (monolithic, loosely coupled, interchangeable components/microservices) versus application delivery systems (load balancer appliances, ADC appliances/software, distributed network services per microservice)

THE MONOLITHIC ARCHITECTURE

Since the earliest days of web application development, the most widely used enterprise application architecture has packaged all application server-side components into a single unit. Many enterprise Java applications consist of a single WAR or EAR file. Let's imagine, for example, that you are building an online store that takes orders from customers, verifies inventory and available credit, and ships orders. The application consists of several components, including the StoreFront user interface (UI) and services for managing the product catalog, processing orders, and managing the customer's account. These services share a domain model consisting of entities such as Product, Order, and Customer. Despite having a logically modular design, the application is deployed as a monolith. For example, if you were using Java, the application would consist of a single WAR file running on a web container such as Tomcat (see Figure 2).

Figure 2: Traditional monolithic application architecture (browser and load balancer in front of a single Tomcat WAR containing the StoreFront UI, catalog, product management, and customer services, backed by one customer database)
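To make this concrete, the sketch below shows, in illustrative hypothetical code rather than code from any product, how the store's logically separate services might live side by side inside one deployable unit, invoking one another as plain in-process method calls. The class names (CatalogService, AccountService, OrderService) and their logic are assumptions chosen only to mirror the example above.

// Hypothetical monolith: every "service" is just a class packaged into the same WAR,
// so placing an order is a chain of in-process method calls sharing one JVM.
import java.util.Map;

class CatalogService {
    // In a real WAR this would query the shared database; here it is stubbed.
    boolean inStock(String productId, int quantity) { return quantity <= 10; }
}

class AccountService {
    boolean hasCredit(String customerId, double amount) { return amount < 1_000.0; }
}

class OrderService {
    private final CatalogService catalog = new CatalogService();
    private final AccountService accounts = new AccountService();

    String placeOrder(String customerId, Map<String, Integer> items, double total) {
        for (Map.Entry<String, Integer> item : items.entrySet()) {
            if (!catalog.inStock(item.getKey(), item.getValue())) return "OUT_OF_STOCK";
        }
        if (!accounts.hasCredit(customerId, total)) return "CREDIT_DECLINED";
        return "ORDER_ACCEPTED";
    }
}

// The StoreFront layer calls the services directly; shipping a change to any one of
// them means rebuilding and redeploying the entire unit.
public class StoreFrontMonolith {
    public static void main(String[] args) {
        OrderService orders = new OrderService();
        System.out.println(orders.placeOrder("customer-42", Map.of("sku-1", 2), 199.0));
    }
}

Because all of these classes share one process, the CPU-heavy and memory-heavy parts of the application cannot be scaled independently, which is exactly the limitation discussed next.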

Monolithic architecture has a number of benefits. Monolithic applications are simple to develop, since IDEs and other development tools are oriented around building a single application. They are easy to test, since you just need to launch one application. Monolithic applications are also simple to deploy, since you just have to copy the deployment unit (a file or directory) to a machine running the appropriate kind of server.

This approach works well for relatively small applications. However, the monolithic architecture becomes unwieldy for complex applications. A monolithic architecture also makes it difficult to test and adopt new technologies. It is difficult, for example, to try out a new infrastructure framework without rewriting the entire application, which is risky and impractical. Consequently, you are generally stuck with the technology choices you made at the start of the project. Because all the application code runs in the same process on the server, scaling individual portions of the application is difficult, if not impossible. To deploy changes to one application component, you have to build and deploy the entire monolith; this is not only complex, risky, and time consuming, but also requires the coordination of many developers and long test cycles. If one service is memory intensive and another CPU intensive, the server must be provisioned with enough memory and CPU to handle the baseline load for each service. This becomes expensive if each server needs a large amount of CPU and RAM, and is exacerbated if load balancing is used to scale the application horizontally. In other words, the monolithic architecture doesn't scale to support large, long-lived applications. A huge monolithic application can quickly become a delicate house of cards, where a fault in one minor part of the application can bring the whole system down.

THE MICROSERVICES ARCHITECTURE

Microservices architecture was designed to address the issues created by monolithic architecture. The services defined in the monolithic application architecture are decomposed into individual services and deployed separately on different hosts. Each microservice is aligned with a specific business function and defines only the operations necessary to that business function. Like service-oriented architecture (SOA), microservices architecture may often make use of a message bus, but the messaging layer has no logic whatsoever; it is used purely as a transport for messages from one service to another. With microservices, development is rapid, and services evolve alongside the needs of the business.

Benefits of Using Microservices Architecture

1. First, each microservice is relatively small. The code is easier for developers to understand, the small code base doesn't slow down the IDE, and each service typically starts much faster than a large monolith. Overall, this allows developers to be more productive and speeds up deployments.

2. Second, each service can be deployed independently of other services. If the developers responsible for a service need to release local changes, they can deploy them without coordinating with other developers. A microservices architecture makes continuous deployment feasible and attractive, and representational state transfer (REST) offers a lightweight mechanism for communicating between services (see the sketch after this list).

3. Third, the ability to scale applications is one of the biggest advantages of microservices architecture. With a monolithic architecture, components with wildly different resource requirements (for example, large CPU versus large memory requirements) must be deployed together. In contrast, a microservices architecture deploys each service on hardware that is best suited to its resource requirements and scales each service independently of other services.
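Benefit 2 above leans on REST as the lightweight mechanism for service-to-service communication. The following minimal sketch, using the JDK's built-in HTTP client (Java 11+), shows how a hypothetical checkout service might ask a separately deployed catalog service about stock levels; the host name, port, path, and JSON shape are illustrative assumptions, not a prescribed API.

// Minimal sketch of service-to-service REST communication.
// The catalog service's address and response format are assumptions for illustration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class CheckoutClient {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    // Ask the catalog microservice whether a product is in stock.
    static boolean isInStock(String productId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://catalog-service:8080/products/" + productId + "/stock"))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        // Tolerant check against an assumed JSON body such as {"inStock": true}.
        return response.statusCode() == 200
                && response.body().replace(" ", "").contains("\"inStock\":true");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sku-1 in stock? " + isInStock("sku-1"));
    }
}

Because checkout depends only on the catalog service's HTTP contract, the two services can be built, deployed, and scaled by different teams on different schedules.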

SCALING MICROSERVICES ALONG THE X, Y, AND Z AXES

The most common representation of application scaling is the three-dimensional Scale Cube model from The Art of Scalability. According to this model, X-axis scaling is commonly used to improve an application's capacity and availability, and involves running multiple identical copies of the application behind a load balancer.

Similarly, when using Z-axis scaling, each server runs an identical copy of the code. Unlike X-axis scaling, however, each server is responsible for only a subset of the data, and some component of the system is responsible for routing each request to the appropriate server. A commonly used routing criterion is an attribute of the request, such as the primary key of the entity being accessed (i.e., sharding); a minimal routing sketch appears at the end of this section.

Figure 3: Application scaling along the X, Y, and Z axes (X-axis: horizontal duplication, many nodes that are clones of one another; Y-axis: splitting one monolithic system by function or service; Z-axis: lookup-oriented splits, or data sharding)

Z-axis scaling, like X-axis scaling, improves application capacity and availability. However, neither approach solves the problems of increasing development and application complexity, which is where Y-axis scaling comes into play. Unlike X-axis and Z-axis scaling, which consist of running multiple identical copies of the application, Y-axis scaling splits the application into multiple different services. Each service is responsible for one or more closely related functions (see Figure 3). Y-axis scaling offers a couple of different ways to decompose an application into services. One approach is to use verb-based decomposition and define services that implement a single use case, such as checkout. Load balancing with an ADC is the single most effective way to scale applications. If we apply Y-axis decomposition to the example architecture above, we get the architecture shown in Figure 4.

Figure 4: Microservices-based application architecture (separate catalog, checkout, review management, account management, recommendation, and customer services)
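The Z-axis routing rule described above can be expressed in a few lines: hash the request's primary key and use the result to pick which identical copy of the service owns that slice of the data. The sketch below is a generic illustration of sharding, with made-up endpoint names, rather than any particular product's router.

// Z-axis scaling sketch: every server runs the same code, but a router forwards each
// request to the server responsible for that entity's shard of the data.
import java.util.List;

public class ShardRouter {
    private final List<String> shardEndpoints;

    ShardRouter(List<String> shardEndpoints) {
        this.shardEndpoints = shardEndpoints;
    }

    // Route by an attribute of the request; here, the customer's primary key.
    String endpointFor(String customerId) {
        int shard = Math.floorMod(customerId.hashCode(), shardEndpoints.size());
        return shardEndpoints.get(shard);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(List.of(
                "http://orders-shard-0:8080",
                "http://orders-shard-1:8080",
                "http://orders-shard-2:8080"));
        System.out.println("customer-42 is served by " + router.endpointFor("customer-42"));
    }
}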

RISE OF REST

Coinciding with the modularization of web applications was the evolution of interprocess communication (IPC), which made use of text-based serialization formats like XML and JSON. Protocols such as SOAP allowed IPC across HTTP, and soon web developers were building not just web applications that served content to browsers, but also web services that performed actions and delivered data to other programs. This services-based architecture proved to be very powerful, as it eliminated dependencies on shared code libraries and allowed application developers to further decouple application components. The SOAP protocol and the related WS-* standards soon became increasingly complex and heavily dependent on specific implementations in application servers, so developers migrated to the much more lightweight representational state transfer (REST) style. As the use of mobile devices exploded, and as web UX development switched to AJAX and JavaScript frameworks, application developers started to make extensive use of REST for transmitting data between client devices and web servers.

AVI NETWORKS VANTAGE - DESIGNED FOR MICROSERVICES APPS

The Avi Vantage Platform is a software-defined, next-generation application delivery platform that provides integrated analytics as well as secure, reliable, and scalable network services for cloud applications. At the heart of Avi Vantage is a revolutionary architecture based on software-defined networking (SDN) principles: Avi Vantage separates the data plane from the control plane, an industry first for application delivery controllers and load balancers. This architecture enables seamless scaling of application delivery services within and across data center and cloud locations while maintaining a single point of management and control (see Figures 5 and 6).

Figure 5: Avi Vantage Platform system components (Avi Console and REST API, Avi Controller, and distributed microservices served by Avi Service Engines)

Distributed Microservices™ is the distributed data plane of the Avi Vantage Platform. Implemented by high-performance Avi Service Engines (SEs), it provides comprehensive application delivery services such as load balancing, application acceleration, and application security. Using Avi's rich set of data, control, and management plane services, Avi SEs can be placed close to application microservices and grouped together for higher performance and faster responses to clients. Additionally, integrated data collectors provide end-to-end timing, metrics, and logs for each user-to-app transaction. Actionable insights about end-user experience, application performance, infrastructure utilization, and anomalous behavior help improve applications.

Figure 6: Separation of the data, control, and management planes in the Avi Vantage Platform, compared with hardware and virtual ADC appliances

ANALYTICS-DRIVEN APPLICATION DELIVERY

As demand for a particular microservice application grows, Avi Vantage's unique distributed architecture allows the Avi SEs to automatically scale out without any human intervention, and each microservice scales out independently of the others. Avi's Inline Analytics™ engines constantly monitor traffic patterns for each microservice application. When a customizable threshold is met, the increasing traffic load is handled seamlessly by newly scaled-out Avi SEs. The Inline Analytics engines can also send triggers based on ambient load to scale the backend microservices applications up or down.

ELASTIC SCALE

Avi Networks' elastic data plane can scale in real time to match the needs of microservices-based applications across hundreds of tenants and thousands of applications. Avi SEs allow the network services for each microservice to be individually scaled out/in or up/down (see Figure 7).

Figure 7: As demand increases, Avi SEs are automatically created by the Avi Controller

DATA PLANE ISOLATION FOR TENANTS AND APPLICATIONS

To avoid sharing appliances between critical applications, individual Service Engines are allocated to tenants and applications for data plane isolation. This eliminates the noisy-neighbor problem, where rogue microservices or tenants can potentially impact the performance of adjacent applications (see Figure 8).

Figure 8: Avi Service Engines can be dedicated to microservices and/or tenants for true isolation

APPLICATION AFFINITY

Avi Service Engines are placed close to microservices applications for the best application performance and minimal traffic tromboning in the network. Whether microservices run inside a single physical server, in different servers within a single data center, or across different data centers, Avi SEs automatically discover and locate themselves in the closest possible proximity to each microservice (see Figure 9).

Figure 9: Avi SEs can be collocated with each microservice within or across cloud locations (dev, test, and production)

PROGRAMMABILITY

All interactions with the Avi Controller occur through native REST APIs, since both the Avi UI and the CLI are built on top of the REST APIs. Avi Vantage natively supports DevOps automation tools like CFEngine, Chef, Puppet, and Salt (see Figure 10).

Figure 10: The Avi Controller interacts using REST APIs
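As a concrete illustration of this REST-first model, the sketch below uses the JDK HTTP client to create a server pool through a controller's REST API, the kind of call a provisioning script or a tool such as Chef or Puppet would issue. It is a hedged, generic example: the controller address, the /api/pool path, the bearer-token header, and the JSON fields are placeholders meant to show the pattern, not the Avi Controller's exact API schema or authentication flow; consult the Avi API documentation for the real resource definitions.

// Hedged sketch: creating a load-balancing pool via a controller's REST API (Java 11+).
// The URL, path, auth header, and JSON fields are illustrative placeholders only.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreatePoolExample {
    public static void main(String[] args) throws Exception {
        String controller = "https://controller.example.com";     // assumed controller address
        String authToken = System.getenv("CONTROLLER_AUTH_TOKEN"); // assumed auth mechanism

        // Assumed JSON body: a pool named "catalog-pool" with two backend servers.
        String poolJson = "{\"name\": \"catalog-pool\", "
                + "\"servers\": [{\"ip\": \"10.0.0.11\", \"port\": 8080}, "
                + "{\"ip\": \"10.0.0.12\", \"port\": 8080}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(controller + "/api/pool"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + authToken)
                .POST(HttpRequest.BodyPublishers.ofString(poolJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Controller responded with HTTP " + response.statusCode());
    }
}

In practice, configuration-management tools wrap calls like this in their own resources or scripts, so ADC and microservice configuration can be versioned and deployed alongside application code.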

N-WAY ACTIVE REDUNDANCY

Using redundancy principles from web-scale data centers, Avi Networks provides N-way Active-Active redundancy along with traditional Active-Active and Active-Standby availability options (see Figure 11).

Figure 11: Avi Networks supports an N-way active redundancy model

PUTTING IT ALL TOGETHER

Figure 12: Avi SEs attached to the various microservices (catalog, checkout, review management, account management, recommendation, and customer services), controlled and managed by the Avi Controller and serving desktop and mobile clients

Each group of Avi SEs can be associated with a specific tenant. In a multitenant environment, traffic for a particular application is isolated to that tenant's group of Avi SEs. A single instance of the Avi Controller can manage multiple groups of Avi SEs. The Avi Controller's role-based access control (RBAC) mechanism ensures that users who are logged into a particular tenant can view only the details of that tenant.

ACHIEVING A MODERN, SOFTWARE-DEFINED DATA CENTER

Below are the specific steps required to evolve application delivery for modern data centers:

Step No. 1: In terms of architecture, the control and data planes need to be separated within the ADC, and data plane resources need to be dynamically distributed across different hardware platforms and public/private clouds.

Step No. 2: ADCs must achieve application affinity, modeled on the concept of processor affinity, where resources are aligned or pinned to specific functions. There are two major benefits to this approach. First, with ADC resources placed side by side with microservices, application response time improves. Second, this tight alignment (affinity) enables ADC resources to achieve automatic lifecycle management of microservices without manual intervention, significantly reducing management complexity.

Step No. 3: Achieving data plane independence (isolation) enables multitenancy, especially in cloud environments. Multitenant features enable each microservice to operate and change independently without disrupting other microservices; this is also called "no noisy-neighbor impact."

Step No. 4: ADCs must fulfill the self-service programmability and efficiency promises of SDN. Most, if not all, ADC vendors today support REST, the API style of choice in hyperscale web services. However, ADCs can deliver on SDN's promises and enable one-to-one communication between application controllers and control elements through RESTful APIs only when the control and data planes are separated.

SUMMARY

Application architecture has evolved from purpose-built, monolithic ("shrink-wrapped") code and products to a tightly federated collection of microservices that are both modular and reusable. It is as if app developers began using a common set of Lego blocks to build any number of web-based apps, limited only by their imagination. For networking teams, the move to microservices-based app development means that existing assumptions around traffic patterns, load balancing scale, and service requirements are no longer valid. Instead, what's needed is a greater level of network-wide intelligence and a new application delivery architecture that mirrors microservices apps.

Avi Vantage is an elastically scalable load balancer with a distributed data plane that can span, serve, and scale apps across on-premises and cloud locations. The distributed data plane enables customers to achieve application affinity at the microservice level, dramatically improving application performance. In addition, the clean separation of planes, with a unified and centralized control plane, significantly alleviates the operational complexity associated with individually integrating, operating, and managing each ADC appliance.

CASE STUDY: How a software-defined, flexible load balancer can adapt to changes in application architecture over time

Stage 1: In its early stages, the company predominantly focuses on keeping complexity and overhead low, which leads to rapid software development and new features. The rush to deliver proof-of-concept features means that developers usually do not have the luxury of designing applications for scalability, high availability, and redundancy. Typically, these applications are deployed on a web server and a database server. With Avi Vantage, high availability and scalability can be quickly achieved by running the application on a pair of web servers behind a pair of Avi SEs, which also keeps operational costs to a minimum.
Stage 2: As demand for the business grows, the company can scale quickly by adding more resources (X-axis scaling) behind Avi Vantage. As the number of web servers grows, managing static content can become a nightmare, but this can be mitigated by using the caching engines on Avi. As the popularity of the company and its product increases, the need to scale and perform better necessitates rearchitecting the application and breaking it into smaller applications along the lines of services and functions. Database partitions start to make sense, and partitions emerge along geographical locations, names, and so on.

Stage 3: With Avi Vantage's future-proof, security-focused design, the same ADC used since day one of the application deployment can still be used as application traffic grows. Avi Vantage can be flexibly deployed in multi-cloud environments (for example, with applications in local data centers and in the cloud) as the number of SLAs increases. Given Avi Vantage's centralized control and management interface, managing and load balancing enterprise applications remains a simple task.