SE Memory Consumption

### Overview

Calculating the utilization of memory within a Service Engine (SE) is useful for estimating the number of concurrent connections an SE can support, or the amount of memory that may be allocated to features such as HTTP caching. Service Engines support 1-128 GB of memory. Avi's minimum recommendation is 2 GB, though an SE will work with less. Providing more memory greatly increases an SE's capacity, as does adjusting the priority of memory between concurrent connections and optimized performance buffers.

Memory allocation for Avi Vantage SE deployments in write access mode is configured via Infrastructure > Cloud > SE Group properties. Changes to the Memory per Service Engine property only affect newly created SEs. For read or no access modes, memory is configured on the remote orchestrator, such as vCenter. Changing the memory of an existing SE requires the SE to be powered down prior to the change.

### Memory Allocation

A Service Engine's memory allocation is summarized in the following three buckets:

| Bucket | Size | Description |
|--------|------|-------------|
| Base | 500 MB | Required to turn on the SE (Linux plus basic SE functionality) |
| Local | 100 MB / core | Memory allocated per vCPU core |
| Shared | Remaining | Remaining memory is split between Connections and HTTP Cache |

The shared memory pool is divided between two components, Connections and Buffers. A minimum of 10% must be allocated to each component. Changing the Connection Memory Percentage slider affects newly created SEs but does not affect existing SEs.

Connections consists of the TCP, HTTP, and SSL connection tables. Memory allocated to connections directly impacts the total concurrent connections a Service Engine can maintain.

Copyright 2018 Avi Networks, Inc.
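The bucket math above can be sketched as a small helper. This is a sketch under the stated assumptions (memory in MB, the 500 MB base and 100 MB-per-core figures from the allocation table, and the 10% minimum per shared component); the function names are illustrative, not part of any Avi API.

```python
BASE_MB = 500        # base bucket: Linux plus basic SE functionality
PER_CORE_MB = 100    # local bucket: memory allocated per vCPU core

def shared_pool_mb(se_memory_mb: int, num_vcpus: int) -> int:
    """Memory remaining for the shared pool (Connections + Buffers)."""
    return se_memory_mb - BASE_MB - PER_CORE_MB * num_vcpus

def split_shared(shared_mb: float, connection_percent: float) -> tuple:
    """Split the shared pool; each component must receive at least 10%."""
    pct = min(max(connection_percent, 0.10), 0.90)
    connections = shared_mb * pct
    buffers = shared_mb - connections
    return connections, buffers

# Example: an 8 GB SE with 8 vCPUs leaves 6700 MB shared;
# at the 50% default, each component receives 3350 MB.
print(split_shared(shared_pool_mb(8000, 8), 0.50))
```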

Buffers consists of application-layer packet buffers. These buffers are used at layers 4 through 7 to queue packets for improved network performance. For instance, suppose a client is connected to the Avi SE at 1 Mb/s with high latency, while the server is connected to the SE at 10 Gb/s with negligible latency. The server can respond to a client query by transmitting the entire response and moving on to service the next client request. The SE buffers the response and transmits it to the client at the client's much slower speed, handling any retransmissions without needing to interrupt the server. This memory allocation also covers application-centric features such as HTTP caching and improved compression.

To maximize the number of concurrent connections, shift the priority towards Connections. Avi's benchmark calculations are based on the default setting, which makes 50% of shared memory available for connections.

### Concurrent Connections

Most Application Delivery Controller (ADC) benchmark numbers are based on an equivalent of Transmission Control Protocol (TCP) Fast Path, which uses a simple memory table mapping client IP:port to server IP:port. This uses very little memory, enabling extremely large concurrent-connection numbers. However, it is also not relevant to the vast majority of real-world deployments, which rely on full TCP and application-layer proxying. Avi's benchmark numbers are based on full TCP proxy (L4), TCP plus HTTP proxy with buffering and basic caching plus DataScript (L7), and the same scenario with Transport Layer Security (TLS) 1.2 between the client and Avi. The per-connection memory consumption numbers listed below could be higher or lower in practice. For instance, typical buffered HTTP request headers consume 2 KB, but they could be as high as 48 KB. The numbers below are intended to provide real-world sizing guidelines, not extreme best- or worst-case benchmark numbers.
Memory consumption per connection:

- 10 KB for L4
- 20 KB for L7
- 40 KB for L7 + SSL (RSA or ECC)

To calculate the potential concurrent connections for a Service Engine, use the following formula:

Concurrent connections = ((SE memory - 500 MB - (100 MB * num of vCPUs)) * Connection Percent) / Memory per Connection

To calculate layer 7 sessions (memory per connection = 20 KB = 0.02 MB) for an SE with 8 vCPU cores and 8 GB RAM, using a Connection Percentage of 50%, the math looks like:

((8000 - 500 - (100 * 8)) * 0.50) / 0.02 = 167,500
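The sizing formula above can be expressed as a short helper. This is a sketch assuming memory figures in MB; the function name is illustrative.

```python
def concurrent_connections(se_memory_mb, num_vcpus, connection_percent, mem_per_conn_mb):
    """Estimate concurrent connections from the SE sizing formula:
    ((SE memory - 500 - 100 * vCPUs) * connection percent) / memory per connection."""
    shared_mb = se_memory_mb - 500 - 100 * num_vcpus  # base + per-core local memory
    return round((shared_mb * connection_percent) / mem_per_conn_mb)

# Worked example from the text: 8 vCPUs, 8 GB RAM, 50% connection share,
# L7 at 20 KB (0.02 MB) per connection
print(concurrent_connections(8000, 8, 0.50, 0.02))  # 167500
```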

The table below shows the number of concurrent connections for L4 (TCP Proxy mode) optimized for connections (i.e., Connection Percent = 90%):

| SE Memory | 1 vCPU | 4 vCPU | 32 vCPU |
|-----------|--------|--------|---------|
| 1 GB | 36k | 9k | n/a |
| 4 GB | 306k | 279k | 27k |
| 32 GB | 2.82m | 2.80m | 2.52m |

### View Allocation via CLI

From the CLI:

```
show serviceengine <SE Name> memdist
```

This command shows a truncated breakdown of memory distribution for the SE. The example SE below has one vCPU core, with 633 MB allocated for the shared memory's connection table. By default, the HTTP cache limit is set to 50% of system mbufs. The clusters parameter is the number of mbufs, and each mbuf is 2240 bytes. Therefore, the HTTP cache size limit is (clusters * 2240) / 2.

```
[admin:controller]: > show serviceengine Avi-se-ktiwo memdist
Field                      Value
se_uuid                    Avi-se-ktiwo:se-00505688b5d5
proc_id                    C0_L4
huge_pages                 348
clusters                   208549
shm_memory_mb              96
conn_memory_mb             633
conn_memory_mb_per_core    633
num_queues                 1
num_rxd                    2048
num_txd                    2048
hypervisor_type            1
shm_conn_memory_mb         33
os_reserved_memory_mb      0
```

### View Allocation via API

The total memory allocated to the connection table, and the percentage in use, may be viewed via the API. Use the following queries:

```
https://<ip address>/api/analytics/metrics/serviceengine/se-<SE UUID>?metric_id=se_stats.max_connection_mem_total
```

Returns the total memory available to the connection table. In the response snippet below, 141 MB is allocated:

```
"statistics": {
    "max": 141,
```
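The mbuf arithmetic above can be checked with a short sketch. The 2240-byte mbuf size and the 50% default are from the text; the clusters value is the one shown in the sample `memdist` output.

```python
MBUF_BYTES = 2240  # each mbuf is 2240 bytes

def http_cache_limit_bytes(clusters, cache_fraction=0.5):
    """HTTP cache size limit: (clusters * 2240) / 2 at the 50% default."""
    return int(clusters * MBUF_BYTES * cache_fraction)

# clusters = 208549 from the sample memdist output above
print(http_cache_limit_bytes(208549))  # 233574880 bytes, roughly 223 MiB
```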

```
https://<ip address>/api/analytics/metrics/serviceengine/se-<SE UUID>?metric_id=se_stats.avg_connection_mem_usage&step=5
```

Returns the average percentage of memory used during the queried time period. In the result snippet below, 5% of the memory was in use:

```
"statistics": {
    "min": 5,
    "max": 5,
    "mean": 5
},
```
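For scripted monitoring, the two queries above can be wrapped in a small helper that builds the metrics URL. This is a sketch: `se_metric_url` is a hypothetical name, the controller address and SE UUID are placeholders, and authentication against the controller is omitted.

```python
def se_metric_url(controller_ip, se_uuid, metric_id, step=None):
    """Build a Service Engine metrics query URL for the Avi analytics API."""
    url = (f"https://{controller_ip}/api/analytics/metrics/"
           f"serviceengine/se-{se_uuid}?metric_id={metric_id}")
    if step is not None:
        url += f"&step={step}"  # optional sampling interval, as in the second query
    return url

# The two queries from the text (placeholder controller IP and SE UUID):
total = se_metric_url("10.0.0.1", "00505688b5d5", "se_stats.max_connection_mem_total")
usage = se_metric_url("10.0.0.1", "00505688b5d5", "se_stats.avg_connection_mem_usage", step=5)
print(total)
print(usage)
```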