SE Memory Consumption


Calculating the utilization of memory within a Service Engine (SE) is useful for estimating the number of concurrent connections it can sustain or the amount of memory that may be allocated to features such as HTTP caching. Service Engines support 1-128 GB of memory. Avi's minimum recommendation is 2 GB, though an SE will work with less. Providing more memory greatly increases the scale of capacity, as does adjusting the priority given to memory for concurrent connections versus optimized performance buffers.

Memory allocation for Avi Vantage SE deployments in write access mode is configured via Infrastructure > Cloud > SE Group properties. Changes to the Memory per Service Engine property only affect newly created SEs. For read or no access modes, memory is configured on the remote orchestrator, such as vCenter, and changes to existing SEs require the SE to be powered down prior to the change.

Memory Allocation

A Service Engine's memory allocation is summarized in the following three buckets:

Base: 500 MB. Required to turn on the SE (Linux plus basic SE functionality).
Local: 100 MB per vCPU core.
Shared: Remaining memory, split between Connections and HTTP Cache.

The shared memory pool is divided between two components, Connections and Buffers, and a minimum of 10% must be allocated to each. Changing the Connection Memory Percentage slider affects newly created SEs but does not affect existing SEs. The Connections component consists of the TCP, HTTP, and SSL connection tables; memory allocated to connections directly determines the total concurrent connections a Service Engine can maintain.
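
To make the bucket arithmetic concrete, the following minimal sketch (not part of Avi Vantage; the function name and defaults are assumptions for illustration) derives the shared pool and its Connections/Buffers split for a given SE size:

def se_memory_split(total_mb, num_vcpu, connection_pct=0.20):
    """Estimate how SE memory divides into base, local, and shared pools,
    assuming the breakdown described above: 500 MB base, 100 MB per vCPU,
    and the remainder shared between Connections and Buffers."""
    base_mb = 500
    local_mb = 100 * num_vcpu
    shared_mb = total_mb - base_mb - local_mb
    if shared_mb <= 0:
        raise ValueError("SE memory too small for this vCPU count")
    connections_mb = shared_mb * connection_pct   # TCP/HTTP/SSL connection tables
    buffers_mb = shared_mb - connections_mb       # packet buffers, HTTP cache, compression
    return shared_mb, connections_mb, buffers_mb

# Example: an 8 GB SE with 8 vCPUs at the default 20% connection percentage
print(se_memory_split(8000, 8))   # (6700, 1340.0, 5360.0)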

The Buffers component consists of application-layer packet buffers. These buffers are used at layers 4 through 7 to queue packets and improve network performance. For instance, if a client is connected to the Avi SE at 1 Mbps with high latency while the server is connected to the SE at 10 Gbps with negligible latency, the server can respond to a client query by transmitting the entire response and then move on to service the next client request. The SE buffers the response and transmits it to the client at the much slower client speed, handling any retransmissions without needing to interrupt the server. This memory allocation also covers application-centric features such as HTTP caching and improved compression.

Maximize the number of concurrent connections by shifting the priority towards Connections. Avi's benchmark calculations are based on the default setting, which allocates 20% of the shared memory to connections.

Concurrent Connections

Most ADC benchmark numbers are based on an equivalent of TCP Fastpath, which uses a simple memory table mapping client IP:port to server IP:port. This uses very little memory, enabling extremely large concurrent connection numbers, but it is not representative of the vast majority of real-world deployments, which rely on TCP and application-layer proxying. Avi's benchmark numbers are based on full TCP proxy (L4), TCP plus HTTP proxy with buffering and basic caching plus DataScript (L7), and the same scenario with TLS 1.2 between client and Avi.

The per-connection memory consumption numbers listed below could be higher or lower in practice. For instance, typical buffered HTTP request headers consume 2 KB but can be as large as 48 KB. These numbers are intended to provide real-world sizing guidelines, not extreme best- or worst-case benchmark numbers.

Memory consumption per connection:

10 KB for L4
20 KB for L7
40 KB for L7 + SSL (RSA or ECC)

To calculate the potential concurrent connections for a Service Engine, use the following formula:

Concurrent L4 connections = ((SE memory - 500 MB - (100 MB * num vCPUs)) * Connection Percent) / 10 KB

To calculate layer 7 sessions for an SE with 8 vCPU cores and 8 GB RAM, using the default Connection Percent, the math looks like:

((8000 - 500 - (100 * 8)) * 0.20) / 20 KB ≈ 67,000 concurrent L7 sessions
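
As a worked version of the formula, here is a minimal sketch (an illustrative helper, not Avi-supplied code) covering the L4, L7, and L7 + SSL per-connection costs. It assumes decimal unit conversion (1 GB = 1000 MB, 1 MB = 1000 KB), which reproduces the figures in the table below, and it assumes "optimized for connections" corresponds to the maximum 90% connection share:

# Per-connection memory cost in KB, from the guidelines above
PER_CONN_KB = {"l4": 10, "l7": 20, "l7_ssl": 40}

def concurrent_connections(se_memory_mb, num_vcpu, mode="l4", connection_pct=0.20):
    shared_mb = se_memory_mb - 500 - 100 * num_vcpu    # shared memory pool
    connection_mb = shared_mb * connection_pct         # share for the connection tables
    if connection_mb <= 0:
        return None                                    # not enough memory (the "n/a" case)
    return int(connection_mb * 1000 / PER_CONN_KB[mode])

# 8 GB / 8 vCPU SE at the default 20%: roughly 67,000 L7 sessions
print(concurrent_connections(8000, 8, mode="l7"))                          # 67000
# 4 GB / 4 vCPU SE optimized for connections (assumed 90%): 279,000 L4 connections
print(concurrent_connections(4000, 4, mode="l4", connection_pct=0.90))     # 279000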

        1 vCPU    4 vCPU    32 vCPU
1 GB    36k       9k        n/a
4 GB    306k      279k      27k
32 GB   2.82m     2.80m     2.52m

The table above shows the number of concurrent L4 (TCP Proxy mode) connections when the SE is optimized for connections.

View Allocation via CLI

From the CLI: show serviceengine memdist

This command shows a truncated breakdown of memory distribution for the SE. This SE has one vCPU core, with 141 MB allocated for the shared memory's connection table. The huge_pages value of 91 means there are 91 pages of 2 MB each, indicating 182 MB has been allocated for the shared memory's HTTP cache table.

: > show serviceengine Avi-se-bajip memdist
Field                      Value
se_ref                     Avi-se-bajip:se-0068b1
huge_pages                 91
conn_memory_mb             141
conn_memory_mb_per_core    141

View Allocation via API

The total memory allocated to the connection table and the percentage currently in use may be viewed through the API. Use the following queries:

https://<controller>/api/analytics/metrics/serviceengine/se-<uuid>?metric_id=se_stats.max_connection_mem_total

Returns the total memory available to the connection table. In the response snippet below, 141 MB is allocated.

"statistics": {
    "max": 141,
    ...
}

https://<controller>/api/analytics/metrics/serviceengine/se-<uuid>?metric_id=se_stats.avg_connection_mem_usage&step=5

Returns the average percentage of memory used during the queried time period. In the result snippet below, 5% of the memory was in use.

"statistics": {
    "min": 5,
    "max": 5,
    "mean": 5
},
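
For scripted monitoring, the same metrics can be pulled programmatically. The sketch below is illustrative only: the controller address, credentials, SE UUID, and API version header are placeholders/assumptions, and your controller's authentication requirements may differ.

import requests

CONTROLLER = "https://<controller>"       # placeholder controller address
SE_UUID = "se-<uuid>"                     # placeholder Service Engine UUID

def connection_mem_usage(session):
    # Same query as above: average percentage of connection memory in use
    resp = session.get(
        f"{CONTROLLER}/api/analytics/metrics/serviceengine/{SE_UUID}",
        params={"metric_id": "se_stats.avg_connection_mem_usage", "step": 5},
        verify=False,                     # assumes a self-signed controller certificate
    )
    resp.raise_for_status()
    return resp.json()

with requests.Session() as s:
    s.auth = ("admin", "<password>")      # basic auth assumed; session/token auth is also possible
    s.headers["X-Avi-Version"] = "18.2"   # assumed API version header
    print(connection_mem_usage(s))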
