WHITE PAPER
NGINX: An Open Source Platform of Choice for Enterprise Website Architectures


Ashnik Pte Ltd, Singapore
Date: 10/12/2014
By: Sandeep Khuperkar, Director, Ashnik India

What is NGINX and why is it used?

NGINX (pronounced "engine-x") is an open source web server. Since its public launch in 2004, NGINX has focused on high performance, high concurrency, and low memory usage. Features like load balancing, caching, access and bandwidth control, and the ability to integrate efficiently with a variety of applications have helped make NGINX a platform of choice for enterprise website architectures.

These days, applications rule the world. They aren't just tools that run people's workplaces; they now run people's lives. Demand for immediate response, flawless behaviour, and more features is unprecedented, and people expect applications to work equally well across every type of device, especially mobile. Needless to say, how fast an application performs is just as important as what it does. Businesses face constant pressure to improve web performance and accelerate time-to-market in order to stay competitive.

Today, a paradigm shift is under way from monolithic application architectures to distributed applications. This approach improves development efficiency, resulting in faster time to market. From a technical standpoint, however, distributed applications can also generate more internal traffic and are sometimes harder to secure. Furthermore, as an application becomes more sophisticated, its traffic flow gets more complex, and application developers need more control over how that traffic is routed in order to optimize the application.

Traditionally, hardware networking appliances and a team of network engineers worked on resolving the complexities of TCP/IP (and sometimes HTTP) and on optimizing application traffic. With modern web architectures, and in the age of the cloud, application system engineers typically want software tools to tackle these network- and HTTP-related complexities from the application's perspective.
However, most application frameworks do not provide good means to deal quickly and effortlessly with this HTTP heavy lifting.
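To make the HTTP heavy lifting concrete, here is a minimal sketch of an NGINX configuration that fronts two application servers; the upstream name and addresses are illustrative, not taken from this paper:

```nginx
# Load-balance client requests across two application servers
# (addresses are hypothetical).
http {
    upstream app_backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        location / {
            # Forward requests to the upstream group; NGINX handles the
            # connection management and HTTP parsing on behalf of the app.
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

A handful of directives like these replace much of the traffic-routing work that previously required dedicated networking appliances.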

Application teams no longer build an application and then hand it off to someone else to deploy and operate; they must build, deploy, and repeat the cycle.

NGINX Plus: the next big thing?

1) NGINX Plus combines the functionality that was previously only available from a high-end ADC (application delivery controller) with the best-of-breed web acceleration techniques battle-tested through the 10-year history of its parent product, the NGINX open source web server. It is the same software, compact and extremely efficient, powering over 40% of the top websites, including Facebook, Twitter, Airbnb, Netflix, Dropbox, Box, and more.

2) NGINX Plus is the ideal platform to deliver modern web applications, and to encapsulate and effortlessly accelerate legacy monolithic web stacks. It ensures that applications always achieve the performance and reliability the business needs, and can scale as the business grows.

3) NGINX originated in the world of application software, with the very specific goal of making web infrastructure holistically faster. It has never been a networking tool or firmware ripped out of a box. NGINX scales sub-linearly, offering an unparalleled efficiency and price-performance ratio.

4) Out of the box, NGINX Plus offers all the common web application acceleration techniques: HTTP load balancing, URI (path) switching, SSL termination, bandwidth control, scalable content caching, and web security policies.

5) Over the past few years, NGINX Plus has evolved into just the right tool for application developers and application system engineers looking for a proven template for web acceleration.

Insights into the NGINX architecture

1) In a traditional web server architecture, each client connection is handled as a separate process or thread. As the popularity of a website grows and the number of concurrent connections increases, the web server slows down, delaying responses to users.
From a technical standpoint, spawning a separate process or thread requires switching the CPU to a new task and creating a new runtime context, which consumes additional memory and CPU time and negatively impacts performance.

2) NGINX was developed with the goal of achieving 10x more performance and optimized use of server resources, while being able to scale and support the dynamic growth of a website. As a result, NGINX became one of the best-known modular, event-driven, asynchronous, single-threaded web servers and web proxies.

3) In NGINX, user connections are processed in highly efficient runloops inside a limited number of single-threaded processes called workers. Each worker can handle thousands of concurrent connections and requests per second.

4) Event-driven describes an approach that handles various tasks as events: an incoming connection is an event, a disk read is an event, and so on. The idea is not to waste server resources unless there is an event to handle. A modern operating system can notify the web server about the initiation or completion of a task, which in turn enables NGINX workers to use the right resources in the right way. Server resources can be allocated and released dynamically, on demand, resulting in optimized usage of network, memory, and CPU.

5) Asynchronous means the runloop does not get stuck on particular events. It asks the operating system to signal it about particular events and continues to monitor the event queue. Only when an event is signalled does the runloop trigger actions (e.g. a read or write on a network interface). In turn, those actions use non-blocking interfaces to the OS wherever possible, so that a worker never stalls while handling a particular event. This way, NGINX workers can use the available shared resources concurrently in the most efficient manner.

6) Single-threaded means that many user connections can be handled by a single worker process, which avoids excessive context switching and leads to more efficient usage of memory and CPU.
7) The modular architecture lets developers extend the web server's feature set without heavily modifying the NGINX core.

Worker processes

NGINX does not create a new process or thread for every connection. Instead, each worker process accepts new requests from a shared listen queue and executes a highly efficient runloop across them, processing thousands of connections per worker. The worker is notified about events by mechanisms in the OS kernel. When NGINX starts, an initial set of listening sockets is created; workers then accept, read from, and write to sockets while processing HTTP requests and responses. Because NGINX does not fork a process or thread per connection, memory usage is very conservative and extremely efficient; in most cases it amounts to true on-demand handling of memory. NGINX also conserves CPU cycles, since there is no ongoing create-destroy pattern for processes or threads.
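The worker model described above is controlled by a few top-level directives. A minimal sketch, with illustrative values:

```nginx
# Spawn one worker per CPU core ("auto" matches the core count).
worker_processes auto;

events {
    # Upper bound on concurrent connections handled by each worker.
    worker_connections 4096;

    # Use an efficient kernel event-notification mechanism
    # (epoll on Linux; NGINX normally picks the best one itself).
    use epoll;
}
```

The `use` directive is optional; it is shown here only to make the kernel event mechanism explicit.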

In a nutshell, what NGINX does can be described as orchestrating the underlying OS and hardware resources to serve web clients: checking the state of network and storage events, initializing new connections, adding them to the runloop, and processing them asynchronously until completion, at which point the connection is deallocated and removed from the runloop. Consequently, NGINX achieves moderate-to-low CPU usage even under the most extreme workloads.

NGINX spawns several workers, typically one per CPU core, which helps it scale across multiple CPUs and lets the OS schedule tasks across the workers more evenly. General recommendations for worker configuration are as follows:

1) For CPU-intensive workloads, the number of NGINX workers should equal the number of CPU cores.

2) For I/O-intensive workloads, the number of workers might be about twice the number of cores.

Thus NGINX is able to do more with fewer resources (e.g. memory and CPU).

Overview of NGINX caching

NGINX as a web server handles static content very efficiently; in addition, it can act as a very capable cache server. NGINX can cache content received from other servers, and can serve as both a cache server and a load balancer by acting as a gateway for other web or application servers. As a cache server, NGINX receives the initial HTTP request and handles it directly if it has a cached copy of the requested resource; otherwise it passes the request on to the origin server. The cache server reads the response from the origin server and decides whether the response should be cached or passed through.
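The gateway-plus-cache role described above maps onto the proxy_cache family of directives. A hedged sketch, with the cache path, zone name, sizes, and origin address chosen purely for illustration:

```nginx
http {
    # On-disk cache storage plus a shared-memory zone for keys and metadata.
    # max_size bounds the on-disk cache; the cache manager evicts
    # least-recently-used data once this limit is exceeded.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache app_cache;
            # Serve cached copies of successful responses for 10 minutes.
            proxy_cache_valid 200 302 10m;
            proxy_pass http://192.0.2.10:8080;  # hypothetical origin server
        }
    }
}
```

Requests that miss the cache are forwarded to the origin; cacheable responses are written back to the cache directory on the way through.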

In NGINX, cache keys and cache metadata are stored in shared memory segments, which can be accessed by the cache loader, the cache manager, and the workers. Each cached response is placed in a separate file in the file system. When NGINX reads a response from an upstream server, the content is initially written to a temporary file outside the cache directory structure; as the request is processed, NGINX renames the temporary file and moves it into the cache directory.

NGINX caching processes

The cache loader and the cache manager are the two NGINX processes involved in caching. The cache manager periodically checks the state of the cache file storage and removes the least recently used data when the size of the file storage exceeds the max_size parameter. The cache loader is activated when NGINX starts; it loads the metadata about previously cached data into the shared memory zone. The cache loader works in iterations, with parameters as configured for proxy_cache_path.

NGINX configuration

NGINX has a scalable configuration system, which is essential for a web server. The challenge of scaling is normally faced when maintaining many virtual servers, directories, locations, and datasets. With this in mind, the NGINX configuration is designed to simplify day-to-day operations and to make further expansion of the web server configuration easy. The configuration resides in /usr/local/etc/nginx or /etc/nginx, and the main configuration file is usually called nginx.conf.

NGINX settings also provide several original mechanisms that can be very useful as part of a lean web server configuration. It makes sense to briefly mention variables and the try_files directive, which are somewhat unique to NGINX. Variables were developed to provide an additional, even more powerful mechanism to control the run-time configuration of a web server.
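As an example of the variables and try_files mechanisms mentioned above, the following sketch (paths and addresses illustrative) serves a file from disk if it exists and otherwise falls back to an application backend:

```nginx
server {
    listen 80;
    root /var/www/site;

    location / {
        # $uri is a built-in variable holding the normalized request path.
        # Try the exact file, then a directory index, then fall back
        # to the named application location.
        try_files $uri $uri/ @app;
    }

    location @app {
        proxy_pass http://127.0.0.1:8080;  # hypothetical app server
    }
}
```

This single directive replaces the chains of rewrite rules and existence checks needed in many other web servers.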

If you have any technology needs and want a free assessment or quote, write to us at success@ashnik.com or call +65 6438 3504.

© 2014 Ashnik Pte Ltd. This white paper may contain confidential, privileged or copyright material and is solely for the use of the intended recipient(s). All rights reserved. Other names may be trademarks of their respective owners. www.ashnik.com