
Resource Containers: A New Facility for Resource Management in Server Systems. G. Banga, P. Druschel, J. C. Mogul. OSDI 1999. Presented by Uday Ananth.

Lessons in history.. Web servers have become largely responsible for a user's perceived computing performance, and these servers must often scale to millions of clients. A lot of work has gone into improving the performance of web servers and making them more scalable. Service providers also want to exert explicit control over resource consumption policy (differentiated QoS).

The blame game.. The shortcomings lie in the operating system's resource management abstractions. Operating systems treat the process as the unit of resource management, but web servers use a small set of processes to handle many independent activities, making the process too coarse to be the right unit of resource management.

Let's draw up the terms.. Resource principals are the entities for which separate resource allocation and accounting are done. Protection domains are the entities that need to be isolated from each other. In most operating systems, processes (or threads) serve as both resource principals and protection domains.

The problems.. The protection domain and the resource principal are fused into the same process abstraction. Applications have little control over the resources the kernel consumes on their behalf. Resources consumed inside the kernel are often accounted inaccurately (charged to an arbitrary process, or not at all), resulting in bad scheduling decisions.

The assumptions we make.. (not necessarily wrong..) Most user-space applications are a single process (possibly consisting of multiple threads) and perform a single activity. Resources consumed by the process are properly accounted, since the kernel consumes few resources on behalf of the application. Therefore, the process is an appropriate resource principal.

The Band-Aids we applied.. Let's take a look at the previous approaches we've tried: process-per-connection, single-process event-driven, and multi-threaded servers.

Process-per-Connection.. Simple to implement: a master process listens on a port for new connection requests, and for each new connection a new process is forked. The caveats are forking overhead, context-switching overhead, and IPC overhead.
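
A minimal sketch of the process-per-connection pattern; the port number (8080) and the trivial echo handler are illustrative assumptions, not from the talk:

```c
/* Minimal process-per-connection server sketch: a master process accepts
 * connections and forks one child per client. Port 8080 and the echo
 * handler are illustrative assumptions. */
#include <signal.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void handle_client(int fd) {
    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(fd, buf, (size_t)n);               /* echo the request back */
}

int main(void) {
    signal(SIGCHLD, SIG_IGN);                    /* reap children automatically */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) continue;
        if (fork() == 0) {                       /* child: one process per connection */
            close(lfd);
            handle_client(cfd);
            close(cfd);
            _exit(0);
        }
        close(cfd);                              /* parent keeps only the listening socket */
    }
}
```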

Single-Process Event-Driven.. Harder to implement: a single process runs an event handler in its main loop for each ready connection in the queue. This avoids IPC and context switches and hence scales better. Such servers are not truly concurrent unless they run on multiprocessor systems (they can fork into multiple processes to use extra CPUs).
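
A minimal sketch of the single-process event-driven style using a select() loop; as above, the port and echo handler are illustrative assumptions:

```c
/* Minimal single-process event-driven server sketch using select(): one
 * process multiplexes all connections, so there is no fork or IPC. Port
 * 8080 and the echo handler are illustrative assumptions. */
#include <unistd.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    fd_set conns;
    FD_ZERO(&conns);
    FD_SET(lfd, &conns);
    int maxfd = lfd;

    for (;;) {
        fd_set ready = conns;                        /* select() modifies its argument */
        select(maxfd + 1, &ready, NULL, NULL, NULL);
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready)) continue;
            if (fd == lfd) {                         /* new connection: add to the set */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd >= 0) {
                    FD_SET(cfd, &conns);
                    if (cfd > maxfd) maxfd = cfd;
                }
            } else {                                 /* run the event handler for this fd */
                char buf[1024];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) { close(fd); FD_CLR(fd, &conns); }
                else        write(fd, buf, (size_t)n);
            }
        }
    }
}
```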

Multi-threaded Server.. Easier to implement than event-driven models. The server either creates a thread for each incoming connection or maintains a pool of threads, with idle threads waiting for incoming connections. Threads are scheduled by the thread scheduler. This avoids full process context switches and scales better.
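
A minimal thread-per-connection sketch using POSIX threads; the detached-thread choice, port, and echo handler are illustrative assumptions:

```c
/* Minimal thread-per-connection server sketch: the main thread accepts
 * connections and hands each one to a detached worker thread. Port 8080
 * and the echo handler are illustrative assumptions. Build with -lpthread. */
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void *worker(void *arg) {
    int fd = (int)(intptr_t)arg;
    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(fd, buf, (size_t)n);                   /* echo the request back */
    close(fd);
    return NULL;
}

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0) continue;
        pthread_t tid;
        pthread_create(&tid, NULL, worker, (void *)(intptr_t)cfd);
        pthread_detach(tid);                         /* one thread per connection */
    }
}
```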

What's happening behind the scenes.. Dynamic pages require multiple resources and are created in response to user input; multiple processes may be spawned to handle a dynamic request, adding IPC overhead. The kernel handles network processing for buffers, sockets, etc. Those operations happen outside the server application and are charged either to no one or to whatever unlucky process happens to be running!

Survival of the fittest.. Let's look at how the application space has evolved and where the process abstraction fails as a resource principal: a network-intensive application, a multi-process application, and a single-process multi-threaded application.

A network-intensive application.. A process consisting of multiple threads, performing a single activity. The process is the right unit for protection, but it does not encompass all the resources being consumed: much of the network protocol work is done eagerly by the kernel (so-called eager processing), which results in inaccurate accounting. We are unable to charge the application for the processing that the kernel does on its behalf.

A multi-process application.. Composed of multiple user-space processes cooperating to perform a single activity, e.g. a parallel simulation. The correct unit of resource management is the set of all these processes rather than any individual process.

Single-process multi-threaded application.. A single process uses multiple independent threads, one for each connection. Here the correct unit of resource management is smaller than a process: the set of all resources used to accomplish a single independent activity.

Lazy Receiver Processing.. Integrates network processing with resource management. When a packet arrives, instead of doing all the protocol processing immediately, LRP does only minimal processing (demultiplexing to the destination socket); the remainder is performed later, in the context of the process for which the packet was intended, and charged to it. This brings the resource principal back into line with the process.
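
A toy user-level illustration of the LRP idea, not the actual Digital UNIX implementation; the function names, the per-principal accounting array, and the cost numbers are all invented for illustration:

```c
/* Toy user-level illustration of Lazy Receiver Processing: packet arrival
 * only enqueues the raw packet on the destination socket's queue; the
 * expensive "protocol processing" runs when the owning principal asks for
 * data, so its cost is charged to that principal. All names and cost
 * numbers are invented for illustration. */
#include <stdio.h>

#define QLEN 16

struct socket_q {
    int owner;                     /* resource principal owning this socket */
    int raw[QLEN];                 /* queue of raw, unprocessed packets */
    int n;
};

static long cpu_charged[4];        /* per-principal CPU accounting (toy units) */

/* Early demultiplexing only: minimal work at "interrupt" time, nothing charged. */
static void packet_arrival(struct socket_q *s, int raw_pkt) {
    if (s->n < QLEN) s->raw[s->n++] = raw_pkt;
}

/* Deferred protocol processing, charged to the socket's owner. */
static int lrp_receive(struct socket_q *s) {
    if (s->n == 0) return -1;
    int pkt = s->raw[--s->n];
    cpu_charged[s->owner] += 10;   /* pretend protocol processing costs 10 units */
    return pkt;                    /* "processed" payload */
}

int main(void) {
    struct socket_q sock = { .owner = 1 };
    packet_arrival(&sock, 42);     /* arrival is cheap and unaccounted */
    lrp_receive(&sock);            /* processing is charged to principal 1 */
    printf("principal 1 charged %ld units\n", cpu_charged[1]);
    return 0;
}
```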

A quick recap.. Overloading one abstraction with the dual roles of protection domain and resource principal is not effective. The system does not let applications directly control resource consumption or management (e.g. priority). Web servers may be required to provide guarantees to clients (differentiated QoS), which makes accurate accounting necessary.

A solution.. A resource container is an abstract entity that logically contains all the system resources being used by an application to perform a particular independent activity. Containers can hold resources such as CPU time, sockets, protocol control blocks, and network buffers. Containers can also carry attributes such as CPU availability limits, network QoS, and scheduling priorities.

How does it work? Applications identify their independent activities and associate each one with a resource container. A resource binding is the relation between a thread (or other processing entity) and the resource container it is currently working for; it allows even kernel work to be charged to the right container. Bindings are dynamic, following the activity a thread or process is currently serving, so a single thread can be associated with multiple resource containers over time.
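
A toy user-level model of resource containers and dynamic resource binding; the rc_* names, struct fields, and charging units are hypothetical illustrations of the idea, not the paper's real kernel interface:

```c
/* Toy user-level model of resource containers and resource binding. The
 * rc_* names, fields, and accounting units are hypothetical illustrations
 * of the idea, not the paper's real Digital UNIX interface. */
#include <stdio.h>
#include <stdlib.h>

struct rcontainer {
    const char *name;
    long cpu_charged;              /* CPU "units" charged to this activity */
    double cpu_share;              /* attribute: CPU fraction limit/guarantee */
};

struct thread_state {
    struct rcontainer *binding;    /* the thread's current resource binding */
};

static struct rcontainer *rc_create(const char *name, double share) {
    struct rcontainer *rc = calloc(1, sizeof(*rc));
    rc->name = name;
    rc->cpu_share = share;
    return rc;
}

/* Dynamic binding: the same thread can serve many containers over time. */
static void rc_bind_thread(struct thread_state *t, struct rcontainer *rc) {
    t->binding = rc;
}

/* Any work (user or kernel) done while bound is charged to the container. */
static void do_work(struct thread_state *t, long cost) {
    t->binding->cpu_charged += cost;
}

int main(void) {
    struct thread_state th = { 0 };
    struct rcontainer *conn_a = rc_create("connection-A", 0.30);
    struct rcontainer *conn_b = rc_create("connection-B", 0.10);

    rc_bind_thread(&th, conn_a);   /* serve activity A */
    do_work(&th, 5);
    rc_bind_thread(&th, conn_b);   /* rebind: now serving activity B */
    do_work(&th, 2);

    printf("%s charged %ld, %s charged %ld\n",
           conn_a->name, conn_a->cpu_charged,
           conn_b->name, conn_b->cpu_charged);
    free(conn_a);
    free(conn_b);
    return 0;
}
```

The same model covers the next two slides: in a multi-threaded server each thread stays bound to its connection's container, while in an event-driven server the single thread rebinds as it moves between connections.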

Containers in a multi-threaded server.. The server creates a new container for each connection, and the thread handling that connection is bound to the container. Kernel processing for the connection is charged to this container. The scheduling priority of the associated thread decays if it consumes more than its fair share of resources.

Containers in an event-driven server.. The web server associates a new container with each connection, but all connections are serviced by a single thread. The thread's binding is changed dynamically as it moves across connections, so each container is charged for the processing the thread performs on its behalf.
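
A toy sketch of the dynamic rebinding pattern inside an event loop; the containers, the simulated readiness events, and the cost numbers are all invented for illustration:

```c
/* Toy sketch of dynamic rebinding in an event-driven server: one thread
 * services many connections and rebinds to each connection's container
 * before running its handler, so the work is charged to that connection.
 * Containers, readiness events, and costs are invented for illustration. */
#include <stdio.h>

struct rcontainer { long charged; };

static struct rcontainer *current_binding;     /* the single thread's binding */

static void rc_bind(struct rcontainer *rc) { current_binding = rc; }
static void charge(long cost)              { current_binding->charged += cost; }

int main(void) {
    struct rcontainer conn[3] = { {0}, {0}, {0} };   /* one container per connection */
    int ready[] = { 0, 2, 2, 1, 0 };                 /* simulated ready-connection events */

    for (int i = 0; i < 5; i++) {
        rc_bind(&conn[ready[i]]);                    /* rebind before the handler runs */
        charge(7);                                   /* handler + kernel work for this event */
    }
    for (int c = 0; c < 3; c++)
        printf("connection %d charged %ld units\n", c, conn[c].charged);
    return 0;
}
```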

How do you build it? The prototype required several modifications to the Digital UNIX 4.0D kernel. The CPU scheduler was modified to treat containers as resource principals that can either obtain a fixed share of CPU time or share the resources assigned to their parent with their siblings. The TCP/IP subsystem was modified to implement Lazy Receiver Processing. The server software was a single-process, event-driven server derived from thttpd, and the clients used the S-Client software.

Can it control resource utilization? This experiment demonstrates how resource containers can isolate the servicing of static requests from excessive dynamic CGI requests. Each CGI request consumed about 2 seconds of CPU time, while static requests served a 1 KB document from the cache. The CGI processes were restricted to 30% of the CPU in one configuration (RC 1) and 10% in another (RC 2).

What about performance? Extending the resource-constraint experiment, throughput was measured under the pressure of many dynamic requests. Restricting the resource usage of dynamic requests allows the OS to deliver a degree of fairness across containers.

How about QoS guarantees? An experiment tested the effectiveness of resource containers for prioritized handling of clients: a high-priority client should see its guaranteed QoS even as the load imposed by low-priority clients increases. A target response time of 1 msec for the high-priority client was chosen. It works, with only a slight degradation when select() is used (because of scalability problems in select).

Can it overcome the dark side? Containers provide a certain amount of immunity against a DoS attack by SYN flooding. A set of malicious clients sent bogus SYN packets to the server's HTTP port while the concurrent throughput to other clients was measured. The slight degradation observed is due to the interrupt processing of the SYN flood.

So.. Resource containers are an abstraction that explicitly identifies a resource principal. They decouple resource principals from protection domains and allow explicit, fine-grained control over resource consumption at both user level and kernel level. They provide accurate resource accounting, enabling web servers to offer differentiated QoS.

What's the impact? Containers are lightweight and more efficient than hypervisors, since they are built on shared OS resources. There are considerable implications for application density: well-tuned container systems can run four to six times as many server instances as traditional hypervisors. This can have a huge impact for enterprise data centers and application developers.

Q&A.. I'm tired.. Leave me alone.. Please.. You may provide feedback via uday@vt.edu..