Copyright Push Technology Ltd
Diffusion™ 4.4 Performance Benchmarks
November 2012
Contents

1 Executive Summary
2 Introduction
3 Environment
4 Methodology
4.1 Throughput
4.2 Latency
5 Results Summary
5.1 Throughput without Conflation
5.2 Throughput with Replace Conflation
5.3 Throughput Comparison: Conflated vs Non-Conflated
5.4 Throughput Summary
5.5 Latency Summary
6 Conclusion
1 Executive Summary

Diffusion™ is the most efficient data distribution technology on the market, and this whitepaper describes the improvements in performance and scalability achieved in Diffusion™ 4.4. A selection of benchmarks was used to demonstrate the improvement and quantify Diffusion™'s unique approach to data distribution. In particular we introduce Structural Conflation; in its simplest form this replaces messages yet to be distributed to a client with a more current message, ensuring clients receive the most up-to-date information. This has the dual benefit of reducing bandwidth usage whilst enabling the server to service a larger number of concurrent connections. We demonstrate how Diffusion™'s intelligent approach to data distribution adapts to individual client conditions, markedly improving scalability, as can be seen from the conflated versus non-conflated benchmark results. In particular, attention is drawn to the reduction in disconnections under maximum stress. The unique integration of messaging with in-memory data caching and in-flight data processing allows consistent, current data to be delivered to tens of thousands of client devices with optimal resource utilisation. Diffusion™ is a high performance data distribution technology with excellent latency and throughput characteristics, whether deploying data distribution solutions behind or across the firewall. The results speak for themselves.

2 Introduction

One of the principal objectives of the Diffusion™ 4.4 release was improving performance and scalability. This was achieved by targeting performance bottlenecks measured through an objective methodology, by optimising the critical pathways in the product, and in particular through the adoption of lock-free and wait-free concurrency techniques. Lock-free and wait-free algorithms increase efficiency when services are heavily loaded by reducing the cost of certain frequent operations.
The compound effect is more graceful, robust and scalable performance even when finite resources are saturated. Diffusion™ is designed to go the extra mile to better service the last mile. This report outlines the performance characteristics of Diffusion™ 4.4 on commodity off-the-shelf hardware. The full testing methodology, details of the hardware and software stack used for the system under test, and the full test results are provided.
3 Environment

The test environment consists of two machines, each with two sockets and six hyper-threaded cores per socket. The machines are directly connected.

[Table: machine configuration for Box A and Box B]

The server is deployed on one machine and constrained to a single processor. All clients are deployed on the other machine and likewise constrained to a single processor. Constraining servers and clients to a single processor on each machine forces saturated conditions earlier in the throughput benchmark runs. The Diffusion™ server is developed in Java. The following Java runtime was used on both benchmark machines:

Java(TM) SE Runtime Environment (build 1.6.0_33-b03)
Java HotSpot(TM) 64-Bit Server VM (build b03, mixed mode)

The benchmarks were run with the machines directly interconnected using Solarflare Communications SFC Ge network cards. It is recommended not to exceed 30,000 concurrent connections with these cards, although our benchmark tests up to 45,000 concurrent connections.
4 Methodology

Two technical benchmarks were undertaken to determine the performance profile of a single running server instance. A throughput benchmark designed to stress the server demonstrates how far it can be pushed against a single network card. The latency benchmark shows how low the latency can go between servers and clients. A single server instance can easily saturate multiple 10Ge network interfaces on commodity hardware. Saturating critical resources such as network IO, CPU or memory is not recommended outside of lab or benchmarking conditions, as service levels degrade beyond that point. The server was constrained to a single socket on the test machine and only one network interface was used for all test runs.

Diffusion™ supports multiple transports. The most widely deployed transport carrying the Diffusion™ protocol over the web and mobile today is the IETF WebSocket [1] protocol, and it is the transport documented in this whitepaper. Diffusion™ browser clients automatically cascade across available transport implementations, choosing the best available but degrading gracefully through Silverlight-based, Flash-based, XMLHttpRequest and IFrame-based transports for legacy environments. Legacy transports are not explored in this whitepaper.

The same configuration of Diffusion™ was used for both benchmarks. It is the out-of-the-box default configuration and is optimised neither for throughput nor for latency. Finer-grained configuration for the respective benchmark improves the results but has no bearing on the overall characteristics.

4.1 Throughput Overview

The throughput benchmark model is designed around a number of configurable dimensions. The test is run repeatedly from a cold start of a Diffusion™ server instance and for a fixed duration of 5 minutes. Each benchmark run uses a different message payload size.
Each set of runs is repeated with no conflation configured and with replace conflation configured. Without conflation, Diffusion™ acts analogously to traditional enterprise messaging technologies. With conflation, however, it diverges from traditional messaging.

[1] IETF WebSocket Specification - www.rfc-editor.org/rfc/rfc6455.txt
The difference is clearly evidenced in the results and summarised in this report based on the evidence captured. All other configurable dimensions do not vary across benchmark runs. Client connections are ramped continuously for the duration of each run. Clients are ramped at a fixed periodic interval of 5 seconds, and all new clients connecting in an interval are started simultaneously. This exaggerates disconnections and adverse conditions in a controlled way. Each client subscribes to a fixed subset of 50 available topics. Data is published at a designed rate of 100 messages per second per client.

Diffusion™ servers are typically deployed at or behind the edge of private networks where services are exposed to public networks, with multiple instances running behind a load balancer. Each instance typically serves circa 10,000 to 30,000 concurrently connected clients with small message payloads of between 20 and 100 bytes. Diffusion™ is a real-time, stream-oriented distribution technology. It intelligently batches, fragments and conflates messages to optimize distribution whilst guaranteeing the currency and timeliness of delivery.

Throughput benchmark dimensions:
- Benchmark duration: 5 minute runs
- Message payload sizes: 125, 250, 500, 1000 and 2000 bytes
- Mode of operation: without conflation, with replace conflation
- Ramping interval: 5 seconds
- New client connections per interval: fixed across runs
- Messages per second per client: 100

4.2 Latency Overview

The latency benchmark model is designed around a number of configurable dimensions. It is a simple round-trip benchmark using a single client. The client sends a message to the server; this ping, or request, message is received and responded to by the server, and the client then receives the pong, or response, message. This process continues for the benchmark duration. The benchmark therefore represents a best-case latency profile.

Latency benchmark dimensions:
- Benchmark duration: 5 minute runs
- Message payload sizes: 125, 250, 500, 1000 and 2000 bytes
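The ramped throughput model above reduces to simple arithmetic. A sketch follows; the clients-per-interval value used is purely illustrative, as the whitepaper's actual per-interval count is not preserved in this text:

```python
def offered_load(elapsed_s, clients_per_interval, interval_s=5,
                 msgs_per_client_s=100, payload_bytes=500):
    """Estimate the load offered to the server at a point in a ramped run.

    Illustrative helper: clients join in batches every `interval_s`
    seconds and each is targeted with `msgs_per_client_s` messages/sec.
    """
    intervals = elapsed_s // interval_s          # completed ramp steps
    clients = intervals * clients_per_interval   # connected clients
    msg_rate = clients * msgs_per_client_s       # messages/sec offered
    byte_rate = msg_rate * payload_bytes         # payload bytes/sec offered
    return clients, msg_rate, byte_rate

# After 60 s ramping a hypothetical 500 clients every 5 s at 500-byte
# payloads: 6,000 clients offering 600,000 msg/s (300 MB/s of payload).
```

This makes explicit why larger payloads hit network IO limits first: byte rate grows with payload size while the per-message CPU cost does not.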
5 Results Summary

This section provides a high-level summary and brief discussion of the benchmark results.

5.1 Throughput without Conflation

Smaller message sizes have increased message-processing overhead and tend to saturate allocated CPU resources sooner than IO resources for payload sizes of 500 bytes or less. Larger payload sizes tend to be limited by allocated network IO. Diffusion™, configured as in this benchmark, can saturate a 10Ge interface with a single instance at 1000-byte message payloads. So, depending on available compute resources and the working set of a real-world use case, multiple 10Ge network devices may be appropriate, or, where message sizes are typically small, increased compute capacity.

- Throughput increases linearly as client connections are increased at a constant rate.
- Larger message sizes saturate available network IO sooner than smaller messages.
- Utilized bandwidth scales linearly, as expected, until the network IO saturation point.
- Utilisation at saturation is close to optimal: 87.2% (1.09 of 1.25 GB/sec) is payload data for large messages, not TCP/IP or protocol framing (data with no business value).
- CPU, not network capacity, limits small message throughput.
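The quoted utilisation figure is simple arithmetic on the link capacity, which can be checked directly (assuming, as the bandwidth figures imply, a 10Ge interface):

```python
# Raw capacity of a 10 Gbit/s link, expressed in GB/s.
LINK_GBIT_PER_S = 10
link_gb_per_s = LINK_GBIT_PER_S / 8      # 1.25 GB/s

def payload_fraction(payload_gb_per_s, capacity_gb_per_s):
    """Share of wire capacity carrying payload data rather than
    TCP/IP or protocol framing."""
    return payload_gb_per_s / capacity_gb_per_s

# Whitepaper figure: 1.09 of 1.25 GB/s is payload data, i.e. 87.2%.
efficiency = payload_fraction(1.09, link_gb_per_s)
```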
- At least 1 million 1K messages per second, sustained.
- New connections are handled gracefully until resources saturate, and are managed out beyond resource saturation.
- 45K concurrent clients at 125 and 250 bytes per message can be sustained.
- 24K concurrent clients at 500 bytes per message can be sustained.
- Circa 13K concurrent clients at 1000 bytes per message can be sustained.

Benchmarking based on typical and peak utilisation for a realistic model of your system will help you understand your provisioning requirements and enable you to plan capacity effectively. Diffusion™ is designed to minimize the impact on service levels for already-connected clients. This is confirmed by the disconnection rates: no disconnections are recorded until saturation is reached, and beyond that point the rate is a function of message size. The benchmark is designed to deliver a soft constant rate of 100 messages per second per client. The back-pressure waveform at 1000 bytes or more correlates well with the rate of acceptance, refusal or dropping of connections once service levels degrade at saturation. Scaling is linear until saturation occurs. Connection attempts are made continuously throughout the benchmark run. Only disconnected clients are summarised, and disconnections only occur once critical resources (here specifically CPU and IO) saturate. It is not recommended to provision at or near saturation for production use.
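The sustained-client figures can be sanity-checked with a simple IO-bound model. This sketch ignores protocol framing and CPU cost, which is why it overestimates capacity for small payloads (shown in the benchmark to be CPU-bound):

```python
def io_bound_client_limit(payload_bytes, msgs_per_client_s=100,
                          link_bytes_per_s=1.25e9):
    """Upper bound on concurrent clients if network IO were the only
    limit: link capacity divided by the per-client payload byte rate."""
    per_client_bytes_s = payload_bytes * msgs_per_client_s
    return int(link_bytes_per_s // per_client_bytes_s)

# 500-byte payloads: ~25,000 clients on a 10Ge link, close to the
# ~24K sustained in the benchmark once framing overhead is counted.
# 1000-byte payloads: ~12,500 clients, close to the measured ~13K.
# 250-byte payloads: the model gives ~50,000, but the measured 45K
# shows CPU saturates before IO does at small sizes.
```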
5.2 Throughput with Replace Conflation

Diffusion™ is not a traditional messaging technology. It can cache data in memory and allows deployment of user-designed distribution logic. Diffusion™ servers can be networked together in broker-based or brokerless topologies to collaborate on larger distribution tasks. One of the key advantages of Diffusion™ in low-fidelity, high-latency environments is its ability to actively conflate and tune data to the capabilities of each connected device and to adapt rates of distribution dynamically. Messaging technologies have no sympathy for tailoring individual SLAs to each device. Virtualizing client-side queues on the server side has significant advantages where data is mostly being streamed from servers to clients; the disadvantage, of course, is the increased complexity of guaranteeing delivery of transactional data.

The wire protocol uses snapshot messages (delivered on subscription to a topic) and delta messages (delivered most of the time during the lifetime of a connection). In mobile contexts clients can disconnect and reconnect frequently, so a delta of what has changed since a client last connected can be sent rather than a full snapshot, saving considerable bandwidth on recovery. During normal operation deltas, or changes, are sent in order to conserve bandwidth. Clients, similarly, can update the services they provide based on state changes in the connection.

Replace conflation essentially replaces outgoing messages yet to be distributed to a client with a more current message. This ensures clients receive the most up-to-date message and protects downstream environments from a degree of back-pressure caused by distributing stale data. Diffusion™'s virtualization of client-side queues gives the server the telemetry to detect when clients cannot cope with distribution volumetrics or, conversely, when rates can be increased.
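As a sketch of the idea only (the Diffusion server's actual internals are not published in this whitepaper), a per-client outbound queue with replace conflation might look like this:

```python
from collections import OrderedDict

class ClientQueue:
    """Per-client outbound queue with replace conflation.

    Undelivered messages are keyed by topic; a newer message for the
    same topic replaces the stale one instead of queuing behind it,
    so a slow client always drains the most current value per topic.
    """
    def __init__(self):
        self._pending = OrderedDict()   # topic -> latest undelivered payload

    def publish(self, topic, payload):
        # Replace any queued-but-unsent message for this topic; the
        # re-insert also moves the topic to the back of the queue.
        self._pending.pop(topic, None)
        self._pending[topic] = payload

    def drain(self):
        # What the client actually receives: one current value per topic.
        sent, self._pending = self._pending, OrderedDict()
        return list(sent.items())

q = ClientQueue()
q.publish("price/EURUSD", 1.0731)
q.publish("price/GBPUSD", 1.2519)
q.publish("price/EURUSD", 1.0734)   # conflates the queued 1.0731
# q.drain() -> [("price/GBPUSD", 1.2519), ("price/EURUSD", 1.0734)]
```

The queue depth, bounded by the number of subscribed topics rather than by publish rate, is the telemetry the text describes: a queue that stays full signals a client that cannot keep up.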
The performance characteristics of the server with replace conflation are in line with, and of the same order as, those without conflation; however, the server can sustain more concurrent clients. Intelligent conflation that takes data structures into consideration can considerably increase the level of conflation and further reduce bandwidth utilization.
Bandwidth utilisation is similar to the non-conflated case, with the following exceptions:

- More concurrent clients can be serviced compared to the non-conflated case.
- Utilisation remains near optimal: 87.2% (1.09 of 1.25 GB/sec) is payload data for large messages.
- Bandwidth utilisation is reduced compared to the non-conflated case for smaller message sizes.
- 45K concurrent clients at 125 bytes use 22% less bandwidth than the non-conflated case at the end of the test run.
- 45K concurrent clients at 250 bytes use 27% less bandwidth than the non-conflated case at the end of the test run.
- 28K concurrent clients at 500 bytes use 1% less bandwidth while servicing 16% more clients than the non-conflated case.
- 14K concurrent clients at 1000 bytes use 1% less bandwidth while servicing 4% more clients than the non-conflated case.
Replace conflation actively manages bandwidth utilization and per-client service levels. As the server becomes more saturated, more benefit is extracted from conflation. This is reflected in the messages per second per client, which tends to reduce over time as load increases. Because the server handles more concurrent clients, fewer disconnections occur relative to the non-conflated case. Beyond saturation, per-client service levels reduce accordingly: for larger message sizes this can mean up to 30% fewer messages per client, while for smaller message sizes service levels reduce less until CPU resources saturate. The Diffusion™ server adapts gracefully to adverse conditions in both modes of operation.

5.3 Throughput Comparison: Conflated vs Non-Conflated

In this section we compare the throughput of messaging-style (non-conflated) distribution with conflated data distribution, based on the results of the two benchmarks. The following graph shows both the replace-conflated and non-conflated message rates, in millions of messages per second, over the duration of the benchmark run. We observe linearly increasing message rates until, at a certain message payload size, available network IO saturates. For large message sizes of 1000 bytes or more there is no perceivable difference in messaging rates when viewed in this way. For smaller message sizes of 500 bytes or less, however, we see increasing benefits from actively conflating data.
Diffusion™ clients in capital markets, online gaming and gambling typically distribute very small messages, usually less than 100 bytes per message, very frequently. The benefits of conflation are even greater at these small sizes because the wire protocol buffers and packs multiple logical messages together in order to maximize bandwidth utilization and minimize the distribution overhead incurred by TCP, IP and transport-level framing. Distribution can also be prioritized (price information over news, for example), essentially partitioning low-priority large data items over time so that small urgent messages are delivered in a timely fashion.
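The effect of packing several logical messages into one network frame can be illustrated with rough arithmetic. The 4-byte per-message header and 66-byte per-packet overhead below are illustrative assumptions, not figures taken from the whitepaper:

```python
def wire_bytes(n_msgs, payload, per_msg_header=4,
               per_packet_overhead=66, msgs_per_packet=1):
    """Rough wire cost of n_msgs small messages.

    Assumes a small per-message protocol header and a fixed
    TCP/IP/Ethernet overhead per packet (both values illustrative).
    """
    packets = -(-n_msgs // msgs_per_packet)   # ceiling division
    return n_msgs * (payload + per_msg_header) + packets * per_packet_overhead

unbatched = wire_bytes(1000, payload=60)                    # one message per packet
batched = wire_bytes(1000, payload=60, msgs_per_packet=20)  # packed frames
# Batching cuts the wire volume by roughly half for 60-byte messages,
# because per-packet framing dominates when payloads are small.
```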
If we analyse the data through the lens of concurrently connected clients, we see a very different view of these results. Even when a server supporting these clients is saturated, it can distribute messages to a larger number of clients. For data that is likely to change frequently, replace conflation offers distinct benefits. For data where only subsets are volatile and frequently changing, merge conflation may be a better candidate. Some data cannot be conflated at all: orders, trades and other transactional events, for example, should be neither lost nor forgotten. Messaging systems typically handle transactional delivery very well. Data distribution takes this further, so that server-side resources and client-side devices, connectivity and other constraints are utilised more efficiently. We can see clearly in the chart above that messaging-style distribution does not cope well once IO saturates, whereas distribution of high-frequency, recoverable data adapts to increasing demand more gracefully, even beyond the saturation point, where conflation can be used. Messaging systems have limited or no capability to peek inside the content and make intelligent decisions; Diffusion™ leverages data and distribution expectations smartly.
5.4 Throughput Summary

Data distribution, as distinct from messaging, is concerned with delivering the right data, at the right time, in the right way. Messaging treats data as opaque blobs that it distributes between producers and consumers. Data distribution, on the other hand, understands enough about the data to actively manage timeliness and responsiveness, ensuring only relevant data gets distributed. This allows the same service to distribute data both inside the firewall, where 1Ge and 10Ge connectivity are common, and over the firewall to clients where latency is relatively high, bandwidth is shared, and connectivity is prone to failure. The server adapts to client conditions individually. This has the effect of improving serviceability overall, as can be seen from the conflated versus non-conflated throughput benchmark results. Internal clients or participants simply receive an improved quality of service; external clients receive data at an optimal rate. Both are serviced in exactly the same way, by the same services. In practice, the rate of distribution for small payloads is such that, on mobile handhelds connected over 3G versus local area connectivity, there is no humanly perceivable difference over short (same-continent) distances.

5.5 Latency Summary

The latency histogram below characterizes the best-case latency achievable with Diffusion™. The measurements plotted span less than 1 millisecond in total.
The histogram measures round-trip time, not single-hop latency.

- Average latency for messages of 500 bytes or less is sub-100 microseconds.
- 99% of messages complete a round trip in 250 microseconds or less.
- All measured round trips deliver sub-millisecond latencies.

The measured results can be halved to approximate single-hop latency:

- Average latency for messages of 500 bytes or less is sub-50 microseconds.
- 99% of messages complete a single hop in 125 microseconds or less.
- All measured messages are delivered in half a millisecond or less.
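The round-trip methodology from section 4.2, and the halving used above, can be sketched as follows. The `transport` object with blocking send()/recv() is an assumed stand-in, since the whitepaper does not show the client API:

```python
import statistics
import time

def measure_rtts(transport, n=1000, payload=b"x" * 500):
    """Ping/pong round-trip sketch: send a request, block until the
    echoed response arrives, record the elapsed wall-clock time.

    `transport` is any object exposing blocking send() and recv()
    methods (an illustrative assumption, not a real Diffusion API).
    """
    rtts = []
    for _ in range(n):
        t0 = time.perf_counter()
        transport.send(payload)
        transport.recv()
        rtts.append(time.perf_counter() - t0)
    return rtts

def single_hop_estimate(rtts):
    # The whitepaper halves round-trip figures to approximate one hop.
    return statistics.mean(rtts) / 2
```

Because a single client is used and the machines are directly connected, this measures a best case; percentiles rather than the mean are what the histogram above reports.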
6 Conclusion

Diffusion™ 4.4 is a high performance data distribution technology with excellent latency and throughput characteristics, whether deploying data distribution solutions behind or across the firewall. This whitepaper compared the throughput and latency of the messaging and data distribution styles with Diffusion™ 4.4. Data distribution offers reduced bandwidth utilisation, increased stability and robustness, and fairer service levels than plain old messaging. The unique integration of messaging with in-memory data caching and in-flight data processing allows consistent, current data to be delivered to tens of thousands of client devices with optimal utilisation. Diffusion™ demonstrates that adaptive, smart exploitation of client- and server-side constraints delivers better bang for your bytes, both for individual devices and overall.
More informationCS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University
CS 555: DISTRIBUTED SYSTEMS [DYNAMO & GOOGLE FILE SYSTEM] Frequently asked questions from the previous class survey What s the typical size of an inconsistency window in most production settings? Dynamo?
More informationPerformance Benefits of Running RocksDB on Samsung NVMe SSDs
Performance Benefits of Running RocksDB on Samsung NVMe SSDs A Detailed Analysis 25 Samsung Semiconductor Inc. Executive Summary The industry has been experiencing an exponential data explosion over the
More informationRPT: Re-architecting Loss Protection for Content-Aware Networks
RPT: Re-architecting Loss Protection for Content-Aware Networks Dongsu Han, Ashok Anand ǂ, Aditya Akella ǂ, and Srinivasan Seshan Carnegie Mellon University ǂ University of Wisconsin-Madison Motivation:
More informationScalability Engine Guidelines for SolarWinds Orion Products
Scalability Engine Guidelines for SolarWinds Orion Products Last Updated: March 7, 2017 For a PDF of this article, click the PDF icon under the Search bar at the top right of this page. Your Orion Platform
More informationBest Practices for Setting BIOS Parameters for Performance
White Paper Best Practices for Setting BIOS Parameters for Performance Cisco UCS E5-based M3 Servers May 2013 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page
More informationBusiness Benefits of Policy Based Data De-Duplication Data Footprint Reduction with Quality of Service (QoS) for Data Protection
Data Footprint Reduction with Quality of Service (QoS) for Data Protection By Greg Schulz Founder and Senior Analyst, the StorageIO Group Author The Green and Virtual Data Center (Auerbach) October 28th,
More informationChapter 13 TRANSPORT. Mobile Computing Winter 2005 / Overview. TCP Overview. TCP slow-start. Motivation Simple analysis Various TCP mechanisms
Overview Chapter 13 TRANSPORT Motivation Simple analysis Various TCP mechanisms Distributed Computing Group Mobile Computing Winter 2005 / 2006 Distributed Computing Group MOBILE COMPUTING R. Wattenhofer
More informationSizing Guidelines and Performance Tuning for Intelligent Streaming
Sizing Guidelines and Performance Tuning for Intelligent Streaming Copyright Informatica LLC 2017. Informatica and the Informatica logo are trademarks or registered trademarks of Informatica LLC in the
More informationWhy NVMe/TCP is the better choice for your Data Center
Why NVMe/TCP is the better choice for your Data Center Non-Volatile Memory express (NVMe) has transformed the storage industry since its emergence as the state-of-the-art protocol for high-performance
More informationIP SLAs Overview. Finding Feature Information. Information About IP SLAs. IP SLAs Technology Overview
This module describes IP Service Level Agreements (SLAs). IP SLAs allows Cisco customers to analyze IP service levels for IP applications and services, to increase productivity, to lower operational costs,
More informationMEMORY/RESOURCE MANAGEMENT IN MULTICORE SYSTEMS
MEMORY/RESOURCE MANAGEMENT IN MULTICORE SYSTEMS INSTRUCTOR: Dr. MUHAMMAD SHAABAN PRESENTED BY: MOHIT SATHAWANE AKSHAY YEMBARWAR WHAT IS MULTICORE SYSTEMS? Multi-core processor architecture means placing
More informationFASTEST MILLION OVER THE WEB WITH KAAZING, DELL, AND TIBCO
FASTEST MILLION OVER THE WEB WITH KAAZING, DELL, AND TIBCO DELIVER REAL-TIME DATA TO ONE MILLION CONCURRENT WEB USERS ON A SINGLE RACK HTML5 WebSocket Scalability High Performance Security Copyright 2012
More informationPerformance of relational database management
Building a 3-D DRAM Architecture for Optimum Cost/Performance By Gene Bowles and Duke Lambert As systems increase in performance and power, magnetic disk storage speeds have lagged behind. But using solidstate
More informationOptimizing LS-DYNA Productivity in Cluster Environments
10 th International LS-DYNA Users Conference Computing Technology Optimizing LS-DYNA Productivity in Cluster Environments Gilad Shainer and Swati Kher Mellanox Technologies Abstract Increasing demand for
More informationDesigning Next-Generation Data- Centers with Advanced Communication Protocols and Systems Services. Presented by: Jitong Chen
Designing Next-Generation Data- Centers with Advanced Communication Protocols and Systems Services Presented by: Jitong Chen Outline Architecture of Web-based Data Center Three-Stage framework to benefit
More informationFile Server Comparison: Executive Summary. Microsoft Windows NT Server 4.0 and Novell NetWare 5. Contents
File Server Comparison: Microsoft Windows NT Server 4.0 and Novell NetWare 5 Contents Executive Summary Updated: October 7, 1998 (PDF version 240 KB) Executive Summary Performance Analysis Price/Performance
More informationTuning RED for Web Traffic
Tuning RED for Web Traffic Mikkel Christiansen, Kevin Jeffay, David Ott, Donelson Smith UNC, Chapel Hill SIGCOMM 2000, Stockholm subsequently IEEE/ACM Transactions on Networking Vol. 9, No. 3 (June 2001)
More informationRIGHTNOW A C E
RIGHTNOW A C E 2 0 1 4 2014 Aras 1 A C E 2 0 1 4 Scalability Test Projects Understanding the results 2014 Aras Overview Original Use Case Scalability vs Performance Scale to? Scaling the Database Server
More informationManaging Caching Performance and Differentiated Services
CHAPTER 10 Managing Caching Performance and Differentiated Services This chapter explains how to configure TCP stack parameters for increased performance ant throughput and how to configure Type of Service
More informationEngineering Quality of Experience: A Brief Introduction
Engineering Quality of Experience: A Brief Introduction Neil Davies and Peter Thompson November 2012 Connecting the quality of user experience to parameters a network operator can directly measure and
More informationSamKnows test methodology
SamKnows test methodology Download and Upload (TCP) Measures the download and upload speed of the broadband connection in bits per second. The transfer is conducted over one or more concurrent HTTP connections
More informationOracle Event Processing Extreme Performance on Sparc T5
Oracle Event Processing Extreme Performance on Sparc T5 An Oracle Event Processing (OEP) Whitepaper ORACLE WHITE PAPER AUGUST 2014 Table of Contents Introduction 2 OEP Architecture 2 Server Architecture
More informationWhitePaper: XipLink Real-Time Optimizations
WhitePaper: XipLink Real-Time Optimizations XipLink Real Time Optimizations Header Compression, Packet Coalescing and Packet Prioritization Overview XipLink Real Time ( XRT ) is an optimization capability
More informationMemory-Based Cloud Architectures
Memory-Based Cloud Architectures ( Or: Technical Challenges for OnDemand Business Software) Jan Schaffner Enterprise Platform and Integration Concepts Group Example: Enterprise Benchmarking -) *%'+,#$)
More informationHP ProLiant BladeSystem Gen9 vs Gen8 and G7 Server Blades on Data Warehouse Workloads
HP ProLiant BladeSystem Gen9 vs Gen8 and G7 Server Blades on Data Warehouse Workloads Gen9 server blades give more performance per dollar for your investment. Executive Summary Information Technology (IT)
More informationProtocols SPL/ SPL
Protocols 1 Application Level Protocol Design atomic units used by protocol: "messages" encoding reusable, protocol independent, TCP server, LinePrinting protocol implementation 2 Protocol Definition set
More informationFIREFLY ARCHITECTURE: CO-BROWSING AT SCALE FOR THE ENTERPRISE
FIREFLY ARCHITECTURE: CO-BROWSING AT SCALE FOR THE ENTERPRISE Table of Contents Introduction... 2 Architecture Overview... 2 Supported Browser Versions and Technologies... 3 Firewalls and Login Sessions...
More informationCloud-Native Applications. Copyright 2017 Pivotal Software, Inc. All rights Reserved. Version 1.0
Cloud-Native Applications Copyright 2017 Pivotal Software, Inc. All rights Reserved. Version 1.0 Cloud-Native Characteristics Lean Form a hypothesis, build just enough to validate or disprove it. Learn
More informationMore on Testing and Large Scale Web Apps
More on Testing and Large Scale Web Apps Testing Functionality Tests - Unit tests: E.g. Mocha - Integration tests - End-to-end - E.g. Selenium - HTML CSS validation - forms and form validation - cookies
More informationPLEASE READ CAREFULLY BEFORE YOU START
Page 1 of 20 MIDTERM EXAMINATION #1 - B COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2008-75 minutes This examination document
More informationPLEASE READ CAREFULLY BEFORE YOU START
Page 1 of 20 MIDTERM EXAMINATION #1 - A COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2008-75 minutes This examination document
More informationOn the Creation & Discovery of Topics in Distributed Publish/Subscribe systems
On the Creation & Discovery of Topics in Distributed Publish/Subscribe systems Shrideep Pallickara, Geoffrey Fox & Harshawardhan Gadgil Community Grids Lab, Indiana University 1 Messaging Systems Messaging
More informationJim Metzler. Introduction. The Role of an ADC
November 2009 Jim Metzler Ashton, Metzler & Associates jim@ashtonmetzler.com Introduction In any economic environment a company s senior management expects that their IT organization will continually look
More informationScaling Internet TV Content Delivery ALEX GUTARIN DIRECTOR OF ENGINEERING, NETFLIX
Scaling Internet TV Content Delivery ALEX GUTARIN DIRECTOR OF ENGINEERING, NETFLIX Inventing Internet TV Available in more than 190 countries 104+ million subscribers Lots of Streaming == Lots of Traffic
More informationMaximize the Speed and Scalability of Your MuleSoft ESB with Solace
Maximize the Speed and Scalability of MuleSoft s Mule ESB enterprise service bus software makes information and interactive services accessible to a wide range of applications and users by intelligently
More informationSEDA: An Architecture for Well-Conditioned, Scalable Internet Services
SEDA: An Architecture for Well-Conditioned, Scalable Internet Services Matt Welsh, David Culler, and Eric Brewer Computer Science Division University of California, Berkeley Operating Systems Principles
More informationIBM InfoSphere Streams v4.0 Performance Best Practices
Henry May IBM InfoSphere Streams v4.0 Performance Best Practices Abstract Streams v4.0 introduces powerful high availability features. Leveraging these requires careful consideration of performance related
More informationRAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE
RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting
More informationWHITE PAPER Using Marathon everrun MX 6.1 with XenDesktop 5 Service Pack 1
WHITE PAPER Using Marathon everrun MX 6.1 with XenDesktop 5 Service Pack 1 www.citrix.com Contents Introduction... 2 Executive Overview... 2 Marathon everrun MX 6.1 (description by Marathon Technologies)...
More informationELECTRONIC COPY SAMKNOWS ANALYSIS OF ROGERS BROADBAND PERFORMANCE IN FEBRUARY 2015 ELECTRONIC COPY. Delivered by to: Shane Jansen.
ELECTRONIC COPY SAMKNOWS ANALYSIS OF ROGERS BROADBAND PERFORMANCE IN FEBRUARY 2015 Delivered by Email to: Shane Jansen Rogers Dated: February 25, 2015 ELECTRONIC COPY [THIS PAGE LEFT INTENTIONALLY BLANK]
More informationBetter Never than Late: Meeting Deadlines in Datacenter Networks
Better Never than Late: Meeting Deadlines in Datacenter Networks Christo Wilson, Hitesh Ballani, Thomas Karagiannis, Ant Rowstron Microsoft Research, Cambridge User-facing online services Two common underlying
More informationThe Google File System
The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung SOSP 2003 presented by Kun Suo Outline GFS Background, Concepts and Key words Example of GFS Operations Some optimizations in
More informationPerformance Consequences of Partial RED Deployment
Performance Consequences of Partial RED Deployment Brian Bowers and Nathan C. Burnett CS740 - Advanced Networks University of Wisconsin - Madison ABSTRACT The Internet is slowly adopting routers utilizing
More informationA Low Latency Solution Stack for High Frequency Trading. High-Frequency Trading. Solution. White Paper
A Low Latency Solution Stack for High Frequency Trading White Paper High-Frequency Trading High-frequency trading has gained a strong foothold in financial markets, driven by several factors including
More informationCloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage
Cloud Optimized Performance: I/O-Intensive Workloads Using Flash-Based Storage Version 1.0 Brocade continues to innovate by delivering the industry s first 16 Gbps switches for low latency and high transaction
More informationNirvana A Technical Introduction
Nirvana A Technical Introduction Cyril PODER, ingénieur avant-vente June 18, 2013 2 Agenda Product Overview Client Delivery Modes Realm Features Management and Administration Clustering & HA Scalability
More informationThe Google File System
The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung December 2003 ACM symposium on Operating systems principles Publisher: ACM Nov. 26, 2008 OUTLINE INTRODUCTION DESIGN OVERVIEW
More informationSolace JMS Broker Delivers Highest Throughput for Persistent and Non-Persistent Delivery
Solace JMS Broker Delivers Highest Throughput for Persistent and Non-Persistent Delivery Java Message Service (JMS) is a standardized messaging interface that has become a pervasive part of the IT landscape
More informationAn Overview of WebSphere MQ Telemetry and How to Utilize MQTT for Practical Solutions
IBM Software Group An Overview of WebSphere MQ Telemetry and How to Utilize MQTT for Practical Solutions Valerie Lampkin vlampkin@us.ibm.com WebSphere MQ Technical Resolution Support May 15, 2012 WebSphere
More informationThruPut Manager AE Product Overview From
Intro ThruPut Manager AE (Automation Edition) is the only batch software solution in its class. It optimizes and automates the total z/os JES2 batch workload, managing every job from submission to end
More informationSolidFire and Pure Storage Architectural Comparison
The All-Flash Array Built for the Next Generation Data Center SolidFire and Pure Storage Architectural Comparison June 2014 This document includes general information about Pure Storage architecture as
More information