
Some Observations of Internet Stream Lifetimes

Nevil Brownlee
CAIDA, UC San Diego, and The University of Auckland, New Zealand
nevil@auckland.ac.nz

Abstract. We present measurements of stream lifetimes for Internet traffic on a backbone link in California and a university link in Auckland. We investigate the consequences of sampling techniques such as ignoring streams with six or fewer packets, since they usually account for less than 10% of the total bytes. We find that we often observe large bursts of small attack streams, which will diminish the integrity of strategies that focus on the elephants. Our observations further demonstrate the danger of traffic engineering approaches based on incorrect assumptions about the nature of the traffic.

1 Introduction

Over the last few years there has been considerable interest in understanding the behaviour of large aggregates of Internet traffic flows. Flows are usually considered to be sequences of packets sharing common values for the 5-tuple (protocol, source and destination IP addresses, and port numbers), ending after a fixed timeout interval during which no packets are observed. For example, Estan and Varghese [1] proposed a method of metering flows which ensures that all packets in elephant flows, i.e. those that account for the majority of bytes on a link, are counted, while packets in less significant flows may be ignored. In contrast, streams are bidirectional 5-tuple flows with a dynamic timeout: a stream terminates after a quiet period of ten times its average packet inter-arrival time, with a minimum timeout of 10 s. Brownlee and Murray [2] investigated stream lifetimes using a modified NeTraMet [3] meter. By using streams rather than flows, NeTraMet is able to measure various stream distributions at regular intervals (typically five or ten minutes) over periods of hours or days. In [4] Brownlee and Claffy used this methodology to observe stream behaviour at UC San Diego and Auckland, where about 45% of the streams were dragonflies lasting less than two seconds. However, there were also many streams with lifetimes of hours to days, and those tortoises carried 50% to 60% of the link's total bytes.

At U Auckland, we use NeTraMet to measure Internet usage (bytes in and out for each user). In recent years the character of our Internet traffic has changed; the total volume has steadily grown, and we now see frequent network-borne attacks. Such attacks frequently appear as short time intervals during which we see large numbers of dragonfly streams. With our production NeTraMet rulesets (meter configuration files), attacks like address scans can give rise to tens of thousands of flows.

Such large bursts of flows tend to degrade the performance of our measurement system.

To minimise the effect of bursts of attack streams, we investigated a strategy similar to that proposed by Estan and Varghese [1]. To do that we modified NeTraMet to ignore streams carrying K or fewer packets. That, however, posed the question of choosing a value for K. In this paper we present some observations of stream lifetimes on a tier-1 backbone in California, which are consistent with earlier work by Brownlee and Claffy [4], and compare them with similar recent observations at Auckland. We present measurements of the varying population of active streams at Auckland and compare that with the packet rate, using data gathered at one-second intervals over several days. We investigate the proportion of the total bytes accounted for by streams with K or fewer packets, so as to help determine a suitable value for K. We often see measurement intervals in which a high proportion of the total traffic is carried in dragonfly streams; in such intervals there are few elephant streams. Lastly, we show that ignoring streams with six or fewer packets can provide effective usage monitoring for U Auckland.

2 Methodology, Overall Traffic Observations

2.1 Understanding Flows and Streams

Traffic flows were first defined in the seminal paper by Claffy, Polyzos and Braun [5]. A CPB flow is a set of packets with common values for the 5-tuple (IP protocol, source and destination IP addresses and port numbers), together with a specified, fixed inactivity timeout, usually 60 seconds. Note that a CPB flow is unidirectional, with the 5-tuple specifying a direction for the flow's packets. CPB flows are widely used, providing a convenient way to summarise large volumes of Internet traffic data.

The IETF's RTFM architecture [6] provided a more general definition of a traffic flow. RTFM flows are bidirectional, with any set of packet attribute values being allowed to specify a flow. For example, an RTFM flow can be as simple as a CPB flow, or something more complex such as all flows to or from network 192.168/16. NeTraMet is an RTFM traffic measuring system that implements an extended version of RTFM flows.

Streams were introduced to NeTraMet as a way of collecting data about subsets of a flow. For example, if we specify a flow as all packets to/from a particular web server, then NeTraMet can recognise a stream for every TCP connection to that server, and build distributions of their sizes, lifetimes, etc. NeTraMet's ability to handle streams in real time allows us to produce stream density distributions (e.g. of lifetime, and of size in bytes or packets) over long periods of time (eight hours or more) while maintaining stream lifetime resolution down to microseconds. Furthermore, NeTraMet can collect such distributions at 5-minute intervals for days, without needing to collect, store and process huge packet trace files.
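To make the flow/stream distinction and the dynamic stream timeout concrete, here is a minimal Python sketch (illustrative only; the key layout, field names and Stream class are assumptions for this paper's descriptions, not NeTraMet's data structures). A CPB flow key is the unidirectional 5-tuple, a stream key treats both directions as one entity, and a stream times out after a quiet period of ten times its mean packet inter-arrival time, with a 10 s minimum.

```python
from dataclasses import dataclass

MIN_STREAM_TIMEOUT = 10.0  # seconds: minimum quiet period before a stream ends

def cpb_flow_key(proto, src_ip, src_port, dst_ip, dst_port):
    """Unidirectional CPB flow key: the plain 5-tuple, so direction matters."""
    return (proto, src_ip, src_port, dst_ip, dst_port)

def stream_key(proto, src_ip, src_port, dst_ip, dst_port):
    """Bidirectional stream key: packets in either direction map to one stream."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)

@dataclass
class Stream:
    first_seen: float   # time of first packet (seconds)
    last_seen: float    # time of most recent packet
    packets: int = 1

    def add_packet(self, now: float) -> None:
        self.packets += 1
        self.last_seen = now

    def timeout(self) -> float:
        """Quiet period after which the stream is considered ended:
        ten times the mean packet inter-arrival time, but at least 10 s."""
        if self.packets < 2:
            return MIN_STREAM_TIMEOUT
        mean_iat = (self.last_seen - self.first_seen) / (self.packets - 1)
        return max(MIN_STREAM_TIMEOUT, 10.0 * mean_iat)

    def has_expired(self, now: float) -> bool:
        return now - self.last_seen > self.timeout()
```

Ordering the two endpoints canonically in stream_key is what makes the key direction-independent, so both directions of a TCP connection land in the same stream table entry.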

Although streams are bidirectional, that only means that NeTraMet maintains two sets of counters, one for each direction of the stream. If the meter can only see one direction of the stream, one set of counters will remain at zero. Bidirectional streams are, however, particularly useful for security analysis, where we need to know which attack streams elicited responses from within our network!

2.2 Streams in NeTraMet

From our earlier study of stream lifetimes [4] we know that a high proportion of traffic bytes are carried in tortoise streams. We modified the NeTraMet meter to use this fact to cache flow matches for each stream. The meter always maintains a table of active streams; when a new stream appears it is matched so as to determine which flow(s) it should be counted in. The set of matching flows is cached in the stream table, so that later packets can be counted in their proper flows without requiring further matching; we find that for most rulesets, average cache hit rates are usually well above 80%.

Since NeTraMet is now based on stream caching, it is straightforward to collect distributions of byte, packet and stream density, using a set of bins to build histograms for a range of stream lifetimes. We use 36 bins to produce distributions for lifetimes on a log scale from 6 ms to 10 minutes, and read these distributions every ten minutes. Streams are only counted when they time out, so longer-running streams do not contribute to our distributions directly. Instead we create flows for them, so that they produce flow records giving the number of their packets and bytes every time the meter is read. From those 10-minute flow records we construct two more decades of logarithmic bins, producing lifetime distributions from 6 ms to 30,000 seconds (roughly 8 hours), i.e. nearly seven decades.
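The stream-table caching and the logarithmic lifetime histogram described above might look roughly like the following sketch (a simplification for illustration; the class names, the rule representation as (predicate, flow id) pairs, and the bin parameters are assumptions, not NeTraMet internals).

```python
import math

class LifetimeHistogram:
    """Logarithmic stream-lifetime bins, starting at 6 ms as in the text."""
    def __init__(self, lowest=0.006, bins=36, bins_per_decade=7.2):
        self.lowest = lowest
        self.bins_per_decade = bins_per_decade
        self.counts = [0] * (bins + 1)          # last bin catches overflows

    def add(self, lifetime_seconds):
        if lifetime_seconds <= self.lowest:
            idx = 0
        else:
            idx = int(math.log10(lifetime_seconds / self.lowest) * self.bins_per_decade)
        self.counts[min(idx, len(self.counts) - 1)] += 1

class StreamCacheMeter:
    """Stream table that caches the set of matching flows for each stream."""
    def __init__(self, rules):
        self.rules = rules          # list of (predicate(packet) -> bool, flow id) pairs
        self.streams = {}           # stream key -> cached list of flow ids
        self.flow_counts = {}       # flow id -> [packets, bytes]
        self.hits = self.misses = 0

    def count_packet(self, key, packet):
        flows = self.streams.get(key)
        if flows is None:           # new stream: match it against the ruleset once
            self.misses += 1
            flows = [fid for rule, fid in self.rules if rule(packet)]
            self.streams[key] = flows
        else:                       # later packet: cached match, no rule evaluation
            self.hits += 1
        for fid in flows:
            counters = self.flow_counts.setdefault(fid, [0, 0])
            counters[0] += 1
            counters[1] += packet["length"]
```

When a stream times out, its lifetime would be added to a LifetimeHistogram and its table entry freed; the cache hit rate quoted above corresponds to hits / (hits + misses).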

2.3 Tier-1 Backbone in California, December 2003

Fig. 1 gives an overview of traffic on a tier-1 OC48 backbone in California over Friday, 6 December 2003. Only one direction is shown; the other direction had about one-quarter of the traffic volume. There is a clear diurnal variation from about 450 to 700 Mb/s. Most of the traffic is web (upper half of bars) or non-web TCP (lower half), plus a background level of about 50 Mb/s of UDP and other protocols.

Fig. 2 shows the stream density vs lifetime (upper left traces) for every 10-minute reading interval. There is little variation, and about 95% of all streams have lifetimes less than ten seconds. The lower right traces, however, show stream byte density vs lifetime. Again there is little variation, but only 60% of the bytes are carried by streams with lifetimes less than 100 s. In other words, most streams are short but the bulk of the bytes are carried in long-running streams.

Fig. 1. Stacked-bar plot of traffic on an OC48 backbone in California (bytes by kind: SSL, web, non-web TCP, UDP, other; BB2, 5-6 Dec 03, PST)

Fig. 2. Stream lifetimes for traffic on a tier-1 backbone in California (cumulative distributions of outbound stream and byte totals vs stream lifetime; BB2, 5-6 Dec 03, PST)

2.4 U Auckland Gateway, October 2004

Fig. 3 shows the traffic on U Auckland's 100 Mb/s Internet gateway for Friday-Saturday, 1-2 October 2004. There is only around 15 Mb/s of traffic, and it is rather bursty, probably because the total rate is low. During the day web traffic dominates, especially on Friday. In the evenings there are periods of high non-web TCP usage when we update local mirrors for databases outside New Zealand.

Fig. 3. Stacked-bar plot of traffic on the U Auckland gateway (inbound bytes by kind: SSL, web, non-web TCP, UDP, other; 1-2 Oct 04)

Fig. 4 shows the stream density vs lifetime, as for fig. 2. Here the stream lifetime and byte densities vary greatly, again reflecting the low traffic levels at Auckland. Stream lifetimes are similar at Auckland and California, with 70% to 95% of the streams again lasting less than 10 seconds. However, at Auckland up to 60% of the bytes are carried in streams lasting less than 10 seconds, probably reflecting the high proportion of web traffic at Auckland.

3 Streams and Packets at Auckland

We modified NeTraMet to write the packet rate and the numbers of active streams and flows to a log file every second. Fig. 5 shows the packet rate (lower trace) and number of active streams (upper trace) for each second during Friday 1 and Saturday 2 October 2004.
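As a rough illustration of this per-second logging (the meter attribute names, log format and polling approach here are invented for the example; they are not NeTraMet's), a background loop could append one line per second:

```python
import time

def log_meter_counts(meter, logfile_path, interval=1.0):
    """Append one line per second: unix time, packets/s, active streams, active flows.
    `meter` is assumed to expose packet_count, active_streams and active_flows."""
    last_packets = meter.packet_count
    with open(logfile_path, "a") as log:
        while True:
            time.sleep(interval)
            packets = meter.packet_count
            log.write(f"{int(time.time())} {packets - last_packets} "
                      f"{meter.active_streams} {meter.active_flows}\n")
            log.flush()
            last_packets = packets
```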

Fig. 4. Stream lifetimes for traffic on the U Auckland gateway at ten-minute intervals for Friday, 1 October 2004 (cumulative distributions of inbound stream and byte totals vs stream lifetime, NZST)

Fig. 5. Packet rate and number of active streams at one-second intervals at Auckland, Fri 1 - Sat 2 Oct 04 (NZST)

The diurnal variation in stream numbers generally follows the variation in packet rate, i.e. it rises from about 0600 to 0900, falls from about 1700 to 2000, then rises again in the evening. Unlike the packet rate, however, the number of streams rose while the traffic rate fell around midnight on Friday 1 Oct 04. That rise was not repeated over the weekend; it appears to have been a one-off event (e.g. a database replication job copying many tiny files) rather than part of the diurnal pattern.

At regular three-hour intervals we see a short, high step in the number of streams. Our network security team is aware of this behaviour and is investigating it; we believe that such steps are caused by some sort of network attack. Similarly, every day at 1630 we see a bigger spike. We have also observed other, less regular, spikes taking the number of active streams as high as 140,000. Fig. 6 shows more detail for two of these spikes.

Fig. 6. Details of fig. 5 showing the spike in streams at 1633 and the step at 2120 (packets and active streams each second at Auckland, Fri 1 Oct 04, NZST)

4 Usage Metering at Auckland

For usage accounting at Auckland we want to ignore streams with K or fewer packets. To help select a K value, we plotted distributions of byte density vs stream size (packets). Fig. 7 shows inbound (lower traces) and outbound (upper traces) byte-percentage distributions for ten-minute sample intervals over three hours from 2100 on Friday 1 October 2004. For most of those intervals it seems that we could ignore streams with six or fewer packets in either direction.

However, there is one outbound trace, for the interval ending at 2120, which has 29% of its bytes in streams with only one or two packets. Fig. 6 shows that at 2120 the number of active streams had risen sharply. Table 1 shows that the interval ending at 2120 had two unusual features: a high inbound UDP traffic rate, and a low outbound non-web TCP traffic rate. We examined the ten intervals with the highest proportion of bytes in short streams. Few of those had low outbound non-web TCP rates, but all had high inbound UDP rates. We hypothesise that the step in streams was caused by an inbound address or port scan, i.e. a flood of single-packet UDP streams. Although few of those inbound UDP probe packets elicited any response, those that did increased the proportion of bytes in small streams enough to dominate the outbound traffic.

Fig. 7. Byte density vs packets in stream (cumulative % of bytes vs stream size in packets, inbound and outbound) for three hours at Auckland, from 2100 on Friday, 1 Oct 2004

Since U Auckland has about five times as many inbound traffic bytes as outbound, we plotted the total (inbound+outbound) byte-percentage distributions for every ten-minute interval over 1-2 October 2004, producing fig. 8. We often see intervals when the short streams contribute a significant proportion of the total link bytes, suggesting that we should not simply focus on the elephants for our usage measurements.
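To make the K-selection procedure concrete, the following minimal sketch computes the cumulative percentage of bytes carried by streams of K or fewer packets, which is what figs. 7 and 8 plot. The per-stream record format of (packet count, byte count) pairs is an assumption for illustration, not NeTraMet's export format.

```python
def cumulative_byte_percent(streams, max_packets=40):
    """streams: iterable of (packet_count, byte_count) per completed stream.
    Returns a list where entry K is the % of total bytes carried by streams
    with K or fewer packets (K = 0 .. max_packets)."""
    bytes_by_size = [0] * (max_packets + 1)
    total_bytes = 0
    for pkts, byts in streams:
        total_bytes += byts
        if pkts <= max_packets:
            bytes_by_size[pkts] += byts
    cumulative, running = [], 0
    for byts in bytes_by_size:
        running += byts
        cumulative.append(100.0 * running / total_bytes if total_bytes else 0.0)
    return cumulative

# Example: if cumulative_byte_percent(records)[6] is only a few percent,
# ignoring streams with six or fewer packets loses little of the byte total.
```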

Table 1. Inbound and outbound traffic rates (Mb/s) for various kinds of traffic on Friday 1 October 2004

Inbound rate:
  interval ending   UDP    non-web   web     SSL     other
  2110              0.15   2.91      8.85    0.51    0.03
  2120              1.66   2.23      10.15   0.52    0.04
  2130              0.21   1.37      9.86    0.50    1.09

Outbound rate:
  interval ending   UDP    non-web   web     SSL     other
  2110              0.10   1.47      3.31    0.73    0.03
  2120              0.10   0.92      3.34    0.850   0.03
  2130              0.10   3.71      3.54    0.859   0.07

Fig. 8. Total (inbound+outbound) byte density vs packets in stream (cumulative % of bytes vs stream size in packets) at Auckland, 1-2 October 2004

5 Ignoring Short Streams at Auckland

Our observations in section 4 suggest that on our link, intervals when traffic is dominated by short streams are caused by network attacks (plagues of dragonflies). Although we need to know about those for security monitoring, they are less important for usage accounting. We decided to try metering while ignoring streams with six or fewer packets (total in both directions). We ran the meter with K = 6 for five days, using our normal usage accounting ruleset. All five days were similar (including the regular three-hourly spikes and a daily spike at 1640); fig. 9 shows the packet rate (lower trace), active streams (middle trace) and flows (upper trace) for every second of Thursday, 7 October 2004. The packet rate and streams traces are similar to those in fig. 5; the number of flows is stable and tracks the packet rate.

Fig. 9. Packet rate, active streams and active flows at one-second intervals at Auckland for Thursday, 7 October 2004. Our NeTraMet meter used K = 6, i.e. streams with six or fewer packets were not matched to flows

Fig. 10 shows the three hours from 2200 in more detail. The number of active flows rises steadily as new flows appear, and falls rapidly when flows are read every ten minutes, allowing the meter to recover flow table space for newly-inactive flows. The sawtooth behaviour, clearly visible in the plot, is thus an artifact of the RTFM architecture. (The important point here is that when we ignore small flows, the average number of active flows remains stable over long periods, minimising the load on the flow data collection system.) Fig. 10 also shows that when the number of active streams increases sharply (showing spikes and steps), the number of flows is not affected. That supports our hypothesis that such increases are caused by bursts of short-lived attack streams.
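A rough sketch of the K = 6 filtering idea follows (illustrative only, not NeTraMet's implementation; the meter interface, the choice to credit a stream's first K packets to its flows once it crosses the threshold, and all names are assumptions): a stream is not matched to any flow until it has carried more than K packets in total.

```python
K = 6  # streams with K or fewer packets are ignored (never matched to flows)

class SmallStreamFilter:
    """Illustrative meter front-end: defer flow matching until a stream
    has carried more than K packets, totalled over both directions."""
    def __init__(self, meter, k=K):
        self.meter = meter        # assumed to expose match(key) and count(flows, pkts, bytes)
        self.k = k
        self.pending = {}         # stream key -> (packets, bytes) seen so far
        self.matched = {}         # stream key -> cached flow ids

    def packet(self, key, length):
        flows = self.matched.get(key)
        if flows is not None:     # stream already matched: count normally
            self.meter.count(flows, 1, length)
            return
        pkts, byts = self.pending.get(key, (0, 0))
        pkts, byts = pkts + 1, byts + length
        if pkts > self.k:         # crossed the threshold: match it and credit the backlog
            flows = self.meter.match(key)
            self.matched[key] = flows
            self.meter.count(flows, pkts, byts)
            self.pending.pop(key, None)
        else:
            self.pending[key] = (pkts, byts)

    def stream_timed_out(self, key):
        # Streams that end with K or fewer packets are simply dropped (ignored).
        self.pending.pop(key, None)
        self.matched.pop(key, None)
```

Because streams that never exceed K packets are dropped without ever being matched, bursts of single-packet attack streams add entries only to the pending stream table and never create flows, which is consistent with the stable flow counts seen in fig. 9.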

Fig. 10. Detail of fig. 9 showing the spike and steps in streams, and the sawtooth variation in flows (packets, active streams and active flows each second at Auckland, Thu 7 - Fri 8 Oct 04, NZDT)

Fig. 11. Percent of bytes and packets ignored at five-minute intervals at Auckland, 14-15 Dec 04 (NZDT), measured using K = 6, i.e. ignoring streams with six or fewer packets

To verify our estimate that ignoring streams with six or fewer packets would exclude between 5% and 10% of the total bytes, we modified NeTraMet so as to collect distributions of the ignored packets and bytes as a function of stream size. One day of typical ignored data, collected at 5-minute sample intervals, is shown in fig. 11. The ignored packet percentage (upper trace) generally varies between about 2.5% and 8%. Furthermore, its average varies inversely with the average packet rate, suggesting that small (dragonfly) streams provide a more or less constant background all the time, with their ignored percentage more obvious when the packet rate is low.

For about 95% of our sample intervals, between 0.5% and 2% of the bytes were ignored (lower trace). In the other 5% of the intervals we saw large spikes in the packet rate and in the number of active streams, as shown earlier in fig. 9. During those spikes, 10% to 30% of the bytes were ignored. Overall, the percentage of bytes ignored is acceptably low, with high ignored-byte percentages only occurring during attack events.

6 Conclusion

At Auckland we see frequent bursts of incoming attack streams, which can dominate the traffic mix on our Internet gateway. We believe that, for traffic engineering and accounting, ignoring streams with six or fewer packets (K = 6) means that in the long term only about 2% of our total bytes are not measured as user traffic. In return we achieve a significant reduction in the number of flows we have to create, read, store and process. However, for some traffic mixes, this sampling bias against small flows can radically warp the inferences one makes about the aggregate traffic.

We are continuing our investigation of stream behaviour, especially that relating to the attack-stream ("plague of dragonflies") events. We have not observed these on the California backbone link, where traffic levels are much higher and there is more statistical mixing, but such attack streams probably do appear there.

An alternative approach is the adaptive one proposed in [7], which adapts its sampling parameters to the traffic in real time. That approach avoids the bias against small flows and should give a true picture of the actual traffic load, within its sampling limitations. Our approach, however, may be more useful for accounting applications, since we are not sampling. Instead we preserve detail for all the larger streams (which we can bill to a user) while ignoring the small attack streams (which are overhead, not billable to a user).

6.1 Future work

We are continuing to investigate the "plague of dragonflies" events at Auckland. We would like to improve our network attack detection ability by recognising and reporting frequently-occurring attack patterns. The ability to summarise large groups of small streams would also reduce the number of packets we ignore in our traffic monitoring.

At this stage it is clear that NeTraMet can handle our network's data rate at 100 Mb/s. We are confident that this can be done without having to use sampling techniques at 1 Gb/s.

6.2 The Need for Ongoing Measurements

At U Auckland we use NeTraMet for usage accounting and traffic analysis, Snort (http://www.snort.org/) and Argus (http://www.qosient.com/argus/) for security monitoring, and MRTG (http://people.ee.ethz.ch/~oetiker/webtools/mrtg/) for traffic engineering. Each of these tools is specialised so as to perform its intended function well, but there is little overlap between them. Indeed, when an unusual event occurs on the network, it can be useful to have data from many tools, providing many different views of that event.

We believe, therefore, that every large network should collect traffic data on an ongoing basis, using several different tools. The work required to support such monitoring is well justified by the ability it provides to investigate incidents soon after they occur. In addition, the understanding gained about the network, its traffic, and the ways that traffic changes over time provides a sound basis for long-term improvements in the network's performance and in service to its users.

Acknowledgement

Support for this work is provided by DARPA NMS Contract N66001-01-1-8909, NSF Award NCR-9711092 (CAIDA: Cooperative Association for Internet Data Analysis), and The University of Auckland.

References

1. C. Estan and G. Varghese, New Directions in Traffic Measurement and Accounting: Focusing on the Elephants, Ignoring the Mice, ACM Transactions on Computer Systems, August 2003
2. N. Brownlee and M. Murray, Streams, Flows and Torrents, PAM2001, April 2001
3. N. Brownlee, Using NeTraMet for Production Traffic Measurement, Intelligent Management Conference, IM2001, May 2001
4. N. Brownlee and K. Claffy, Understanding Internet Traffic Streams: Dragonflies and Tortoises, IEEE Communications Magazine, October 2002
5. K. Claffy, G. Polyzos and H-W. Braun, A Parameterisable Methodology for Internet Traffic Flow Profiling, IEEE Journal on Selected Areas in Communications, 1995
6. N. Brownlee, C. Mills and G. Ruth, Traffic Flow Measurement: Architecture, RFC 2722, October 1999
7. C. Estan, K. Keys, D. Moore and G. Varghese, Building a Better NetFlow, SIGCOMM, September 2004