Steelhead Appliance Deployment Guide. Version August 2008



2 Riverbed Technology, Incorporated. All rights reserved. Riverbed Technology, Riverbed, Steelhead, RiOS, Interceptor and the Riverbed logo are trademarks or registered trademarks of Riverbed Technology, Inc. All other trademarks used or mentioned herein belong to their respective owners. Linux is a trademark of Linus Torvalds in the United States and in other countries. Oracle and JInitiator are trademarks or registered trademarks of Oracle Corporation. Microsoft, Windows, Windows NT, Windows 2000, Windows Vista, Outlook, and Internet Explorer are trademarks or registered trademarks of Microsoft Corporation in the United States and in other countries. UNIX is a registered trademark in the United States and in other countries, exclusively licensed through X/Open Company, Ltd. Parts of this product are derived from the following software: Apache The Apache Software Foundation. All rights reserved. Busybox Eric Andersen ethtool 1994, , 1999, 2001, 2002 Free Software Foundation, Inc. Less Mark Nudelman Libevent Niels Provos. All rights reserved. LibGD, Version 2.0 licensed by Boutell.Com, Inc. Libtecla 2000, 2001 by Martin C. Shepherd. All rights reserved. Linux Kernel Linus Torvalds login The Regents of the University of California. All rights reserved. md5, md5.cc 1995 University of Southern California, , RSA Data Security, Inc. my_getopt.{c,h} 1997, 2000, 2001, 2002, Benjamin Sittler. All rights reserved. NET-SNMP Copyright 1989, 1991, 1992 by Carnegie Mellon University. All rights reserved. Derivative Work , Copyright 1996, The Regents of the University of California. All rights reserved. OpenSSH 1983, 1990, 1992, 1993, 1995, 1993 The Regents of the University of California. All rights reserved. pam Tall Maple Systems, Inc. All rights reserved. pam-radius 1989, 1991 Free Software Foundation, Inc. pam-tacplus by Pawel Krawczyk ssmtp GNU General Public License syslogd Tall Maple Systems, Inc. All rights reserved. Vixie-Cron 1988, 1990, 1993, 1994 by Paul Vixie. 
All rights reserved. Zile Sandro Sigalam 2003 Reuben Thomas. All rights reserved. This product includes software developed by the University of California, Berkeley and its contributors. This product is derived from the RSA Data Security, Inc. MD5 Message-Digest Algorithm. For detailed copyright and license agreements or modified source code (where required), see the Riverbed Technical Support site at Certain libraries were used in the development of this software, licensed under GNU Lesser General Public License, Version 2.1, February For a list of libraries, see the Riverbed Technical Support at You must log in to the support site to request modified source code. Other product names, brand names, marks, and symbols are registered trademarks or trademarks of their respective owners. The content of this manual is furnished on a RESTRICTED basis and is subject to change without notice and should not be construed as a commitment by Riverbed Technology, Incorporated. Use, duplication, or disclosure by the U.S. Government is subject to restrictions set forth in Subparagraphs (c) (1) and (2) of the Commercial Computer Software Restricted Rights at 48 CFR , as applicable. Riverbed Technology, Incorporated assumes no responsibility or liability for any errors or inaccuracies that may appear in this book. Riverbed Technology 199 Fremont Street San Francisco, CA Phone: Fax: Web: Part Number

Contents

Introduction
    About This Guide
    Types of Users
    Organization of This Guide
    Document Conventions
    Hardware and Software Dependencies
    Ethernet Network Compatibility
    SNMP-Based Management Compatibility
    Antivirus Compatibility
    Additional Resources
    Online Notes
    Riverbed Documentation
    Online Documentation
    Riverbed Knowledge Base
    Related Reading
    Contacting Riverbed
    Internet
    Riverbed Technical Support
    Riverbed Professional Services
    Documentation

Chapter 1  Steelhead Appliance Design Fundamentals
    How Steelhead Appliances Optimize Data
    Data Streamlining
    Transport Streamlining
    Application Streamlining
    Management Streamlining
    Choosing the Right Steelhead Appliance
    Deployment Modes for the Steelhead Appliance
    The Auto-Discovery Protocol
    Overview of Auto-Discovery
    Original Auto-Discovery Process
    Enhanced Auto-Discovery
    Controlling Optimization
    In-Path Rules
    Peering Rules
    High Bandwidth, Low Latency Environment Example
    Pass-Through Transit Traffic Example
    Fixed-Target In-Path Rules
    Fixed-Target In-Path Rule to an In-Path Address
    Fixed-Target In-Path Rule to a Primary Address
    Network Integration Tools
    Redundancy and Clustering
    Data Store Synchronization
    Fail-to-Wire and Fail-to-Block
    Link State Propagation
    Connection Forwarding
    Best Practices for Steelhead Appliance Deployments

Chapter 2  Physical In-Path Deployments
    Overview of In-Path Deployments
    The Logical In-Path Interface
    Failure Modes
    In-Path IP Address Selection
    Link State Propagation
    Cabling and Duplex
    Basic Steps for Deploying a Physical In-Path Steelhead Appliance
    High Availability Deployments
    A Basic Serial Cluster Deployment

Chapter 3  Virtual In-Path Deployments
    Overview of Virtual In-Path Deployments
    Configuring an In-Path, Load Balanced, Layer-4 Switch Deployment
    Basic Steps (Client-Side)
    Basic Steps (Server-Side)
    Configuring NetFlow in Virtual In-Path Deployments

Chapter 4  Out-of-Path Deployments
    Overview of Out-of-Path Deployments
    Limitations of Out-of-Path Deployments
    Out-of-Path Deployment Example

Chapter 5  WCCP Deployments
    Overview of WCCP
    Cisco Hardware and IOS Requirements
    The Pros and Cons of WCCP
    WCCP Fundamentals
    Configuring WCCP
    Basic Steps
    Configuring a Simple WCCP Deployment
    Configuring a High Availability Deployment
    Basic WCCP Router Configuration Commands
    Steelhead Appliance WCCP CLI Commands
    Configuring Additional WCCP Features
    Setting the Service Group Password
    Configuring Multicast Groups
    Configuring Group Lists to Limit Service Group Members
    Configuring Access Lists
    Configuring Load Balancing in WCCP
    NetFlow in WCCP
    Verifying and Troubleshooting WCCP Configurations

Chapter 6  PBR Deployments
    Overview of PBR
    PBR Failover and CDP
    Connecting the Steelhead Appliance in a PBR Deployment
    Configuring PBR
    Configuring PBR Overview
    Steelhead Appliance Directly Connected to the Router
    Steelhead Appliance Connected to Layer-2 Switch with a VLAN to the Router
    Steelhead Appliance Connected to a Layer-3 Switch
    NetFlow and Virtual In-Path Deployments

Chapter 7  PFS Deployments
    Overview of PFS
    When to Use PFS
    PFS Terms
    Upgrading V2.x PFS Shares
    Domain and Local Workgroup Settings
    Domain Mode
    Local Workgroup Mode
    PFS Share Operating Modes
    Lock Files
    Configuring PFS
    Configuration Requirements
    Basic Steps

Chapter 8  Protocol Optimization in the Steelhead Appliance
    CIFS Optimization
    HTTP Optimization
    MAPI Optimization
    MS-SQL Optimization
    NFS Optimization
    Implementing NFS Optimization
    Configuring IP Aliasing
    SSL Optimization
    How Does SSL Work?
    Configuring SSL Using the Management Console

Chapter 9  QoS Configuration and Integration
    Overview of QoS
    Introduction to QoS
    Introduction to Riverbed QoS
    Integrating Steelhead Appliances into Existing QoS Architectures
    WAN-Side Traffic Characteristics and QoS
    QoS Integration Techniques
    QoS Marking
    Enforcing QoS Policies Using Riverbed QoS
    QoS Classes
    QoS Rules
    Guidelines for the Maximum Number of QoS Classes and Rules
    QoS in Virtual In-Path and Out-of-Path Deployments
    Riverbed QoS Enforcement Best Practices
    Configuring Riverbed QoS
    Basic Steps
    Riverbed QoS Configuration Example

Chapter 10  WAN Visibility Modes
    Overview of WAN Visibility Modes
    Correct Addressing
    Transparent Addressing
    Port Transparency
    Full Address Transparency
    Configuring WAN Visibility Modes
    WAN Visibility CLI Commands
    Implications of Transparent Addressing
    Stateful Systems
    Network Design Issues

Chapter 11  RADIUS and TACACS+ Authentication
    Overview of Authentication
    Authentication CLI Commands
    New Authentication Features
    Configuring a RADIUS Server with FreeRADIUS
    Configuring a TACACS+ Server with Free TACACS
    Configuring TACACS+ with Cisco Secure Access Control Servers
    Configuring RADIUS Authentication in the Steelhead Appliance
    Basic Steps
    Configuring TACACS+ Authentication in the Steelhead Appliance
    Basic Steps

Chapter 12  Troubleshooting Deployment Problems
    Duplex Mismatches
        Solution: Manually Set Matching Speed and Duplex
        Solution: Use an Intermediary Switch
    Inability to Access Files During a WAN Disruption
        Solution: Use Proxy File Service
    Network Asymmetry
        Solution: Use Connection Forwarding
        Solution: Use Virtual In-Path Deployment
        Solution: Deploy a Four-Port Steelhead Appliance
    Old Antivirus Software
        Solution: Upgrade Antivirus Software
        Similar Problems
    Packet Ricochets
        Solution: Add In-Path Routes
        Solution: Use Simplified Routing
    Router CPU Spikes After WCCP Configuration
        Solution: Check Internetwork Operating System Compatibility
        Solution: Use Inbound Redirection
        Solution: Use Inbound Redirection with Fixed-Target Rules
        Solution: Use Inbound Redirection with Fixed-Target Rules and Redirect List
        Solution: Base Redirection on Ports Rather than ACLs
        Solution: Use PBR
    Server Message Block Signed Sessions
        Solution: Enable Secure-CIFS
        Solution: Disable SMB Signing with Active Directory
        Similar Problems
    Unavailable Opportunistic Locks
        Solution: None Needed
        Similar Problems
    Underutilized Fat Pipes
        Solution: Enable High-Speed TCP

Appendix A  Deployment Examples
    Physical In-Path Deployments
    Simple, Physical In-Path Deployment
    Physical In-Path with Dual Links
    Serial Cluster Deployment with Multiple Links
    Basic Example of Connection Forwarding
    Connection Forwarding with Allow-Failure and Fail-to-Block
    Resolving Transit Traffic Issues

Acronyms and Abbreviations
Glossary
Index

Introduction

Welcome to the Steelhead Appliance Deployment Guide. Read this introduction for an overview of the information provided in this guide and the documentation conventions used throughout, hardware and software dependencies, additional reading, and contact information.

This introduction includes the following sections:

- About This Guide, next
- Hardware and Software Dependencies on page 11
- Ethernet Network Compatibility on page 11
- SNMP-Based Management Compatibility on page 12
- Antivirus Compatibility on page 12
- Additional Resources on page 12
- Contacting Riverbed on page 14

About This Guide

The Steelhead Appliance Deployment Guide describes how to configure the Steelhead appliance in complex in-path and out-of-path deployments such as failover, multiple routing points, static clusters, connection forwarding, WCCP, Layer-4 and PBR, and PFS.

Types of Users

This guide is written for storage and network administrators familiar with administering and managing WANs using common network protocols such as TCP, CIFS, HTTP, FTP, NFS, and so forth.

Organization of This Guide

The Steelhead Appliance Deployment Guide includes the following chapters:

Chapter 1, Steelhead Appliance Design Fundamentals, describes how the Steelhead appliance optimizes data, the factors you need to consider when designing your Steelhead appliance deployment, the main Steelhead appliance configuration options, and how to use them.

Chapter 2, Physical In-Path Deployments, describes physical in-path deployments.

Chapter 3, Virtual In-Path Deployments, describes virtual in-path deployments.

Chapter 4, Out-of-Path Deployments, describes out-of-path deployments.

Chapter 5, WCCP Deployments, describes how to configure the Steelhead appliance for deployments using WCCP.

Chapter 6, PBR Deployments, describes how to configure the Steelhead appliance and routers for PBR.

Chapter 7, PFS Deployments, describes how to configure the Steelhead appliance to perform PFS.

Chapter 8, Protocol Optimization in the Steelhead Appliance, describes the Steelhead appliance optimization protocols and basic steps for implementing them.

Chapter 9, QoS Configuration and Integration, describes the Steelhead appliance QoS feature and how to implement it.

Chapter 10, WAN Visibility Modes, describes Steelhead appliance WAN visibility modes, the advantages and limitations of each mode, and how to configure them.

Chapter 11, RADIUS and TACACS+ Authentication, describes how to configure RADIUS or TACACS+ authentication for the Steelhead appliance.

Chapter 12, Troubleshooting Deployment Problems, describes common deployment problems and solutions.

Appendix A, Deployment Examples, provides examples of how to configure Steelhead appliances in various deployments.

A list of acronyms and a glossary of terms follow the chapters. An index directs you to areas of particular interest.

Document Conventions

This manual uses the following standard set of typographical conventions to introduce new terms, illustrate screen displays, describe command syntax, and so forth.

Convention  Meaning
italics     Within text, new terms and emphasized words appear in italic typeface.
boldface    Within text, commands, keywords, identifiers (names of classes, objects, constants, events, functions, program variables), environment variables, filenames, GUI controls, and other similar terms appear in bold typeface.
Courier     Information displayed on your terminal screen and information that you are instructed to enter appears in Courier font.
< >         Within syntax descriptions, values that you specify appear in angle brackets. For example: interface <ipaddress>
[ ]         Within syntax descriptions, optional keywords or variables appear in brackets. For example: ntp peer <addr> [version <number>]

Convention  Meaning
{ }         Within syntax descriptions, required keywords or variables appear in braces. For example: {delete <filename> | upload <filename>}
|           Within syntax descriptions, the pipe symbol represents a choice to select one keyword or variable to the left or right of the symbol. (The keyword or variable can be either optional or required.) For example: {delete <filename> | upload <filename>}

Hardware and Software Dependencies

The following table summarizes the hardware and software requirements for the Steelhead appliance.

Riverbed Component                        Hardware and Software Requirements
Steelhead Appliance                       19-inch (483 mm) two- or four-post rack.
Steelhead Management Console,             Any computer that supports a Web browser with a color image
Steelhead Central Management Console      display. The Management Console has been tested with Mozilla
                                          Firefox versions 1.5.x and 2.0.x and Microsoft Internet Explorer
                                          versions 6.0.x and 7.0.
                                          NOTE: JavaScript and cookies must be enabled in your Web browser.

Ethernet Network Compatibility

The Steelhead appliance supports the following types of Ethernet networks:

- Ethernet Logical Link Control (LLC) (IEEE )
- Fast Ethernet 100 Base-TX (IEEE )
- Gigabit Ethernet over Copper 1000 Base-T and Fiber 1000 Base-SX (LC connector) (IEEE )

The Primary port on the Steelhead appliance is 10 Base-T/100 Base-TX/1000 Base-T/SX Mbps (IEEE ). (The Primary port on the Model 100, 200 is Fast Ethernet only.) In-path Steelhead appliance ports are 10/100/1000 Base-TX or Gigabit Ethernet 1000 Base-T/SX (IEEE ), depending on your order.

The Steelhead appliance supports Virtual Local Area Network (VLAN) tagging (IEEE 802.1Q). It does not support the Cisco Inter-Switch Link (ISL) protocol.

All copper interfaces are auto-sensing for speed and duplex (IEEE ). The Steelhead appliance auto-negotiates speed and duplex mode for all data rates and supports full duplex mode and flow control (IEEE ).

The Steelhead appliance with a Gigabit Ethernet card supports Jumbo Frames on in-path and Primary ports.

SNMP-Based Management Compatibility

The Steelhead appliance supports a proprietary Riverbed MIB accessible through SNMP. Both SNMP v1 (RFCs 1155, 1157, 1212, and 1215) and SNMP v2c (RFCs 1901, 2578, 2579, 2580, 3416, 3417, and 3418) are supported, although some MIB items may only be accessible through SNMP v2.

SNMP support allows the Steelhead appliance to be integrated into network management systems such as Hewlett Packard OpenView Network Node Manager, BMC Patrol, and other SNMP-based network management tools.

Antivirus Compatibility

The Steelhead appliance has been tested with the following antivirus software with no impact on performance:

- Network Associates (McAfee) VirusScan Enterprise on the server
- Network Associates (McAfee) VirusScan Enterprise on the server
- Network Associates (McAfee) VirusScan Enterprise on the client
- Symantec (Norton) AntiVirus Corporate Edition 8.1 on the server

The Steelhead appliance has been tested with the following antivirus software with a noticeable to moderate impact on performance:

- F-Secure Anti-Virus 5.43 on the client
- F-Secure Anti-Virus 5.5 on the server
- Network Associates (McAfee) NetShield 4.5 on the server
- Network Associates VirusScan 4.5 for multi-platforms on the client
- Symantec (Norton) AntiVirus Corporate Edition 8.1 on the client

Additional Resources

This section describes resources that supplement the information in this guide. It includes the following sections:

- Online Notes, next
- Riverbed Documentation on page 13
- Online Documentation on page 13
- Riverbed Knowledge Base on page 13
- Related Reading on page 13

Online Notes

The following online file supplements the information in this manual. It is available on the Riverbed Technical Support site at

Online File                          Purpose
<product>_<version_number>.txt       Describes the product release and identifies fixed problems, known
                                     problems, and workarounds. This file also provides documentation
                                     information not covered in the manuals or that has been modified since
                                     publication. Examine this file before you begin the installation and
                                     configuration process. It contains important information about this
                                     release of the Steelhead appliance.

Riverbed Documentation

For a complete list of Riverbed documentation, log in to the Riverbed Technical Support Web site located at

Online Documentation

The Riverbed documentation set is periodically updated with new information. To access the most current version of Riverbed documentation and other technical information, consult the Riverbed Technical Support site located at

Riverbed Knowledge Base

The Riverbed Knowledge Base is a database of known issues, how-to documents, system requirements, and common error messages. You can browse titles or search for keywords and strings. To access the Riverbed Knowledge Base, log in to the Riverbed Technical Support site located at

Related Reading

To learn more about network administration, consult the following books:

- Microsoft Windows 2000 Server Administrator's Companion by Charlie Russell and Sharon Crawford (Microsoft Press, 2000)
- Common Internet File System (CIFS) Technical Reference by the Storage Networking Industry Association (Storage Networking Industry Association, 2002)
- TCP/IP Illustrated, Volume I, The Protocols by W. R. Stevens (Addison-Wesley, 1994)
- Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000)

Contacting Riverbed

This section describes how to contact departments within Riverbed.

Internet

You can find out about Riverbed products through our Web site at

Riverbed Technical Support

If you have problems installing, using, or replacing Riverbed products, contact Riverbed Technical Support or your channel partner who provides support. To contact Riverbed Technical Support, please open a trouble ticket at or call RVBD-TAC ( ) in the United States and Canada or +1 (415) outside the United States.

Riverbed Professional Services

Riverbed has a staff of professionals who can help you with installation assistance, provisioning, network redesign, project management, custom designs, consolidation project design, and custom-coded solutions. To contact Riverbed Professional Services, go to or email proserve@riverbed.com.

Documentation

We continually strive to improve the quality and usability of our documentation. We appreciate any suggestions you may have about our online documentation or printed materials. Send documentation comments to techpubs@riverbed.com.

CHAPTER 1  Steelhead Appliance Design Fundamentals

In This Chapter

This chapter describes how the Steelhead appliance optimizes data, the factors you need to consider when designing your Steelhead appliance deployment, and how and when to use the most fundamental and commonly utilized Steelhead appliance features. It includes the following sections:

- How Steelhead Appliances Optimize Data, next
- Choosing the Right Steelhead Appliance on page 19
- Deployment Modes for the Steelhead Appliance on page 21
- The Auto-Discovery Protocol on page 21
- Controlling Optimization on page 25
- Fixed-Target In-Path Rules on page 30
- Network Integration Tools on page 32
- Best Practices for Steelhead Appliance Deployments on page 37

How Steelhead Appliances Optimize Data

This section describes how the Steelhead appliance optimizes data. It includes the following sections:

- Data Streamlining, next
- Transport Streamlining on page 17
- Application Streamlining on page 18
- Management Streamlining on page 19

The causes of slow throughput in WANs are well known: high delay (round-trip time or latency), limited bandwidth, and chatty application protocols. Large enterprises spend a significant portion of their information technology budgets on storage and networks, much of it spent to compensate for slow throughput by deploying redundant servers and storage, and the required backup equipment. Steelhead appliances enable you to consolidate and centralize key IT resources to save money, reduce capital expenditures, simplify key business processes, and improve productivity.

RiOS is the software that powers the Steelhead appliance and Steelhead Mobile. With RiOS, you can solve a range of problems affecting WANs and application performance, including:

- insufficient WAN bandwidth.
- inefficient transport protocols in high-latency environments.
- inefficient application protocols in high-latency environments.

RiOS intercepts client-server connections without interfering with normal client-server interactions, file semantics, or protocols. All client requests are passed through to the server normally, while relevant traffic is optimized to improve performance. The optimization techniques RiOS utilizes are:

- Data Streamlining
- Transport Streamlining
- Application Streamlining
- Management Streamlining

Data Streamlining

Steelhead appliances and Steelhead Mobile can reduce WAN bandwidth utilization by 65% to 98% for TCP-based applications using Data Streamlining.

Scalable Data Referencing

In addition to traditional techniques like data compression, RiOS also uses a Riverbed proprietary algorithm called Scalable Data Referencing (SDR). SDR breaks up TCP data streams into unique data chunks that are stored in the hard disk (data store) of the device running RiOS (a Steelhead appliance or Steelhead Mobile host system). Each data chunk is assigned a unique integer label (reference) before it is sent to a peer RiOS device across the WAN. When the same byte sequence is seen again in future transmissions from clients or servers, the reference is sent across the WAN instead of the raw data chunk. The peer RiOS device (a Steelhead appliance or Steelhead Mobile host system) uses this reference to find the original data chunk in its data store and reconstruct the original TCP data stream.

Files and other data structures can be accelerated by Data Streamlining even when they are transferred using different applications. For example, a file that is initially transferred through CIFS is accelerated when it is transferred again through FTP.

Applications that encode data in a different format when they transmit over the WAN can also be accelerated by Data Streamlining. For example, Microsoft Exchange uses the MAPI protocol to encode file attachments prior to sending them to Microsoft Outlook clients. As a part of its MAPI-specific optimizations, RiOS un-encodes the data before applying SDR. This enables the Steelhead appliance to recognize byte sequences in file attachments in their native form when the file is subsequently transferred through FTP or copied to a CIFS file share.
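The chunk-and-reference scheme described above can be sketched in a few lines of Python. This is only a toy illustration: real SDR uses content-defined, variable-size chunks and a disk-backed, synchronized data store, so the fixed 8-byte chunking, the class names, and the token format here are illustrative assumptions, not Riverbed's implementation.

```python
# Toy sketch of reference-based deduplication in the spirit of SDR.
# Assumptions (not from the guide): fixed 8-byte chunks, in-memory dicts.

def chunk(stream: bytes, size: int = 8):
    """Split a byte stream into fixed-size chunks (real SDR uses
    content-defined, variable-size chunks)."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

class SenderStore:
    """WAN-side encoder: replaces previously seen chunks with references."""
    def __init__(self):
        self.refs = {}       # chunk bytes -> integer reference
        self.next_ref = 0

    def encode(self, stream: bytes):
        """Emit ('ref', n) for known chunks, ('raw', n, chunk) for new ones."""
        out = []
        for c in chunk(stream):
            if c in self.refs:
                out.append(('ref', self.refs[c]))
            else:
                self.refs[c] = self.next_ref
                out.append(('raw', self.next_ref, c))
                self.next_ref += 1
        return out

class ReceiverStore:
    """Peer-side decoder: rebuilds the original stream from its data store."""
    def __init__(self):
        self.chunks = {}     # integer reference -> chunk bytes

    def decode(self, tokens) -> bytes:
        data = b''
        for tok in tokens:
            if tok[0] == 'raw':
                _, ref, c = tok
                self.chunks[ref] = c   # learn the new chunk
                data += c
            else:
                data += self.chunks[tok[1]]  # expand the reference
        return data
```

On a second transfer of the same data, every token is a small reference rather than raw bytes, which is the source of the bandwidth reduction described above.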

Bi-Directionally Synchronized Data Store

Data and references are maintained in persistent storage in the data store within each RiOS device and are stable across reboots and upgrades. To provide further longevity and safety, local Steelhead appliance pairs keep their data stores fully synchronized bi-directionally at all times. This ensures that the failure of a single Steelhead appliance does not force remote Steelhead appliances to send previously transmitted data chunks. This is especially useful when the local Steelhead appliances are deployed in a network cluster, such as a master and backup deployment, a serial cluster, or a WCCP cluster.

For details about master and backup deployments, see Redundancy and Clustering on page 32. For details about serial cluster deployments, see High Availability Deployments on page 48. For details about WCCP deployments, see WCCP Deployments on page 61.

Unified Data Store

A key Riverbed innovation is the unified data store, which Data Streamlining uses to reduce bandwidth usage. After a data pattern is stored on the disk of a Steelhead appliance or Steelhead Mobile peer, it can be leveraged for transfers to any other Steelhead appliance or Steelhead Mobile peer, across all applications being accelerated. This means that data is not duplicated within the data store, even if it is used in different applications, in different data transfer directions, or with new peers. The unified data store ensures that RiOS uses its disk space as efficiently as possible, even when working with thousands of remote Steelhead appliances or Steelhead Mobile peers.

QoS

Data Streamlining includes optional QoS enforcement. QoS enforcement allows bandwidth and latency requirements to be decoupled through the implementation of Hierarchical Fair Service Curve (HFSC) queuing technology.
QoS enforcement can be applied to both optimized and unoptimized traffic, both TCP and UDP, and is uniquely suited to the low-latency requirements of VoIP, video, and Citrix traffic. The use of QoS enforcement is optional. RiOS offers the ability to either pass through existing DSCP and DiffServ markings, or to apply new DSCP markings.

Transport Streamlining

Steelhead appliances use a generic latency optimization technique called Transport Streamlining. Transport Streamlining uses a set of standard and proprietary techniques to optimize TCP traffic between Steelhead appliances. These techniques:

- ensure that efficient retransmission methods, such as TCP selective acknowledgements, are used.
- negotiate optimal TCP window sizes to minimize the impact of latency on throughput.
- maximize throughput across a wide range of WAN links.

Steelhead appliance to Steelhead appliance TCP connections, by default, share available bandwidth with other non-Steelhead appliance traffic. This is true even when using the HS-TCP transport option, an IETF-specified TCP sender modification that can achieve high throughput on links with large bandwidth and large latency.
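To see why window negotiation matters, consider the bandwidth-delay product (BDP): the amount of unacknowledged data that must be in flight to keep a link full. The worked figures below are illustrative examples, not Riverbed specifications or RiOS defaults.

```python
# Bandwidth-delay product arithmetic behind TCP window sizing.
# Link speed and RTT values below are made-up illustrative inputs.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bytes that must be in flight to fill a link of the given
    bandwidth at the given round-trip time."""
    return bandwidth_bps / 8 * rtt_seconds

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Throughput ceiling imposed by a fixed window over a given RTT:
    at most one window can be delivered per round trip."""
    return window_bytes * 8 / rtt_seconds

# A 45 Mbps link with 100 ms RTT needs roughly 562 KB in flight,
# far more than a classic unscaled 64 KB TCP window can cover.
needed = bdp_bytes(45e6, 0.100)
ceiling = max_throughput_bps(65535, 0.100)   # ~5.2 Mbps of a 45 Mbps link
```

This is the gap that negotiating larger window sizes (the second bullet above) closes on high-latency links.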

You can selectively use the MX-TCP option on traffic you want to transmit at a specific rate over the WAN, regardless of the presence of other traffic. While not appropriate for all environments, MX-TCP can maintain data transfer throughput where adverse network conditions, such as abnormally high packet loss, impair the performance and throughput of normal TCP connections. MX-TCP effectively handles packet loss without the loss of throughput typically experienced with TCP.

Connection Pooling

Some application protocols, such as HTTP, often use many rapidly created, short-lived TCP connections. To optimize these protocols, Steelhead appliances create pools of idle TCP connections. When a client tries to create a new connection to a previously visited server, the Steelhead appliance uses one from its pool of connections. This spares the client and the Steelhead appliance from having to wait for a three-way TCP handshake to finish across the WAN. This feature, called connection pooling, is available for connections using the correct addressing WAN visibility mode. For details about WAN visibility modes, see WAN Visibility Modes on page 139.

Transport Streamlining ensures that there is always a one-to-one ratio for active TCP connections between Steelhead appliances and the TCP connections to clients and servers. That is, Steelhead appliances do not tunnel or perform multiplexing and de-multiplexing of data across connections. This is true regardless of the WAN visibility mode in use.

DSCP and ToS QoS Mirroring

In addition, DSCP or ToS QoS markings on the LAN-side connections, by default, are mirrored onto the WAN-side, Steelhead appliance to Steelhead appliance connections. These two architectural components allow existing network-based QoS or prioritization systems to treat traffic with the same granularity as before any Steelhead appliances were deployed.
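The connection pooling behavior described above can be sketched as follows. The Connection class, the warm-connection count, and the pool policy are hypothetical stand-ins for illustration; the RiOS implementation is not shown here.

```python
# Minimal sketch of connection pooling: reuse an idle, already-established
# connection instead of paying a WAN round trip for a new three-way
# handshake. All names and limits here are illustrative assumptions.
import collections

class Connection:
    def __init__(self, server: str):
        self.server = server     # imagine a completed TCP handshake here

class ConnectionPool:
    def __init__(self, warm_per_server: int = 2):
        self.idle = collections.defaultdict(list)  # server -> idle conns
        self.warm_per_server = warm_per_server

    def get(self, server: str) -> Connection:
        """Hand out a pooled connection if one is idle; otherwise pay the
        handshake cost and dial a fresh one."""
        if self.idle[server]:
            return self.idle[server].pop()
        return Connection(server)

    def release(self, conn: Connection) -> None:
        """Keep finished connections warm for the next request, up to a cap."""
        if len(self.idle[conn.server]) < self.warm_per_server:
            self.idle[conn.server].append(conn)
```

A second request to the same server reuses the released connection, which is the round trip that pooling saves for handshake-heavy protocols such as HTTP.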
Application Streamlining

In addition to Data and Transport Streamlining optimizations, RiOS can apply application-specific optimizations for certain application protocols. For Steelhead appliances using RiOS v5.0.x and later, this includes:

- CIFS and SMB (Windows file sharing)
- MAPI (Outlook and Exchange 2000)
- MAPI 2003 (Outlook and Exchange 2003)
- MAPI 2007 (Outlook and Exchange 2007)
- NFS v3 (Unix file sharing)
- TDS (Microsoft SQL Server)
- HTTP
- HTTPS and SSL
- Oracle11i-Native
- Oracle11i-HTTP

19 These protocol-specific optimizations reduce the number of round trips over the WAN for common actions and help see past data obfuscation or encryption, including: opening and editing documents on remote file servers (CIFS). sending and receiving attachments (MAPI). viewing remote intranet sites (HTTP). securely performing SDR for SSL encrypted transmissions (HTTPS). Management Streamlining Management Streamlining refers to the methods that Riverbed has developed to simplify the deployment and management of RiOS devices. These methods include: Auto-Discovery Protocol. Auto-discovery enables Steelhead appliances and Steelhead Mobile to automatically find remote Steelhead appliances, and to then optimize traffic using them. Autodiscovery relieves you from having to manually configure large amounts of network information. The auto-discovery process enables administrators to: control and secure connections. specify which traffic is to be optimized. specify peers for optimization. Central Management Console (CMC). The CMC enables new, remote Steelhead appliances to be automatically configured and monitored. It also gives you a single view of the overall benefit and health of the Steelhead appliance network. Steelhead Mobile Controller. The Mobile Controller is the management appliance you use to track the individual health and performance of each deployed software client, and to manage enterprise client licensing. The Mobile Controller enables you to see who is connected, view their data reduction statistics, and perform support operations such as resetting connections, pulling logs, and automatically generating traces for troubleshooting. You can perform all of these management tasks without end user input. Choosing the Right Steelhead Appliance Generally, you select a Steelhead appliance model based on the number of users, the bandwidth requirements, and the applications used at the deployment site. 
However:

- If you do not want to optimize applications that transfer large amounts of data (for example, WAN-based backup or restore operations, system image or update distribution, and so forth), choose your Steelhead appliance model based on the amount of bandwidth and number of connections at your site.
- If you do want to optimize applications that transfer large amounts of data, choose your Steelhead appliance model based on the amount of bandwidth and number of connections at your site, as well as on the size of the Steelhead appliance data store.

Once you have considered these factors, you might also consider high availability, redundancy, or other requirements.

STEELHEAD APPLIANCE DEPLOYMENT GUIDE 19

If no single Steelhead appliance model meets your requirements, and depending on your deployment model, there are many ways to cluster Steelhead appliances together to provide scaling and, if needed, redundancy.

Steelhead appliance models vary according to the following attributes:

- Number of concurrent TCP connections that can be optimized
- Amount of disk storage available for SDR
- Amount of WAN bandwidth that can be used
- Maximum possible in-path interfaces
- Availability of fiber interfaces
- Availability of RAID for the data store
- Availability of redundant power supplies
- Upgrade options via software licenses
- Support for PFS shares

All Steelhead appliance models have the following specifications that determine the amount of traffic a single Steelhead appliance can optimize:

- Number of Concurrent TCP Connections. Each Steelhead appliance model can optimize a certain number of concurrent TCP connections. The number of TCP connections you need for optimization depends on the number of users at your site, the applications you use, and whether you want to optimize all applications or just a few of them. When planning corporate enterprise deployments, Riverbed recommends a ratio of 5-15 connections per user if full optimization is desired, depending on the applications being used.

  NOTE: If the number of connections you want to optimize exceeds the limit of the Steelhead appliance model, the Steelhead appliance allows excess connections to pass through unoptimized.

- WAN Bandwidth Rating. Each Steelhead appliance model has a limit on the rate at which it pushes optimized data towards the WAN. You must select a Steelhead appliance model that is rated for at least the bandwidth available at the deployment site. This limit does not apply to pass-through traffic.

  NOTE: When a Steelhead appliance reaches its rate limit, it does not start passing through traffic. Rather, it begins shaping traffic to this limit.
New optimized connections can still be set up if the connection limit allows.

- Data Store Size. Each Steelhead appliance model has a fixed amount of disk space available for SDR. Because SDR stores unique patterns of data, the amount of data store needed by a deployed Steelhead appliance differs from the amount needed by applications or file servers. For the best optimization possible, the Steelhead appliance data store should be large enough to hold all of the commonly accessed data at a site. Old data that is recorded in the Steelhead appliance data store might eventually be overwritten by new data, depending on traffic patterns.

At sites where applications transfer large amounts of data (for example, WAN-based backup or restore operations, system image or update distribution, and so forth), you must select the Steelhead appliance model based not only on the amount of bandwidth and number of connections at the site, but also on the size of the Steelhead appliance data store. Sites without these applications are typically sized by considering only the bandwidth and number of connections.
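The sizing guidance above can be expressed as simple arithmetic. The following sketch is illustrative only: the 5-15 connections-per-user ratio comes from the text, but the model ratings used in the example are hypothetical numbers, not real Steelhead product specifications.

```python
# Sketch: checking whether a hypothetical Steelhead model fits a site.
# The per-user connection ratio (5-15) is the sizing guidance from the text;
# the model ratings below are illustrative placeholders.

def required_connections(users: int, per_user: int = 10) -> int:
    """Estimate concurrent TCP connections to optimize (5-15 per user)."""
    return users * per_user

def model_fits(users, site_wan_kbps, model_max_conns, model_wan_kbps):
    """True if the model's connection and WAN-bandwidth ratings cover the site.
    Connections beyond the limit pass through unoptimized, and the WAN
    rating applies only to optimized (not pass-through) traffic."""
    return (required_connections(users) <= model_max_conns
            and site_wan_kbps <= model_wan_kbps)

# Example: 200 users on a 10 Mbps link vs. an illustrative model rated for
# 2,500 optimized connections and 10 Mbps of optimized WAN throughput.
print(model_fits(200, 10_000, 2_500, 10_000))  # True
```

If either check fails, the next larger model, or a cluster of appliances, would be considered; data-store size would be checked separately for sites with large data transfers.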

If you need help planning, designing, deploying, or operating your Steelhead appliances, Riverbed offers consulting services directly and through Riverbed authorized partners. For details, contact Riverbed Professional Services, located at or contact them at

Deployment Modes for the Steelhead Appliance

Steelhead appliances can be placed into the network in many different ways. Deployment modes available for the Steelhead appliances include:

- Physical In-Path. In a physical in-path deployment, the Steelhead appliance is physically in the direct path between clients and servers. In-path designs are the simplest to configure and manage, and are the most common type of Steelhead appliance deployment, even for large sites. Many variations of physical in-path deployments are possible, to account for redundancy, clustering, and asymmetric traffic flows. For details, see Physical In-Path Deployments on page 39.
- Virtual In-Path. In a virtual in-path deployment, a redirection mechanism (such as WCCP, PBR, or Layer-4 switching) is used to place the Steelhead appliance virtually in the path between clients and servers. For details, see Virtual In-Path Deployments on page 53.
- Out-of-Path. In an out-of-path deployment, the Steelhead appliance is not in the direct path between the client and the server; instead, it acts as a proxy. This type of deployment might be suitable for locations where physical in-path or virtual in-path configurations are not possible. However, out-of-path deployments have several drawbacks you need to be aware of. For details, see Out-of-Path Deployments on page 57.

The Auto-Discovery Protocol

This chapter describes the Steelhead appliance auto-discovery protocol.
It includes the following sections:

- Overview of Auto-Discovery, next
- Original Auto-Discovery Process on page 22
- Enhanced Auto-Discovery on page 24

Overview of Auto-Discovery

Auto-discovery enables Steelhead appliances to automatically find remote Steelhead appliances and to optimize traffic with them. Auto-discovery relieves you of having to manually configure the Steelhead appliances with large amounts of network information. The auto-discovery process enables you to:

- control and secure connections.
- specify which traffic is optimized.
- specify how remote peers are selected for optimization.

There are two types of auto-discovery, original and enhanced:

- Original Auto-Discovery. Automatically finds the first remote Steelhead appliance along the connection path.
- Enhanced Auto-Discovery (available in RiOS v4.0.x or later). Automatically finds the last Steelhead appliance along the connection path.

Most Steelhead appliance deployments use auto-discovery. You can also manually configure Steelhead appliance pairing using fixed-target in-path rules, but this approach requires ongoing configuration: you must track new subnets as they appear in the network and know which Steelhead appliance is responsible for optimizing their traffic. For details about fixed-target in-path rules, see Fixed-Target In-Path Rules on page 30.

Original Auto-Discovery Process

The following section describes how a client connects to a remote server when the Steelhead appliances have auto-discovery enabled. In this example, each Steelhead appliance uses correct addressing and a single subnet.

Figure 1-1. The Auto-Discovery Process

NOTE: This example does not illustrate asymmetric routing detection or enhanced auto-discovery peering.

In the original auto-discovery process:

1. The client initiates the TCP connection by sending a TCP SYN packet.

2. The client-side Steelhead appliance receives the packet on its LAN interface, examines the packet, discovers it is a SYN, and continues processing the packet. Using information from the SYN packet (for example, the source or destination address, VLAN tag, and so forth), the Steelhead appliance performs an action based on a configured set of rules, called in-path rules. In this example, because the matching rule for the packet is set to auto, the Steelhead appliance uses auto-discovery to find the remote Steelhead appliance. The Steelhead appliance appends a TCP option to the packet TCP option field. This is the probe query option, which contains the in-path IP address of the client-side Steelhead appliance. Nothing else in the packet changes; only the option is added.

3. The Steelhead appliance forwards the modified packet (denoted as SYN_probe_query) out of the WAN interface. Because neither the source nor the destination fields are modified, the packet is routed in the same manner as if no Steelhead appliance were deployed.

4. The server-side Steelhead appliance receives the SYN_probe_query packet on its WAN interface, examines the packet, discovers that it is a SYN packet, and therefore searches for a TCP probe query. If found, the server-side Steelhead appliance uses the packet fields and the IP address of the client-side Steelhead appliance to determine what action to take based on its peering rules. In this example, because the matching rule is set to accept (or auto, depending on the RiOS version), the server-side Steelhead appliance tells the client-side Steelhead appliance that it is the remote optimization peer for this TCP connection.
The server-side Steelhead appliance removes the probe_query option from the packet and replaces it with a probe_response option (the probe_query and probe_response use the same TCP option number). The probe_response option contains the in-path IP address of the server-side Steelhead appliance. The Steelhead appliance then reverses all of the source and destination fields (TCP and IP) in the packet header. The packet sequence numbers and flags are modified to make the packet look like a normal SYN/ACK server response packet.

If no server-side Steelhead appliance is present, the server ignores the TCP probe that was added by the client-side Steelhead appliance and responds with a regular SYN/ACK, resulting in a pass-through connection.

5. The server-side Steelhead appliance transmits the packet to the client-side Steelhead appliance. Because the destination IP address of the packet is now the client IP address, the packet is routed through the WAN just as if the server were responding to the client.

6. The client-side Steelhead appliance receives the packet on its WAN interface, examines it, and discovers that it is a SYN/ACK. The client-side Steelhead appliance scans for and finds the probe_response field, and reads the in-path IP address of the server-side Steelhead appliance. The client-side Steelhead appliance now knows all the parameters of the packet TCP flow, including the:

- IP addresses of the client and server.

- TCP source and destination ports for this connection.
- in-path IP address of the server-side Steelhead appliance for this connection.

7. The Steelhead appliances now establish three TCP connections:

- The client-side Steelhead appliance completes the TCP connection setup with the client, as if it were the server.
- The Steelhead appliances complete the TCP connection between each other.
- The server-side Steelhead appliance completes the TCP connection with the server, as if it were the client.

After the three TCP connections are established, optimization begins. The data sent between the client and server for this specific connection is optimized and carried on its own individual TCP connection between the Steelhead appliances.

Enhanced Auto-Discovery

In RiOS v4.0.x or later, enhanced auto-discovery is available. Enhanced auto-discovery automatically discovers the last Steelhead appliance in the network path of the TCP connection. In contrast, the original auto-discovery protocol automatically discovers the first Steelhead appliance in the path. The difference is only seen in environments where there are three or more Steelhead appliances in the network path for connections to be optimized. Enhanced auto-discovery works with Steelhead appliances running the original auto-discovery protocol.

Enhanced auto-discovery ensures that a Steelhead appliance only optimizes TCP connections that are being initiated or terminated at its local site, and that a Steelhead appliance does not optimize traffic that is transiting through its site. For details about passing through transit traffic using enhanced auto-discovery and peering rules, see Resolving Transit Traffic Issues on page 184.

To enable enhanced auto-discovery

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2.
At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering auto

For details about connecting to and using the Steelhead CLI, see the Riverbed Command-Line Interface Reference Manual.
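The original auto-discovery exchange described earlier can be modeled in a few lines. This is a simplified sketch, not the wire protocol: real Steelheads carry the probe in a numbered TCP option and also adjust sequence numbers and flags, which this model ignores, and all field names are illustrative.

```python
# Sketch of the original auto-discovery handshake: the client-side Steelhead
# appends a probe query to the SYN; the server-side Steelhead replaces it
# with a probe response and reverses the source/destination fields.

PROBE_OPT = "probe"  # query and response share one TCP option slot

def client_sh_add_probe(syn, client_sh_inpath_ip):
    """Client-side Steelhead appends a probe query to a matching SYN."""
    probed = dict(syn)
    probed[PROBE_OPT] = {"type": "query", "sh_ip": client_sh_inpath_ip}
    return probed  # src/dst are unchanged, so normal routing applies

def server_sh_answer_probe(pkt, server_sh_inpath_ip):
    """Server-side Steelhead replaces the query with a response and reverses
    source/destination so the packet looks like a SYN/ACK to the client."""
    opt = pkt.get(PROBE_OPT)
    if not (opt and opt["type"] == "query"):
        return None  # no probe found: the connection passes through
    return {
        "src": pkt["dst"], "dst": pkt["src"],  # fields reversed
        PROBE_OPT: {"type": "response", "sh_ip": server_sh_inpath_ip},
    }

syn = {"src": "client", "dst": "server"}
probed = client_sh_add_probe(syn, "sh-a-inpath")
reply = server_sh_answer_probe(probed, "sh-b-inpath")
print(reply[PROBE_OPT]["sh_ip"])  # the client-side Steelhead learns its peer
```

After this exchange, the client-side Steelhead knows the server-side in-path address and the three TCP connections described in step 7 can be established.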

Controlling Optimization

There are two ways to configure what traffic a Steelhead appliance optimizes and what other actions it performs:

- In-path rules. In-path rules determine the action a Steelhead appliance takes when a connection is initiated, usually by a client.
- Peering rules. Peering rules determine how a Steelhead appliance reacts when it sees a probe query.

In-Path Rules

In-path rules are used only when a connection is initiated. Because connections are usually initiated by clients, in-path rules are configured on the initiating, or client-side, Steelhead appliance. In-path rules determine Steelhead appliance behavior with SYN packets.

In-path rules are an ordered list of fields a Steelhead appliance uses to match against SYN packet fields (for example, source or destination subnet, IP address, VLAN, or TCP port). Each in-path rule has an action field. When a Steelhead appliance finds a matching in-path rule for a SYN packet, the Steelhead appliance treats the packet according to the action specified in the in-path rule.

There are five types of in-path rule actions, each with different configuration possibilities:

- Auto. Use the auto-discovery process to determine whether a remote Steelhead appliance can optimize the connection that this SYN packet is attempting to create.
- Pass. Allow the SYN packet to pass through the Steelhead appliance. No optimization is performed on the TCP connection initiated by this SYN packet.
- Fixed-Target. Skip the auto-discovery process and use a specified remote Steelhead appliance as an optimization peer. Fixed-target rules require the input of at least one remote target Steelhead appliance; an optional backup Steelhead appliance might also be specified. For details about fixed-target in-path rules, see Fixed-Target In-Path Rules on page 30.
- Deny. Drop the SYN packet and send a message back to its source.
- Discard. Drop the SYN packet silently.
In-path rules are used only in the following scenarios:

- A TCP SYN packet arrives on the LAN interface of a physical in-path deployment.
- A TCP SYN packet arrives on the WAN0_0 interface of a virtual in-path deployment.

Both of these scenarios are associated with the first, or initiating, SYN packet of the connection. Because most connections are initiated by the client, you configure your in-path rules on the client-side Steelhead appliance. In-path rules have no effect on connections that are already established, regardless of whether the connections are being optimized.

In-path rule configurations differ depending on the action. For example, both the fixed-target and the auto-discovery actions allow you to choose what type of optimization is applied, what type of data reduction is used, what type of latency optimization is applied, and so forth. For an example of how in-path rules are used, see High Bandwidth, Low Latency Environment Example on page 26.
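The first-match semantics of an ordered in-path rule list can be sketched as follows. The rule fields and the matching predicate here are simplified placeholders; a real Steelhead matches on more fields (VLAN, destination subnet, and so on).

```python
# Sketch: first-match evaluation of an ordered in-path rule list against a
# SYN packet, using the rule actions described above.

from ipaddress import ip_address, ip_network

def match(rule, syn):
    """A rule matches if every field it specifies covers the SYN's value."""
    if "srcaddr" in rule and ip_address(syn["src"]) not in ip_network(rule["srcaddr"]):
        return False
    if "dstport" in rule and syn["dstport"] != rule["dstport"]:
        return False
    return True

def in_path_action(rules, syn, default="auto"):
    """Return the action of the first matching rule (rules are ordered)."""
    for rule in rules:
        if match(rule, syn):
            return rule["action"]
    return default

rules = [
    {"dstport": 443, "action": "pass"},            # e.g. encrypted traffic passes through
    {"srcaddr": "10.1.0.0/16", "action": "auto"},  # auto-discover for local clients
]
print(in_path_action(rules, {"src": "10.1.2.3", "dstport": 80}))  # auto
```

Because evaluation stops at the first match, rule order matters: a broad pass rule placed first shadows any more specific rule after it.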

Default In-Path Rules

Three default in-path rules ship with Steelhead appliances. These default rules pass through certain types of traffic unoptimized, primarily because you are likely to use these protocols (telnet, ssh, https) when you deploy and configure your Steelhead appliances. The default in-path rules can be removed or overwritten by altering or adding other rules to the in-path rule list, or by changing the port groups that are used.

The default rules allow the following traffic to pass through the Steelhead appliance without attempting optimization:

- Encrypted Traffic. Includes HTTPS, SSH, and others.
- Interactive Traffic. Includes telnet, ICA, and others.
- Riverbed Protocols. Includes the TCP ports used by Riverbed products (that is, the Steelhead appliance, the Interceptor appliance, and the Steelhead Mobile Controller).

Peering Rules

Peering rules control Steelhead appliance behavior when it sees probe queries. Peering rules (displayed using the show in-path peering rules CLI command) are an ordered list of fields a Steelhead appliance uses to match against incoming SYN packet fields (for example, source or destination subnet, IP address, VLAN, or TCP port), as well as the IP address of the probing Steelhead appliance. This is especially useful in complex networks.

There are the following types of peering rule actions:

- Pass. The receiving Steelhead appliance does not respond to the probing Steelhead appliance, and allows the SYN+probe packet to continue through the network.
- Accept. The receiving Steelhead appliance responds to the probing Steelhead appliance and becomes the remote-side Steelhead appliance (that is, the peer Steelhead appliance) for the optimized connection.
- Auto. If the receiving Steelhead appliance is not using enhanced auto-discovery, this has the same effect as the Accept peering rule action.
If enhanced auto-discovery is enabled, the Steelhead appliance only becomes the optimization peer if it is the last Steelhead appliance in the path to the server.

If a packet does not match any peering rule in the list, the default rule applies.

High Bandwidth, Low Latency Environment Example

To illustrate how in-path and peering rules might be used when designing Steelhead appliance deployments, consider a network that has high bandwidth, low latency, and a large number of users.
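The three peering rule actions can be summarized as a small decision function. This is a sketch of the decision logic only: `am_last_in_path` stands in for what enhanced auto-discovery actually determines on the wire, and the return values are illustrative labels.

```python
# Sketch: how a Steelhead reacts to a probe query under the peering rule
# actions described above (pass, accept, auto).

def peering_decision(action, enhanced_enabled, am_last_in_path):
    """Return 'respond' (become the remote peer for the connection) or
    'pass' (let the SYN+probe continue through the network)."""
    if action == "pass":
        return "pass"
    if action == "accept":
        return "respond"
    if action == "auto":
        # Without enhanced auto-discovery, auto behaves like accept;
        # with it, only the last Steelhead in the path responds.
        if not enhanced_enabled or am_last_in_path:
            return "respond"
        return "pass"
    raise ValueError(f"unknown peering action: {action}")

# A middle Steelhead with enhanced auto-discovery lets the probe continue:
print(peering_decision("auto", enhanced_enabled=True, am_last_in_path=False))
```

This is why enhanced auto-discovery keeps intermediate (transit) Steelheads out of connections that neither start nor end at their site.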

The following figure illustrates this scenario occurring between two buildings at the same site. In this situation, you want to select Steelhead appliance models to optimize traffic going to and from the WAN. However, you do not want to optimize traffic flowing between Steelhead appliance A and Steelhead appliance B. There are two ways to achieve this result.

Figure 1-2. High Bandwidth Utilization, Low Latency, and Many Connections Between Steelhead Appliances

You can use:

- In-path rules. You can configure in-path rules on each of the Steelhead appliances (in Building A and Building B) so that the Steelhead appliances do not perform auto-discovery on any of the subnets in Building A and Building B. This option requires knowledge of all subnets within the two buildings, and also requires that you update the list of subnets as the network is modified.
- Peering rules. You can configure peering rules on Steelhead A and Steelhead B that pass through probe packets carrying the in-path IP address of the other Steelhead appliance (Steelhead A passes through probe packets with the in-path IP address of Steelhead B, and vice versa). Using peering rules requires less initial configuration and less ongoing maintenance, because you do not need to update a list of subnets in the peering rules for each of the Steelhead appliances.

The following figure illustrates how to use peering rules to prevent optimization from occurring between two Steelhead appliances and still allow optimization for traffic going to and from the WAN.

Figure 1-3. Peering Rules for High Utilization Between Steelhead Appliances

Steelhead A has a Pass peering rule for all traffic coming from the Steelhead B in-path interface. This means Steelhead A lets connections from Steelhead B pass through it unoptimized. Steelhead B has a Pass peering rule for all traffic coming from the Steelhead A in-path interface. This means Steelhead B lets connections from Steelhead A pass through it unoptimized.

To configure Steelhead A

1. On Steelhead A, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering rule pass peer rulenum
end

To configure Steelhead B

1. On Steelhead B, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering rule pass peer rulenum
end

NOTE: If a packet does not match any of the configured peering rules, the default auto peering rule is used.

Pass-Through Transit Traffic Example

Transit traffic is data that flows through a Steelhead appliance whose source or destination is not local to that Steelhead appliance. For details, see Resolving Transit Traffic Issues on page 184. A Steelhead appliance must only optimize traffic that is initiated or terminated at the site where it resides; any extra WAN hop between the Steelhead appliance and the client or server greatly reduces the optimization benefits seen by those connections.

IMPORTANT: All Riverbed performance and quality assurance testing results are taken from deployments in which Steelhead appliances only optimize locally initiated or terminated traffic.

For example, in the following figure the Steelhead appliance at the Chicago site sees transit traffic between San Francisco and New York. You want the initiating Steelhead appliance (San Francisco) and the terminating Steelhead appliance (New York) to optimize this traffic, rather than the Steelhead appliance in Chicago. To ensure that the Chicago Steelhead appliance only optimizes traffic that is locally initiated or terminated, you configure peering rules and in-path rules only on the Chicago Steelhead appliance.

In this example, assume that the default in-path rules are configured on all three Steelhead appliances. Because the default action for in-path rules and peering rules is to use auto-discovery, two in-path rules and two peering rules must be configured on the Chicago Steelhead appliance. The following figure illustrates how to use peering rules and in-path rules to resolve a transit traffic issue on the Chicago Steelhead appliance.

Figure 1-4. Peering Rules for Transit Traffic

You can configure peering rules for transit traffic using the Riverbed CLI.

To configure the Chicago Steelhead Appliance

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path rule auto srcaddr /24 rulenum
end
in-path rule pass rulenum
end
in-path peering rule auto dest /24 rulenum
end
in-path peering rule pass rulenum
end

For details about transit traffic, see Resolving Transit Traffic Issues on page 184.

Fixed-Target In-Path Rules

A fixed-target in-path rule is one of the five types of in-path rules; it allows you to manually specify a remote Steelhead appliance to use for optimization. As with all in-path rules, fixed-target in-path rules are only executed for SYN packets, and therefore are configured on the initiating, or client-side, Steelhead appliance. For details about in-path rules, see In-Path Rules on page 25.

Fixed-target in-path rules can be used in environments where the auto-discovery process cannot work. A fixed-target rule requires the input of at least one target Steelhead appliance; an optional backup Steelhead appliance can also be specified.

Fixed-target in-path rules have several disadvantages compared to auto-discovery:

- It can be difficult to determine which subnets to include in the fixed-target rule.
- Ongoing modifications to rules are needed as new subnets or Steelhead appliances are added to the network.
- Currently, only two remote Steelhead appliances can be specified. All traffic is directed to the first Steelhead appliance until it reaches capacity, or until it stops responding to requests to connect. Traffic is then directed to the second Steelhead appliance (until it reaches capacity, or until it stops responding to requests to connect).

Because of these disadvantages, fixed-target in-path rules are used less frequently than auto-discovery. In general, fixed-target rules are used only when auto-discovery cannot be used.
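The target/backup selection behavior of a fixed-target rule can be sketched as follows. The peer records, addresses (from the TEST-NET range), and capacity fields are illustrative assumptions, not Steelhead data structures.

```python
# Sketch: fixed-target peer selection -- all traffic goes to the first
# (target) Steelhead until it is at capacity or stops responding, then to
# the optional backup; otherwise the connection passes through.

def pick_fixed_target(target, backup=None):
    """Return the in-path IP of the peer to use, or None (pass-through)."""
    for peer in (target, backup):
        if peer and peer["responding"] and peer["conns"] < peer["capacity"]:
            return peer["ip"]
    return None

target = {"ip": "192.0.2.10", "responding": False, "conns": 0, "capacity": 1000}
backup = {"ip": "192.0.2.20", "responding": True, "conns": 10, "capacity": 1000}
print(pick_fixed_target(target, backup))  # 192.0.2.20 (target is unresponsive)
```

Because the rule names peers explicitly, every new remote site or Steelhead requires a rule change, which is the maintenance burden listed above.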
There is a significant difference in LAN data flow depending on whether the fixed-target (or backup) IP address listed in the fixed-target in-path rule is for a Steelhead appliance primary interface or its in-path interface.

Fixed-Target In-Path Rule to an In-Path Address

Fixed-target in-path rules that target the in-path interface IP address of a remote (physical or virtual) in-path Steelhead appliance are used in environments where the auto-discovery process cannot work. For example:

- Traffic traversing the WAN passes through a satellite or other device that strips off TCP options, including those used by auto-discovery.
- Traffic traversing the WAN goes through a device that proxies TCP connections and uses its own TCP connection to transport the traffic. For example, some satellite-based WANs use built-in TCP proxies in their satellite uplinks.

When the target IP address of a fixed-target in-path rule is a Steelhead appliance in-path interface, the traffic between the server-side Steelhead appliance and the server looks like client-to-server traffic; that is, the server sees connections coming from the client IP address. This is the same as when auto-discovery is used.

The following figure illustrates how to use a fixed-target in-path rule to the Steelhead appliance in-path interface. In this example, a fixed-target in-path rule is used to resolve an issue with a satellite. The satellite gear strips the TCP option from the packet, which means the remote Steelhead appliance never sees the TCP option, and the connection cannot be optimized by auto-discovery. To work around this, you configure a fixed-target in-path rule on the initiating Steelhead appliance (Steelhead A) that targets the in-path interface of the terminating Steelhead appliance (Steelhead B).

Figure 1-5. Fixed-Target In-Path Rule to the Steelhead Appliance In-Path Interface

The fixed-target in-path rule specifies that only SYN packets destined for /16, Steelhead B subnets, are allowed through to the Site B Steelhead appliance. All other packets are passed through the Steelhead appliance. You can configure in-path rules using the Riverbed CLI.

To configure Steelhead A

1. On Steelhead A, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path rule fixed-target target-addr dstaddr /16 rulenum
end

Fixed-Target In-Path Rule to a Primary Address

Fixed-target in-path rules whose target is the primary IP address of a remote Steelhead appliance are used only when the remote Steelhead appliance has out-of-path mode enabled. There are several disadvantages to this deployment method, the most important being that traffic to the remote server no longer uses the client IP address. Instead, the server sees connections coming to it from the primary IP address of the out-of-path Steelhead appliance. For details about out-of-path deployments, see Out-of-Path Deployments on page 57.

Network Integration Tools

This section describes Steelhead appliance tools you can use to integrate with your network.

Redundancy and Clustering

You can deploy redundant Steelhead appliances in your network to ensure optimization continues in case of a Steelhead appliance failure. Redundancy and clustering options are available for each type of deployment.

Physical In-Path Deployments

The following redundancy options for physical in-path deployments are available:

- Master and Backup In-Path Deployment. In a master and backup deployment, two Steelhead appliances are placed in physical in-path mode. One of the Steelhead appliances is configured as the master, and the other as the backup. The master Steelhead appliance (usually the Steelhead appliance closest to the LAN) optimizes traffic, and the backup Steelhead appliance constantly checks to make sure the master Steelhead appliance is functioning. If the backup Steelhead appliance cannot reach the master, it begins optimizing new connections until the master comes back up. After the master has recovered, the backup Steelhead appliance stops optimizing new connections and allows the master to resume optimizing. However, the backup Steelhead appliance continues to optimize connections that were made while the master was down.
This is the only time, immediately after a recovery from a master failure, that connections are optimized by both the master Steelhead appliance and the backup.

- Serial Cluster In-Path Deployment. In a serial cluster deployment, two or more Steelhead appliances are placed in physical in-path mode, and the Steelhead appliances concurrently optimize connections. Because the Steelhead appliance closest to the LAN sees the combined LAN bandwidth of all of the Steelhead appliances in the series, serial clustering is only supported on the higher-end Steelhead appliance models. Serial clustering requires configuring peering rules on the Steelhead appliances to prevent them from choosing each other as optimization peers.

NOTE: Deployments that use connection forwarding, where there are multiple Steelhead appliances, each covering different links to the WAN, do not necessarily provide redundancy. For details, see Connection Forwarding on page
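The master/backup handover described above can be sketched as a small state model. The class and method names are illustrative; the point is the asymmetry: the backup takes only new connections while the master is down, and keeps those connections even after the master recovers.

```python
# Sketch of master/backup failover: the backup optimizes new connections
# only while the master is unreachable, and retains connections it picked
# up even after the master comes back.

class BackupSteelhead:
    def __init__(self):
        self.owned = set()  # connections this backup is optimizing

    def new_connection(self, conn_id, master_up):
        """The backup only takes new connections while the master is down."""
        if not master_up:
            self.owned.add(conn_id)
            return "optimized-by-backup"
        return "optimized-by-master"

    def master_recovered(self):
        """On recovery, backup-owned connections stay on the backup; only
        new connections return to the master."""
        return sorted(self.owned)

b = BackupSteelhead()
print(b.new_connection("c1", master_up=True))   # optimized-by-master
print(b.new_connection("c2", master_up=False))  # optimized-by-backup
print(b.master_recovered())                     # ['c2']
```

This overlap, with both appliances optimizing at once, exists only in the window right after a recovery, as the text notes.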

Virtual In-Path Deployments

For virtual in-path deployments, the clustering and redundancy options vary depending on which redirection method is being used. WCCP, the most common virtual in-path deployment method, allows options such as N+1 redundancy and 1+1 redundancy. For details about virtual in-path deployments, see Virtual In-Path Deployments on page 53.

Out-of-Path Deployments

For an out-of-path deployment, two Steelhead appliances, a primary and a backup, can be configured using fixed-target rules that specify traffic for optimization. If the primary Steelhead appliance becomes unreachable, new connections are optimized by the backup Steelhead appliance. If the backup Steelhead appliance is also down, no optimization occurs, and traffic is passed through the network unoptimized. After the primary has recovered, the backup Steelhead appliance stops optimizing new connections and allows the primary to resume optimizing. However, the backup Steelhead appliance continues to optimize connections that were made while the primary was down. This is the only time, immediately after a recovery from a primary failure, that connections are optimized by both the primary Steelhead appliance and the backup. For details about out-of-path deployments, see Out-of-Path Deployments on page 57.

Data Store Synchronization

IMPORTANT: The features of data store synchronization and how it interacts with the system have changed with each release of the RiOS software. If you are running an earlier version of RiOS software (that is, other than v5.x), consult the appropriate documentation for that software release.

Data store synchronization enables pairs of local Steelhead appliances to synchronize their data stores with each other, even while they are optimizing connections. Data store synchronization is typically used to ensure that if a Steelhead appliance fails, no loss of potential bandwidth savings occurs.
This is because the data segments and references are on the backup Steelhead appliance. You can use data store synchronization for physical in-path, virtual in-path, or out-of-path deployments. You enable synchronization on two Steelhead appliances, one as the synchronization master and the other as the synchronization backup. The traffic for data store synchronization is transferred via the Steelhead appliance primary network interfaces, not the in-path interfaces.

TIP: The terms master and backup are used both in data store synchronization and in the master and backup physical in-path deployment. There is no requirement that the master in one role also be the master in the other. Data store synchronization can be used in any deployment, not just in physical in-path deployments.

Data Store Synchronization Requirements

The synchronization master and its backup:

- must have the same hardware model.
- must be running the same version of the RiOS software.

- do not have to be in the same physical location. If they are in different physical locations, they must be connected via a fast, reliable LAN connection with minimal latency.

IMPORTANT: Before you replace a synchronization master for any reason, Riverbed recommends that you make the synchronization backup the new synchronization master. This allows the new master (the former backup) to warm the new (replacement) Steelhead appliance, ensuring that as much data as possible is optimized and none is lost.

Fail-to-Wire and Fail-to-Block

In physical in-path deployments, the Steelhead appliance LAN and WAN ports that traffic flows through are internally connected by circuitry that can take special action in the event of a disk failure, a software crash, a runaway software process, or even loss of power to the Steelhead appliance.

All Steelhead appliance models and in-path network interface cards support fail-to-wire mode, where, in the event of a failure or loss of power, the LAN and WAN ports become internally connected as if they were the ends of a crossover cable, thereby providing uninterrupted transmission of data over the WAN. Fail-to-wire is the default failure mode.

Certain in-path network interface cards also support a fail-to-block mode, where, in the event of a failure or loss of power, the Steelhead appliance LAN and WAN interfaces completely lose link status. When fail-to-block is enabled, a failed Steelhead appliance blocks traffic along its path, forcing traffic to be re-routed onto other paths (where the remaining Steelhead appliances are deployed). For details about fail-to-block mode, see Fail-to-Block Mode on page 42.

For details about Steelhead appliance LAN and WAN ports and physical in-path deployments, see The Logical In-Path Interface on page 40. For details about physical in-path deployments, see Physical In-Path Deployments on page 39.
Link State Propagation

In physical in-path deployments, link state propagation can shorten the recovery time after a link failure. Link state propagation communicates link status between the devices connected to the Steelhead appliance. When this feature is enabled, the link state of each Steelhead appliance LAN/WAN pair is monitored. If either physical port loses link status, the corresponding physical port brings its own link down. This allows a link failure to propagate quickly through a chain of devices, and is useful in environments where link status is used for fast failure detection. For details about physical in-path deployments, see Physical In-Path Deployments on page 39.

Connection Forwarding

In order for a Steelhead appliance to optimize a TCP connection, the Steelhead appliance must see all of the packets for that connection. When you use connection forwarding, multiple Steelhead appliances work together and share information about which connections are being optimized by each Steelhead appliance.

STEELHEAD APPLIANCE DESIGN FUNDAMENTALS

Steelhead appliances that are configured to use connection forwarding with each other are known as connection forwarding neighbors. If a Steelhead appliance sees a packet belonging to a connection that is being optimized by a different Steelhead appliance, it forwards the packet to the correct Steelhead appliance. When a neighbor Steelhead appliance reaches its optimization capacity limit, it stops optimizing new connections, but continues to forward packets for TCP connections being optimized by its neighbors.

You can use connection forwarding in both physical in-path and virtual in-path deployments. In physical in-path deployments, it is used between Steelhead appliances that are deployed on separate parallel paths to the WAN. In virtual in-path deployments, it is used when the redirection mechanism does not guarantee that packets for a TCP connection are always sent to the same Steelhead appliance. This includes WCCP, a commonly used virtual in-path deployment method.

It is usually easier to design physical in-path implementations that do not require connection forwarding. For example, if you have multiple paths to the WAN, you can use a Steelhead appliance model that supports multiple in-path interfaces, instead of using multiple Steelhead appliances with single in-path interfaces. In general, serial deployments are preferred over parallel deployments. For details about deployment best practices, see Best Practices for Steelhead Appliance Deployments on page 37.

The following figure illustrates a site with multiple paths to the WAN. Steelhead A and Steelhead B can be configured as connection forwarding neighbors. This ensures that if a routing or switching change causes TCP connection packets to change paths, either Steelhead A or Steelhead B can forward the packets back to the correct Steelhead appliance.

Figure 1-6.
Connection Forwarding Steelhead Appliances

The following example assumes that the Steelhead appliances have already been configured properly for in-path interception.

To configure Steelhead A

1. On Steelhead A, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   configure terminal
   in-path neighbor enable
   in-path neighbor ip address <in-path IP address of Steelhead B>

To configure Steelhead B

1. On Steelhead B, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   configure terminal
   in-path neighbor enable
   in-path neighbor ip address <in-path IP address of Steelhead A>

When Steelhead A begins optimizing a new TCP connection, it communicates this to Steelhead B, provides the IP addresses and TCP port numbers for the new TCP connection, and specifies a dynamic TCP port on which to forward packets. If Steelhead B sees a packet that matches the connection, it takes the packet, alters its destination IP address to the in-path IP address of Steelhead A, alters its destination TCP port to the dynamic port that Steelhead A specified for the connection, and transmits the packet using its routing table.

TIP: To ensure that connection forwarding neighbors send traffic to each other's in-path IP addresses via the LAN, install a static route for those addresses whose next hop is the LAN gateway device.

Failure Handling Within Connection Forwarding

By default, if a Steelhead appliance loses connectivity to a connection forwarding neighbor, the Steelhead appliance stops attempting to optimize new connections. You can change this behavior with the in-path neighbor allow-failure CLI command. If the allow-failure command is used, a Steelhead appliance continues to optimize new connections, regardless of the state of its neighbors.

For virtual in-path deployments with multiple Steelhead appliances, including WCCP clusters, connection forwarding and the allow-failure CLI command must always be used.
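Putting the pieces together, a neighbor configuration that also tolerates neighbor failure might look like the following sketch. The address 10.0.1.2 is a hypothetical in-path IP address for the neighbor, and the placement of the in-path neighbor allow-failure command alongside the neighbor definition is an assumption; verify the syntax against the Riverbed Command-Line Interface Reference Manual.

```
enable
configure terminal
in-path neighbor enable
in-path neighbor ip address 10.0.1.2
in-path neighbor allow-failure
write memory
```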
This is because certain events, such as network failures and router or Steelhead appliance cluster changes, can cause routers to change the destination Steelhead appliance for TCP connection packets. When this happens, Steelhead appliances must be able to redirect traffic to each other to ensure that optimization continues.

For parallel physical in-path deployments, where multiple paths to the WAN are covered by different Steelhead appliances, connection forwarding is needed because packets for a TCP connection might be routed asymmetrically; that is, the packets for a connection might sometimes go through one path, and other times go through another path. The Steelhead appliances on these paths must use connection forwarding to ensure that the traffic for a TCP connection is always sent to the Steelhead appliance that is performing optimization for that connection.

If the allow-failure CLI command is used in a parallel physical in-path deployment, Steelhead appliances optimize only those connections that are routed through the paths with operating Steelhead appliances. TCP connections that are routed across paths without Steelhead appliances (or with a failed Steelhead appliance) are detected by the asymmetric routing detection feature.

For physical in-path deployments, the allow-failure CLI command is commonly used with the fail-to-block feature (on supported hardware). When fail-to-block is enabled, a failed Steelhead appliance blocks traffic along its path, forcing traffic to be re-routed onto other paths (where the remaining Steelhead appliances are deployed). For an example configuration, see Connection Forwarding with Allow-Failure and Fail-to-Block on page 182.

NOTE: You can configure your Steelhead appliances to automatically detect and report asymmetry within TCP connections as seen by the Steelhead appliance. Asymmetric route auto-detection does not solve asymmetry; it simply detects and reports it, and passes the asymmetric traffic through unoptimized. For details about enabling asymmetric route auto-detection, see the Steelhead Management Console User's Guide.

Best Practices for Steelhead Appliance Deployments

The following list represents best practices for designing and deploying your Steelhead appliances. These best practices are not requirements, but Riverbed recommends that you follow them because they lead to designs that require the least amount of initial and ongoing configuration:

- Use in-path designs. Whenever possible, use a physical in-path deployment, the most common type of Steelhead appliance deployment. Physical in-path deployments are easier to manage and configure than WCCP, PBR, and L4 designs. In-path designs generally require no extra configuration on the connected routers or switches. If desired, you can limit the traffic to be optimized on the Steelhead appliance.
For details, see Physical In-Path Deployments on page 39.

- Use the right cables. To ensure that traffic flows not only when the Steelhead appliance is optimizing traffic, but also when the Steelhead appliance transitions to fail-to-wire mode, use the appropriate crossover or straight-through cable to connect the Steelhead appliance to a router or switch. Verify the cable selection by removing power from the Steelhead appliance and then testing connectivity through it. For details, see Choosing the Right Cables on page 46.

- Set matching duplex speeds. The number one cause of performance issues is a duplex mismatch on the Steelhead appliance WAN or LAN interfaces, or on the interface of a device connected to the Steelhead appliance, most commonly a network device deployed prior to the Steelhead appliance. For details about duplex settings, see Cabling and Duplex on page 46. For details about troubleshooting duplex mismatch, see Physical In-Path Deployments on page 39.

- Minimize the effect of link state transitions. Use the Cisco spanning-tree portfast command on Cisco switches, or similar configuration options on your routers and switches, to minimize the amount of time an interface stops forwarding traffic when the Steelhead appliance transitions to failure mode. For details, see Fail-to-Wire Mode on page 41.

- Use serial rather than parallel designs. Parallel designs are physical in-path designs in which a Steelhead appliance has some, but not all, of the WAN links passing through it, and other Steelhead appliances have the remaining WAN links passing through them. Connection forwarding must be configured for parallel designs. In general, it is easier to use physical in-path designs where one Steelhead appliance has all of the links to the WAN passing through it. For details about serial designs, see Physical In-Path Deployments on page 39. For details about connection forwarding, see Connection Forwarding on page 34.

- Do not optimize transit traffic. Ideally, a Steelhead appliance optimizes only traffic that is initiated or terminated at its local site. The best and easiest way to achieve this is to deploy the Steelhead appliances where the LAN connects to the WAN, and not where LAN-to-LAN or WAN-to-WAN traffic can pass through (or be redirected to) the Steelhead appliance. For details, see Resolving Transit Traffic Issues on page 184.

- Position your Steelhead appliances close to your network endpoints. For optimal performance, minimize latency between Steelhead appliances and their respective clients and servers. Steelhead appliances should be as close as possible to your network endpoints (that is, client-side Steelhead appliances should be as close to your clients as possible, and server-side Steelhead appliances should be as close to your servers as possible).

- Use correct addressing or port transparency modes. Performance trade-offs exist for each of the WAN visibility modes, but the inherent issues with full transparency are generally greater. For details, see WAN Visibility Modes on page 139.

- Use data store synchronization. Regardless of the deployment type or clustering used at a site, data store synchronization can preserve significant bandwidth optimization even after a Steelhead appliance or hard drive failure. For details, see Data Store Synchronization on page 33.

- Use connection forwarding and allow-failure in a WCCP cluster. In a WCCP cluster, use connection forwarding and the allow-failure CLI option between Steelhead appliances. For details, see Connection Forwarding on page 34.

- Avoid using fixed-target in-path rules. Whenever possible, rely on auto-discovery rather than resorting to fixed-target in-path rules. For details about auto-discovery, see The Auto-Discovery Protocol on page 21. For details about fixed-target in-path rules, see Fixed-Target In-Path Rules on page 30.

- Understand in-path rules versus peering rules.
Use in-path rules to modify Steelhead appliance behavior when a connection is initiated. For details, see In-Path Rules on page 25. Use peering rules to modify Steelhead appliance behavior when it sees auto-discovery tagged packets. For details, see Peering Rules on page 26.

- Use Riverbed Professional Services or an authorized Riverbed Partner. Training (both standard and custom) and consultation are available for small, large, and extra-large deployments. For details, go to the Riverbed Professional Services site, or contact proserve@riverbed.com.

CHAPTER 2 Physical In-Path Deployments

In This Chapter

This chapter describes a physical in-path Steelhead appliance deployment. It includes the following sections:

- Overview of In-Path Deployments, next
- The Logical In-Path Interface on page 40
- Basic Steps for Deploying a Physical In-Path Steelhead Appliance on page 47

Overview of In-Path Deployments

In a physical in-path Steelhead appliance deployment, a Steelhead appliance LAN interface connects to a LAN-side device (usually a switch), and a corresponding Steelhead appliance WAN interface connects to a WAN connecting device (usually a router). This allows the Steelhead appliance to see all traffic flowing to and from the WAN and perform optimization. Depending on the Steelhead appliance model and its hardware configuration, multiple pairs of WAN and LAN interfaces can be used simultaneously, connected to multiple switches and routers.

The following figure illustrates the simplest type of physical in-path Steelhead appliance deployment.

Figure 2-1. Single Subnet, Physical In-Path Deployment

Most Steelhead appliance deployments are physical in-path deployments. Physical in-path configurations are the easiest to deploy and do not require the ongoing maintenance that other configurations do (such as the virtual in-path configurations: WCCP, PBR, L4 redirection, and so forth).

The Logical In-Path Interface

All Steelhead appliances ship with at least one pair of ports that are used for in-path deployments. This pair of ports forms the logical in-path interface. The logical in-path interface acts as an independent, two-port bridge with its own IP address. The following figure illustrates the Steelhead appliance logical in-path interface and how it is physically connected to network devices in a single subnet, in-path deployment.

Figure 2-2. The Logical In-Path Interface in a Single Subnet In-Path Deployment

The simplest in-path Steelhead appliance has two IP addresses:

- Primary. Used for system management, data store synchronization, and SNMP.
- In-Path0_0. Used for optimized data transmission.

Several types of network interface cards are available for Steelhead appliances. The desktop Steelhead appliances have network bypass functionality built in. With 1U and 3U systems, you can choose the type of bypass card. Steelhead appliances can have both copper and fiber Ethernet bypass cards. For details about bypass cards, see the Bypass Card Installation Guide.

Failure Modes

All Steelhead appliance models and in-path network interface cards support fail-to-wire mode. In the event of a disk failure, a software crash, a runaway software process, or even loss of power to the Steelhead appliance, the LAN and WAN ports that form the logical in-path interface become internally connected as if they were the ends of a crossover cable, thereby providing uninterrupted transmission of data over the WAN.
Certain in-path network interface cards also support a fail-to-block mode, where, in the case of a failure or loss of power, the Steelhead appliance LAN and WAN interfaces completely lose link status, blocking traffic along the path and forcing it to be re-routed onto other paths (where the remaining Steelhead appliances are deployed). The default failure mode is fail-to-wire mode.

For a list of in-path network interface cards or bypass cards that support fail-to-block mode, see Fail-to-Block Mode on page 42.

If a Steelhead appliance transitions to fail-to-wire or fail-to-block mode, you are notified in the following ways:

- The Intercept/Bypass status light is active. For details about the status lights for each of the bypass cards, see the Bypass Card Installation Guide.
- Critical displays in the Management Console status bar.
- SNMP traps are sent (if you have set this option).
- The event is logged to system logs (syslog) (if you have set this option).
- Email notifications are sent (if you have set this option).

Fail-to-Wire Mode

Fail-to-wire mode allows the Steelhead appliance WAN and LAN ports to serve as an Ethernet crossover cable. In fail-to-wire mode, Steelhead appliances cannot view or optimize traffic; instead, all traffic is passed through the Steelhead appliance unoptimized. All Steelhead appliance in-path interfaces support fail-to-wire mode, and it is the default setting for Steelhead appliances.

When a Steelhead appliance transitions from normal operation to fail-to-wire mode, Steelhead appliance circuitry physically moves in order to electrically connect the Steelhead appliance LAN and WAN ports to each other, and physically disconnects these two ports from the rest of the Steelhead appliance. During the transition to fail-to-wire mode, devices connected to the Steelhead appliance momentarily see their links to the Steelhead appliance go down, then immediately come back up. After the transition, traffic resumes flowing as quickly as the connected devices are able. For example, spanning-tree configuration and routing-protocol configuration influence how quickly traffic resumes flowing. Traffic that was passed through is uninterrupted. Traffic that was being optimized might be interrupted, depending on the behavior of the application-layer protocols.
When connections are restored, the traffic resumes flowing, although without optimization. After the Steelhead appliance returns to normal operation, it transitions the Steelhead appliance LAN and WAN ports out of fail-to-wire mode. The devices connected to the Steelhead appliance perceive this as another link state transition. After they are back online, new connections are optimized. However, connections made during the failure are not optimized. To force all connections to be optimized, you can enable the kickoff feature, which resets established connections to force them to go through the connection creation process again. For this reason, the kickoff feature should not be enabled in production deployments; generally, connections are short lived and kickoff is not necessary. For details about enabling the kickoff feature, see the Steelhead Management Console User's Guide.

Fail-to-Wire Mode Effect on Connected Devices

When a Steelhead appliance transitions to fail-to-wire mode, the transition can affect devices connected to the Steelhead appliance. For example, one common implication pertains to the spanning-tree protocol. In many physical in-path deployments, the Steelhead appliance LAN port is connected to an Ethernet switch, and the Steelhead appliance WAN port is connected to a router.

When a Steelhead appliance transitions from bridging mode to failure mode, a switch might force the port that is connected to the Steelhead appliance to go through the non-forwarding states of spanning tree. This can result in packet delay or packet loss. You can resolve this issue by making configuration modifications on your switch. Depending on your switch vendor, there are many different methods to alleviate this issue, ranging from skipping the non-forwarding states (for example, running the spanning-tree portfast command on Cisco switches), to using newer 802.1d STP protocols that converge faster on link transitions.

RiOS v5.0.x and later has this mode transition issue only when the Steelhead appliance experiences a power loss. RiOS v4.1 and earlier has this transition issue when the Steelhead appliance experiences a power loss or software failure, or when the optimization service is restarted.

Fail-to-Block Mode

Some network interfaces support fail-to-block mode. In fail-to-block mode, if the Steelhead appliance has an internal software failure or power loss, the Steelhead appliance LAN and WAN interfaces power down and stop bridging traffic. This is useful only if the network has a routing or switching infrastructure that can automatically divert traffic off of the link once the failed Steelhead appliance blocks it. You can use fail-to-block with connection forwarding, the allow-failure CLI command, and an additional Steelhead appliance on another path to the WAN to achieve redundancy. For details, see Connection Forwarding with Allow-Failure and Fail-to-Block on page 182.
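For example, skipping the non-forwarding states on a Cisco IOS switch might look like the following sketch. The interface name is hypothetical, and portfast should be applied only to ports that cannot create a switching loop:

```
interface GigabitEthernet0/1
 description Port facing Steelhead LAN interface
 spanning-tree portfast
```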
The following Steelhead appliance in-path interfaces support fail-to-block mode:

- Two-Port Copper Gigabit-Ethernet Bypass Card-B
- Four-Port SX Fiber Gigabit-Ethernet Bypass Card
- Six-Port Copper Gigabit-Ethernet Bypass Card
- Four-Port Copper Gigabit-Ethernet PCI-E Bypass Card Series XX50
- Two-Port SX Fiber Gigabit-Ethernet PCI-E Bypass Card Series XX50
- Four-Port SX Fiber Gigabit-Ethernet PCI-E Bypass Card Series XX50

The desktop Steelhead appliance models (50, 100, 200, and 300) do not support fail-to-block mode.

To enable fail-to-block mode

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   configure terminal
   no interface inpath0_0 fail-to-bypass enable
   write memory

NOTE: You must save your changes to memory for your changes to take effect.

To change from fail-to-block mode back to fail-to-wire mode

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   configure terminal
   interface inpath0_0 fail-to-bypass enable
   write memory

NOTE: You must save your changes to memory for your changes to take effect.

To check failure mode status

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   show interface inpath0_0

In-Path IP Address Selection

An IP address is required for each Steelhead appliance in-path interface. When using correct addressing or port transparency, the IP address must be reachable by remote Steelhead appliances for optimization to occur. For details about correct addressing and port transparency, see WAN Visibility Modes on page 139.

In some environments, the link between the switch and the router might reside in a subnet that has no available IP address. There are several ways to accommodate the IP address requirement, including:

- Creating a secondary interface, with a new subnet and IP address, on the router or switch, and pulling the Steelhead appliance in-path interface IP address from the new subnet.
- Creating a new 802.1Q VLAN interface and subnet on the router and switch link, and pulling the Steelhead appliance in-path interface IP address from the new subnet. This also requires entering the appropriate in-path VLAN tag on the Steelhead appliance.

NOTE: With RiOS v5.0.x and later, you can deploy Steelhead appliances so that the in-path interface IP address is not actually used. This deployment option can be useful for integrating with certain network configurations, such as NAT. However, an IP address must still be configured for each enabled in-path interface. For details, see Configuring WAN Visibility Modes on page 149.
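On a Cisco IOS router, either approach might look like the following sketch. The interface names, VLAN ID, and the 192.168.50.0/24 subnet are hypothetical values chosen for illustration:

```
! Option 1: secondary IP address and subnet on an existing interface
interface FastEthernet0/0
 ip address 192.168.50.1 255.255.255.0 secondary

! Option 2: new 802.1Q subinterface and subnet
interface FastEthernet0/0.50
 encapsulation dot1Q 50
 ip address 192.168.50.1 255.255.255.0
```

The Steelhead appliance in-path interface would then take an unused address from the new subnet (for example, 192.168.50.2), with the matching VLAN tag configured on the in-path interface if Option 2 is used.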
VLAN Tracking

RiOS v3.0.x and later supports per-connection VLAN tracking for physical in-path deployments. VLAN tracking enables Steelhead appliances to maintain the VLAN ID for each optimized connection. Enabling VLAN tracking ensures that packets remain on the correct VLAN.

Each of the Steelhead appliance in-path interfaces is configured with a default VLAN ID, called the in-path interface VLAN ID. Steelhead appliances use the in-path interface VLAN ID when they generate traffic for a connection that is not known to be on any particular VLAN. You can assign one in-path interface VLAN ID to each Steelhead appliance in-path interface.

The following figure illustrates how Steelhead appliances use VLAN IDs when VLAN tracking is enabled. In this example, the original VLAN ID is preserved between the switch and the Steelhead appliance. The inpath0_0 VLAN ID 100 is used when the Steelhead appliance generates traffic destined for its peer on the other side of the WAN.

Figure 2-3. VLAN Tracking Enabled on the Steelhead Appliance

In this example:

- The client generates a SYN.
- The switch receives the SYN and tags it with VLAN ID 200.
- The Steelhead appliance:
  - is deployed in-path.
  - uses correct addressing.
  - has VLAN tracking enabled.
  - remembers that this connection is on VLAN 200.
  - has its inpath0_0 VLAN ID set to 100, which it uses to tag Steelhead-to-Steelhead traffic.

The process is the same in the reverse direction (server-to-client).

Use VLAN tracking if you deploy Steelhead appliances on a trunk link and you want the Steelhead appliances to preserve VLAN tags when communicating with host machines. If you need the VLAN IDs to be preserved between your Steelhead appliances, you can configure full address transparency. For details, see Full Address Transparency on page 143. Best practice is to have all packets on a connection go and return on the same VLAN.

To configure a Steelhead appliance that is on a trunk

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   configure terminal
   in-path simplified routing all
   in-path vlan-conn-based
   in-path mac-match-vlan
   in-path interface inpathX_Y vlan Z
   write memory

NOTE: You must save your changes to memory for your changes to take effect.

Using tcpdump on Trunk Links

Care is required when taking network traces on trunk links. If a filter string is configured to restrict the captured packets to a specified set (based on IP addresses, ports, and so forth), by default tcpdump does not capture packets with an 802.1Q tag. To capture packets with an 802.1Q tag, the filter string must be prefixed with the keyword vlan. For example: tcpdump -i wanX_Y vlan and host <IP address>. For details, see the Riverbed Command-Line Interface Reference Manual.

Link State Propagation

In physical in-path deployments, link state propagation helps communicate link status between the devices connected to the Steelhead appliance. When this feature is enabled, the link state of each Steelhead appliance LAN and WAN pair is monitored. If either physical port loses link status, the link of the corresponding physical port is also brought down. For example, in a simple physical in-path deployment in which the Steelhead appliance is connected to a router on its WAN port and a switch on its LAN port, if the cable to the router is disconnected, the Steelhead appliance deactivates the link on its LAN port. This causes the switch interface that is connected to the Steelhead appliance to also lose link. (The reverse also occurs: if the cable to the switch is disconnected, the router interface that is connected to the Steelhead appliance loses link.)
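The vlan keyword behavior can be illustrated with the following command lines; the interface name wan0_0 and the host address 10.0.0.5 are hypothetical:

```
# Captures only 802.1Q-tagged packets to or from the host
tcpdump -i wan0_0 vlan and host 10.0.0.5

# Captures both tagged and untagged packets for the same host
tcpdump -i wan0_0 'host 10.0.0.5 or (vlan and host 10.0.0.5)'
```

Once the vlan keyword appears in a capture filter, the offsets used by the primitives that follow it shift by the size of the 802.1Q header, which is why the untagged match is placed before the vlan clause in the second example.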
Link state propagation helps a link failure propagate quickly through the chain of devices, and is useful in environments where link status is used as a fast-fail trigger. Link state propagation applies to either all or none of the interfaces of a Steelhead appliance; it cannot be used to selectively activate an in-path interface.

To enable link state propagation on a Steelhead appliance

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

   enable
   configure terminal
   in-path lsp enable
   write memory

NOTE: You must save your changes to memory for your changes to take effect.

Cabling and Duplex

In most physical in-path deployments, the Steelhead appliance is connected to a router and a switch: the Steelhead appliance WAN port is connected to the router with a crossover cable, and the Steelhead appliance LAN port is connected to the switch with a straight-through cable. For details about the in-path interface, see The Logical In-Path Interface on page 40. The number one cause of poor performance is a duplex mismatch.

Choosing the Right Cables

The following summarizes the correct cable (either crossover or straight-through) to use with the Steelhead appliance:

- Steelhead appliance to Steelhead appliance: crossover
- Steelhead appliance to router: crossover
- Steelhead appliance to switch: straight-through
- Steelhead appliance to host: crossover

To avoid a duplex mismatch, you must manually configure the same speed for your:

- router
- switch
- Steelhead appliance primary interface
- Steelhead appliance LAN interface
- Steelhead appliance WAN interface

Riverbed recommends that you do not rely on Auto MDI/MDI-X to auto-sense the cable type. The installation might work while the Steelhead appliance is optimizing traffic, but it might not if the in-path bypass card transitions to fail-to-wire mode.
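On the router or switch side, manually pinning speed and duplex typically looks like the following Cisco IOS sketch; the interface name is hypothetical, and the same values must also be set on the Steelhead appliance interfaces:

```
interface FastEthernet0/1
 speed 100
 duplex full
```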

The following can be signs of a duplex mismatch:

- You cannot connect to an attached device.
- You can connect to a device when you choose auto-negotiation, but you cannot connect to that same device when you manually set the speed or duplex.
- Slow performance across the network.

To verify slow performance on the network, go to the Reports - Networking - Interface Counters page of the Management Console. Look for positive values for the following fields:

- Discards
- Errors
- Overruns
- Frame
- Carrier counts
- Collisions

The above values are zero (0) on a healthy network, unless you use half duplex, which Riverbed does not recommend.

Basic Steps for Deploying a Physical In-Path Steelhead Appliance

Perform the following basic steps to deploy a physical in-path Steelhead appliance.

Figure 2-4. Simple, Physical In-Path Deployment

1. Determine the speed for the:

- switch interface
- router interface
- Steelhead appliance primary interface
- Steelhead appliance WAN interface
- Steelhead appliance LAN interface

Riverbed recommends the following speeds:

- Fast Ethernet interfaces: 100 megabits, full duplex
- Gigabit interfaces: 1000 megabits, full duplex

2. Determine the IP addresses for the Steelhead appliance. A Steelhead appliance deployed in physical in-path mode requires two IP addresses, one each for the:

- Steelhead appliance in-path interface
- Steelhead appliance primary interface (this interface is used for managing the Steelhead appliance)

3. Manually configure the speed for the:

- switch interface
- router interface
- Steelhead appliance primary interface
- Steelhead appliance WAN interface
- Steelhead appliance LAN interface

IMPORTANT: Riverbed strongly recommends that you manually configure the speed for each interface.

High Availability Deployments

You can increase optimization and provide high availability by deploying several Steelhead appliances back-to-back in an in-path configuration to create a serial cluster. Serial clusters are supported only on Models 5000, 5010, 5520, 6020, and

Appliances in a serial cluster process the peering rules you specify in a spill-over fashion. When the maximum number of TCP connections for a Steelhead appliance is reached, the appliance stops intercepting new connections. The next Steelhead appliance in the serial cluster intercepts new connections, if it has not reached its own maximum number of connections.

The in-path peering rules and in-path pass-through rules tell the Steelhead appliances in a serial cluster not to intercept connections between one another. When you enable peering for a Steelhead appliance in a serial cluster, the appliance automatically intercepts and optimizes traffic for all of the appliances in the cluster. When the maximum number of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new connections and passes them on to the next Steelhead appliance in the cluster automatically.
The peering rules define what happens when a Steelhead appliance receives an auto-discovery probe from another Steelhead appliance in the same cluster.
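The spill-over behavior described above can be sketched as a simple selection: a new connection is intercepted by the first appliance in the chain that is still under its admission limit, and passed through if every appliance is full. The appliance names and connection limits below are illustrative assumptions.

```python
# Sketch of serial-cluster spill-over: a new connection is intercepted by the
# first appliance in the chain that is under its connection limit.
# Names and limits are illustrative, not taken from the guide.

def intercepting_appliance(cluster, active):
    """cluster: ordered list of (name, max_conns); active: {name: current}.
    Returns the appliance that intercepts the next connection, or None if
    every appliance is full (the connection is then passed through)."""
    for name, max_conns in cluster:
        if active.get(name, 0) < max_conns:
            return name
    return None

if __name__ == "__main__":
    cluster = [("steelhead1", 2000), ("steelhead2", 2000), ("steelhead3", 2000)]
    print(intercepting_appliance(cluster, {"steelhead1": 1200}))        # steelhead1
    print(intercepting_appliance(cluster, {"steelhead1": 2000}))        # steelhead2
    print(intercepting_appliance(cluster, {n: m for n, m in cluster}))  # None
```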

You can deploy serial clusters on the client or server side of the network.

Figure 2-5. Serial Cluster Deployment

In this example, Steelhead 1, Steelhead 2, and Steelhead 3 are configured with in-path peering rules so they do not answer probe requests from one another, and with in-path rules so they do not accept their own WAN connections. Similarly, Steelhead 4, Steelhead 5, and Steelhead 6 are configured so that they do not answer probes from one another and do not intercept inner connections from one another. The Steelhead appliances are configured to perform auto-discovery to find an available peer Steelhead appliance on the other side of the WAN.

A Basic Serial Cluster Deployment

The following figure illustrates how to configure a cluster of three in-path Steelhead appliances in a data center.

Figure 2-6. Serial Cluster in a Data Center

This example uses the following parameters:

- Steelhead 1 in-path IP address is
- Steelhead 2 in-path IP address is
- Steelhead 3 in-path IP address is

In this example, you configure each Steelhead appliance with in-path peering rules to prevent peering with the other Steelhead appliances in the cluster, and with in-path rules to not optimize connections originating from other Steelhead appliances in the same cluster.

To configure Steelhead 1

1. On Steelhead 1, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering rule pass peer rulenum 1
in-path peering rule pass peer rulenum 1
in-path rule pass-through srcaddr /32 rulenum 1
in-path rule pass-through srcaddr /32 rulenum 1
wr mem
show in-path peering rules
Rule Type  Source Network  Dest Network  Port  Peer Addr
     pass  *               *             *
     pass  *               *             *
def  auto  *               *             *     *
show in-path rules
Rule Type  Source Addr  Dest Addr  Port  Target Addr  Port
     pass  /24          *          *
     pass  /24          *          *
def  auto  *            *          *

NOTE: You must save your changes to memory for your changes to take effect.

To configure Steelhead 2

1. On Steelhead 2, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

NOTE: The following in-path pass-through rules are not required, because the Steelhead-to-Steelhead connections are made through port 7800, which is the default pass-through port. However, if port 7800 is not marked as a pass-through port, these pass-through rules are necessary.

enable
configure terminal
in-path peering rule pass peer rulenum 1
in-path peering rule pass peer rulenum 1
in-path rule pass-through srcaddr /32 rulenum 1
in-path rule pass-through srcaddr /32 rulenum 1
wr mem
show in-path peering rules
Rule Type  Source Network  Dest Network  Port  Peer Addr
     pass  *               *             *
     pass  *               *             *
def  auto  *               *             *     *
show in-path rules
Rule Type  Source Addr  Dest Addr  Port  Target Addr  Port

     pass  /24          *          *
     pass  /24          *          *
def  auto  *            *          *

NOTE: You must save your changes to memory for your changes to take effect.

To configure Steelhead 3

1. On Steelhead 3, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering rule pass peer rulenum 1
in-path peering rule pass peer rulenum 1
in-path rule pass-through srcaddr /32 rulenum 1
in-path rule pass-through srcaddr /32 rulenum 1
wr mem
show in-path peering rules
Rule Type  Source Network  Dest Network  Port  Peer Addr
     pass  *               *             *
     pass  *               *             *
def  auto  *               *             *     *
show in-path rules
Rule Type  Source Addr  Dest Addr  Port  Target Addr  Port
     pass  /24          *          *
     pass  /24          *          *
def  auto  *            *          *

NOTE: You must save your changes to memory for your changes to take effect.
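The per-appliance configuration pattern above is the same for every member of a serial cluster: each appliance gets a pass peering rule and a pass-through in-path rule for every other appliance. A sketch of generating those CLI lines follows; the in-path IP addresses are hypothetical (the example addresses in the guide are site-specific), and this sketch numbers the rules sequentially for illustration.

```python
# Sketch: emit the CLI lines for one member of a serial cluster, following
# the pattern above -- one "pass" peering rule and one pass-through in-path
# rule per *other* appliance in the cluster. IPs below are hypothetical.

def cluster_rules(my_ip, all_ips):
    lines = []
    peers = [ip for ip in all_ips if ip != my_ip]
    for rule, peer in enumerate(peers, start=1):
        lines.append(f"in-path peering rule pass peer {peer} rulenum {rule}")
    for rule, peer in enumerate(peers, start=1):
        lines.append(f"in-path rule pass-through srcaddr {peer}/32 rulenum {rule}")
    return lines

if __name__ == "__main__":
    ips = ["10.0.1.1", "10.0.1.2", "10.0.1.3"]
    for line in cluster_rules("10.0.1.1", ips):
        print(line)
```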


CHAPTER 3  Virtual In-Path Deployments

In This Chapter

This chapter describes virtual in-path deployments and summarizes the basic steps for configuring an in-path, load-balanced, Layer-4 switch deployment. It includes the following sections:

- Overview of Virtual In-Path Deployments, next
- Configuring an In-Path, Load Balanced, Layer-4 Switch Deployment on page 54
- Configuring NetFlow in Virtual In-Path Deployments on page 56

This chapter assumes you are familiar with:

- The Management Console. For details, see the Steelhead Management Console User's Guide.
- The RiOS CLI. For details, see the Riverbed Command-Line Interface Reference Manual.
- The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.

This chapter provides the basic steps for configuring one type of virtual in-path deployment. It does not provide detailed procedures for all virtual in-path deployments. Use this chapter as a general guide to virtual in-path deployments. For details about the factors you must consider before you design and deploy the Steelhead appliance in a network environment, see Choosing the Right Steelhead Appliance on page 19.

Overview of Virtual In-Path Deployments

In a virtual in-path deployment, the Steelhead appliance is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface, and the LAN interface is not used. This deployment differs from a physical in-path deployment in that a packet redirection mechanism directs packets to Steelhead appliances that are not in the physical path of the client or server. Redirection mechanisms include:

- Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you have multiple Steelhead appliances in your network to manage large bandwidth requirements. For details, see Configuring an In-Path, Load Balanced, Layer-4 Switch Deployment, next.

- Hybrid. A hybrid deployment is one in which the Steelhead appliance is deployed in either physical or virtual in-path mode and also has out-of-path mode enabled. A hybrid deployment is useful where the Steelhead appliance must be referenced from remote sites as an out-of-path device (for example, to bypass intermediary Steelhead appliances). For details, see Chapter 4, Out-of-Path Deployments.

- PBR. PBR enables you to redirect traffic to a Steelhead appliance that is configured as a virtual in-path device. PBR allows you to define policies that override routing behavior. For example, instead of routing a packet based on routing table information, the router routes it based on the policy applied to the router. You define policies to redirect traffic to the Steelhead appliance and policies to avoid loopback. For details, see Chapter 6, PBR Deployments.

- WCCP. WCCP was originally implemented on Cisco routers, multi-layer switches, and Web caches to redirect HTTP requests to local Web caches (Version 1). Version 2, which is supported on Steelhead appliances, can redirect any type of connection from multiple routers to multiple Web caches. For example, if you have multiple routers or there is no in-path place for the Steelhead appliance, you can place the Steelhead appliance in virtual in-path mode so that it works together with the router. For details, see Chapter 5, WCCP Deployments.

Figure 3-1. Virtual In-Path Deployment on the Server Side of the Network

Configuring an In-Path, Load Balanced, Layer-4 Switch Deployment

An in-path, load-balanced, Layer-4 switch deployment serves high-traffic environments or environments with large numbers of active TCP connections. It handles failures, scales easily, and supports all protocols. When you configure the Steelhead appliances behind a Layer-4 switch, you define the Steelhead appliances as a pool to which the Layer-4 switch redirects client and server traffic.
Only one WAN interface on the Steelhead appliance is connected to the Layer-4 switch, and the Steelhead appliance is configured to send and receive data through that interface.

The following figure illustrates the server side of the network, where load balancing is required.

Figure 3-2. In-Path, Load-Balanced, Layer-4 Switch Deployment

Basic Steps (Client-Side)

Configure the client-side Steelhead appliance as an in-path device. For details, see the Steelhead Appliance Installation and Configuration Guide.

Basic Steps (Server-Side)

Perform the following steps for each Steelhead appliance in the cluster.

1. Install and power on the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.

2. Connect to the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide. Make sure you properly connect to the Layer-2 switch. For example:

- On Steelhead A, plug the straight-through cable into the Primary port of the Steelhead appliance and connect it to the LAN-side switch.
- On Steelhead B, plug the straight-through cable into the Primary port of the Steelhead appliance and connect it to the LAN-side switch.

3. Configure the Steelhead appliance in an in-path configuration. For details, see the Steelhead Management Console User's Guide.

4. Connect the Layer-4 switch to the Steelhead appliance:

- On Steelhead A, plug the straight-through cable into the WAN port of the Steelhead appliance and the Layer-4 switch.
- On Steelhead B, plug the straight-through cable into the WAN port of the Steelhead appliance and the Layer-4 switch.

5. Connect to the Management Console. For details, see the Steelhead Management Console User's Guide.

6. Navigate to the Configure - Optimization - General Service Settings page and enable Layer-4 switch support. For example, click Enable In-Path Support and Enable L4/PBR/WCCP Support.

7. Apply and save the new configuration in the Management Console.

8. Configure your Layer-4 switch. For details, refer to your switch documentation.

9. Navigate to the Configure - Maintenance - Service page and restart the optimization service.

10. View performance reports and system logs.

Configuring NetFlow in Virtual In-Path Deployments

The Steelhead appliance supports the export of NetFlow v5 flows to any compatible NetFlow v5 collector. During NetFlow export, the NetFlow datagram provides information such as the interface index that corresponds to the input and output traffic. An administrator can use the interface index to determine how much traffic is flowing from the LAN to the WAN and from the WAN to the LAN.

In virtual in-path deployments, such as the server side of the network shown in Figure 3-1 on page 54, traffic moves in and out of the same WAN interface; the LAN interface is not used. As a result, when the Steelhead appliance exports data to a NetFlow collector, all traffic has the WAN interface index. Though it is technically correct for all traffic to have the WAN interface index, because the input and output interfaces are the same, this makes it impossible for an administrator to use the interface index to distinguish between LAN-to-WAN and WAN-to-LAN traffic.

You can work around this issue by using the CLI to turn on the Steelhead appliance fake index feature, which inserts the correct interface index before exporting data to a NetFlow collector. The fake index feature works only for optimized traffic, not for unoptimized or passed-through traffic. This feature can be configured only using the CLI.

To configure the fake index feature

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. Configure the Steelhead appliance to capture optimized LAN traffic on the wan0_0 interface for NetFlow. For example, at the system prompt enter the following command:

ip flow-export destination interface wan0_0 capture optimized-lan

3. Turn on the fake index feature on the Steelhead appliance. For example:

ip flow-export destination interface wan0_0 fakeindex on
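Conceptually, the fake index feature rewrites the LAN-facing side of each optimized flow record so the collector can tell the two directions apart. The sketch below models a NetFlow v5 record as a small dict and uses hypothetical ifindex values; it illustrates the effect of the feature, not the RiOS implementation.

```python
# Sketch of what the fakeindex feature accomplishes for a NetFlow v5 record:
# in a virtual in-path deployment both the input and output interface
# indexes are the WAN ifindex, so the LAN-facing side of each optimized
# flow is rewritten with a nominal LAN ifindex. Index values are
# hypothetical and for illustration only.

WAN_IFINDEX = 2   # assumed SNMP ifindex of wan0_0
LAN_IFINDEX = 1   # assumed ifindex substituted on the LAN-facing side

def fake_index(record, direction):
    """record: {'input': int, 'output': int}; direction: 'lan-to-wan' or
    'wan-to-lan'. Returns a copy with the LAN side re-indexed so a
    collector can distinguish the two traffic directions."""
    fixed = dict(record)
    if direction == "lan-to-wan":
        fixed["input"] = LAN_IFINDEX   # traffic entered on the LAN side
    else:
        fixed["output"] = LAN_IFINDEX  # traffic exited on the LAN side
    return fixed

if __name__ == "__main__":
    raw = {"input": WAN_IFINDEX, "output": WAN_IFINDEX}
    print(fake_index(raw, "lan-to-wan"))  # {'input': 1, 'output': 2}
    print(fake_index(raw, "wan-to-lan"))  # {'input': 2, 'output': 1}
```

With the corrected indexes, the collector's per-interface accounting separates LAN-to-WAN from WAN-to-LAN volume again.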

CHAPTER 4  Out-of-Path Deployments

In This Chapter

This chapter describes out-of-path deployments and summarizes the basic steps for configuring them. It includes the following sections:

- Overview of Out-of-Path Deployments, next
- Out-of-Path Deployment Example on page 59

For details about the factors you must consider before you design and deploy the Steelhead appliance in a network environment, see Choosing the Right Steelhead Appliance on page 19.

NOTE: Riverbed refers to WCCP and PBR deployments as virtual in-path deployments. This chapter discusses out-of-path deployments, which do not include WCCP or PBR deployments.

This chapter assumes you are familiar with:

- The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.

This chapter provides the basic steps for out-of-path network deployments. It does not provide detailed procedures. Use this chapter as a general guide to these deployments.

Overview of Out-of-Path Deployments

In an out-of-path deployment, only the Steelhead appliance primary interface is required to connect to the network. The Steelhead appliance can be connected anywhere in the LAN. There is no redirecting device in an out-of-path Steelhead appliance deployment. Instead, you configure fixed-target in-path rules for the client-side Steelhead appliance. The fixed-target in-path rules point to the primary IP address of the out-of-path Steelhead appliance. The out-of-path Steelhead appliance uses its primary IP address when communicating with the server. The remote Steelhead appliance must be deployed in either physical or virtual in-path mode. For an example, see A Fixed-Target In-Path Rule to an Out-Of-Path Steelhead Appliance Primary IP Address on page 59.

You can achieve redundancy by deploying two Steelhead appliances out-of-path at one location and using both of their primary IP addresses in the remote Steelhead appliance fixed-target rule. The fixed-target rule allows the specification of a primary and a backup Steelhead appliance. If the primary Steelhead appliance becomes unreachable, the remote Steelhead appliances use the backup Steelhead appliance until the primary comes back online. If both out-of-path Steelhead appliances in a specific fixed-target rule are unavailable, the remote Steelhead appliance passes this traffic through unoptimized; it does not look for another matching in-path rule in the list.

You can use data store synchronization between the out-of-path Steelhead appliances for additional benefits in case of a failure. For details, see Data Store Synchronization on page 33.

You can also implement load balancing with out-of-path deployments by using multiple out-of-path Steelhead appliances and configuring different remote Steelhead appliances to use different target out-of-path Steelhead appliances.

You can target an out-of-path Steelhead appliance with a fixed-target rule. This can be done simultaneously with physical in-path and virtual in-path deployments; this is referred to as a hybrid deployment. For details about fixed-target in-path rules, see Fixed-Target In-Path Rules on page 30.

Limitations of Out-of-Path Deployments

While the ease of deploying an out-of-path Steelhead appliance might seem appealing, this method has serious disadvantages:

- Connections initiated from the site with the out-of-path Steelhead appliance cannot be optimized.
- Servers at the site see the optimized traffic coming not from a client IP address, but from the out-of-path Steelhead appliance primary IP address. In certain network environments, a change in the source IP address might be problematic.
For some commonly used protocols, Steelhead appliances automatically make protocol-specific adjustments to account for the IP address change. For example, with CIFS, MAPI, and FTP, there are various places where the IP address of the connecting client can be used within the protocol itself. Because the Steelhead appliance uses application-aware optimization for these protocols, it is able to make the appropriate changes within optimized connections and ensure correct functioning when used in out-of-path deployments. However, there are protocols, such as NFS, that cannot function appropriately when attempting optimization in an out-of-path configuration.

IMPORTANT: If you use out-of-path deployments, ensure correct operation by being selective about which applications you optimize. Even with protocols where RiOS specifically adjusts for the change in source IP address on the LAN, there might be authentication, IDS, or IPS systems that generate alarms when seeing this change.

Because of these disadvantages specific to out-of-path deployments, and the requirement of using fixed-target rules, this type of deployment is not as widely used as physical or virtual in-path deployments. It is primarily used as a way to rapidly deploy a Steelhead appliance at a site with very complex or numerous connections to the WAN.
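The primary/backup failover behavior of a fixed-target rule described earlier in this section can be sketched as a simple selection: use the primary target if it is reachable, fall back to the backup, and otherwise pass the traffic through unoptimized without consulting further in-path rules. The target names below are hypothetical, and reachability is modeled as a plain set for illustration.

```python
# Sketch of fixed-target rule failover: primary if reachable, else backup,
# else pass through unoptimized (no other in-path rule is consulted).
# Target names are hypothetical; in practice reachability is determined by
# connection attempts to the peer appliance.

def pick_target(primary, backup, reachable):
    """Return the out-of-path target to use, or None to pass through."""
    if primary in reachable:
        return primary
    if backup is not None and backup in reachable:
        return backup
    return None  # both unavailable: traffic is passed through unoptimized

if __name__ == "__main__":
    print(pick_target("oop-a", "oop-b", {"oop-a", "oop-b"}))  # oop-a
    print(pick_target("oop-a", "oop-b", {"oop-b"}))           # oop-b
    print(pick_target("oop-a", "oop-b", set()))               # None
```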

Out-of-Path Deployment Example

The following figure illustrates a scenario where fixed-target in-path rules are configured for an out-of-path Steelhead appliance primary interface.

Figure 4-1. A Fixed-Target In-Path Rule to an Out-Of-Path Steelhead Appliance Primary IP Address

In this example, you configure Steelhead A with a fixed-target in-path rule specifying that traffic destined to a particular Web server at the data center is optimized by the out-of-path Steelhead B. The TCP connection between the out-of-path Steelhead appliance, Steelhead B, and the server uses the Steelhead appliance primary IP address as the source, instead of the client IP address.

To configure Steelhead A

1. On Steelhead A, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path rule fixed-target target-addr dstaddr /24 dstport 80 rulenum
end

To configure Steelhead B

1. On Steelhead B, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
out-of-path enable

CHAPTER 5  WCCP Deployments

In This Chapter

This chapter describes how to configure WCCP to redirect traffic to one or more Steelhead appliances. It includes the following sections:

- Overview of WCCP, next
- Configuring WCCP on page 68
- Configuring Additional WCCP Features on page 78
- Verifying and Troubleshooting WCCP Configurations on page 85

This chapter assumes you are familiar with:

- The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.
- The RiOS CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

This chapter provides basic information about WCCP network deployments and examples for configuring WCCP deployments. Use this chapter as a general guide to WCCP deployments. For details about the factors you must consider before you design and deploy the Steelhead appliance in a network environment, see Choosing the Right Steelhead Appliance on page 19. For details about WCCP, see the Cisco documentation Web site.

Overview of WCCP

This section provides an overview of WCCP. It includes the following sections:

- Cisco Hardware and IOS Requirements, next
- The Pros and Cons of WCCP on page 62
- WCCP Fundamentals on page 63

WCCP Version 1 was originally implemented on Cisco routers, multi-layer switches, and Web caches to redirect HTTP requests to local Web caches. WCCP Version 2, which Steelhead appliances support, can redirect any type of connection from multiple routers to multiple Web caches. Steelhead appliances deployed with WCCP can interoperate with remote Steelhead appliances deployed in any way, including WCCP, PBR, in-path, and out-of-path.

Cisco Hardware and IOS Requirements

WCCP requires either a Cisco router or a switch. The most important factors in a successful WCCP implementation are the Cisco hardware platform and the IOS revision you use. There are many possible combinations of Cisco hardware and IOS revisions, and each combination has different capabilities. Not all Cisco platforms and IOS revisions support all assignment methods, redirection methods, uses of ACLs to control traffic, and interface interception directions. You can expect the Cisco minimum recommended IOS to change as WCCP becomes more widely used and new IOS technical issues are discovered. As of January 2008, Cisco recommends the following minimum IOS releases for specific hardware platforms:

Cisco Hardware                        Cisco IOS
ISR and 7200 Routers                  12.1(14), 12.2(26), 12.3(13), 12.4(10), 12.1(3)T, 12.2(14)T, 12.3(14)T5, 12.4(9)T1
Catalyst 6500 with Sup720 or Sup32    12.2(18)SXF
Catalyst 6500 with Sup2               12.1(27)E, 12.2(18)SXF
Catalyst 4500                         12.2(31)SG
Catalyst 3750                         12.2(37)SE

IMPORTANT: Regardless of how you configure a Steelhead appliance, if the Cisco IOS version on the router or switch is below the current Cisco minimum recommendations, it might be impossible to have a functioning WCCP implementation, or the implementation might not have optimal performance.

The Pros and Cons of WCCP

Physical in-path deployments require less initial and ongoing configuration and maintenance than out-of-path or virtual in-path deployments.
This is because physical in-path Steelhead appliances are placed at the points in your network where data already flows. Consequently, you do not need to alter your existing network infrastructure. For details about physical in-path deployments, see Physical In-Path Deployments on page 39.

Virtual in-path techniques, such as WCCP, require more time to configure because the network infrastructure must be configured to redirect traffic to the Steelhead appliances.

WCCP has several advantages:

- No rewiring required. You do not need to move any wires during installation. At large sites with multiple active links, you can adjust wiring by moving individual links, one at a time, through the Steelhead appliances.

- An option when no other is available. At sites where a physical in-path deployment is not possible, WCCP might achieve the integration you need. For example, if your site has a WAN link terminating directly into a large access switch, there is no place to install a physical in-path Steelhead appliance.

WCCP has several disadvantages:

- Network design changes required. WCCP deployments with multiple routers can require significant network changes (for example, spanning VLANs and GRE tunnels).

- Hardware and IOS upgrades required. To avoid hardware limitations and IOS issues, you must keep the Cisco platform and IOS revisions at the current minimum recommended levels. Otherwise, it might be impossible to create a stable deployment, regardless of how you configure the Steelhead appliance. Future IOS feature planning must consider compatibility with WCCP.

- Additional evaluation overhead. More time can be required to evaluate the integration of the Steelhead appliances, in addition to evaluating Steelhead appliance performance gains. Riverbed Professional Services might be needed to test and perform network infrastructure upgrades before any optimization can be performed, especially when WCCP is deployed at numerous sites.

- Additional configuration management. You must create access lists and manage them on an ongoing basis. At small sites, it might be feasible to redirect all traffic to the Steelhead appliances. However, at larger sites, access lists might be required to ensure that traffic that cannot be optimized (for example, LAN-to-LAN traffic) is not sent to the Steelhead appliances.

- GRE encapsulation.
If your network design does not support the presence of the Steelhead appliances and the Cisco router or switch interface in a common subnet, you must use GRE encapsulation for forwarding packets. Steelhead appliances can accommodate the resulting extra performance utilization; however, your existing router or switch might experience large resource utilization.

WCCP Fundamentals

This section describes some of the fundamental concepts for configuring WCCP. It includes the following sections:

- Service Groups, next
- Assignment Methods on page 64
- Redirection and Return Methods on page 66

Service Groups

A central concept of WCCP is the service group. The service group logically consists of the routers and the Steelhead appliances that work together to redirect and optimize traffic. You might use one or more service groups to redirect traffic to the Steelhead appliances for optimization. Service groups are differentiated by a service group number. The service group number is local to the site where WCCP is used; it is not transmitted across the WAN.

For each router in a service group, you identify each of the interfaces through which optimized traffic might pass. You also configure each of these router interfaces to redirect traffic to the appropriate service group.

NOTE: Riverbed recommends that you use WCCP service groups 61 and 62.

Routers redirect traffic to the Steelhead appliances in their WCCP service group. The assignment method and the load-balancing configuration determine which Steelhead appliance the router redirects traffic to.

Assignment Methods

This section describes WCCP assignment methods. It includes the following sections:

- Hash Assignment, next
- Mask Assignment with RiOS v5.0.1 or Earlier on page 65
- Mask Assignment with RiOS v5.0.2 or Later on page 65
- Determining an Assignment Method on page 66

The assignment method refers to how a router chooses which Steelhead appliance in a WCCP service group to redirect packets to. There are two assignment methods: the hash assignment method and the mask assignment method. Steelhead appliances support both.

Hash Assignment

The hash assignment method redirects traffic based on a hashing scheme and the weight of the Steelhead appliances. A hashing scheme is a combination of the source IP address, destination IP address, source port, or destination port. The hash assignment method is commutative: a packet with source IP address X and destination IP address Y hashes to the same value as a packet with source IP address Y and destination IP address X. (Consequently, a single WCCP service group is usually sufficient to configure a WCCP cluster that uses the hash assignment method, because the same Steelhead appliance sees both inbound and outbound traffic for any given connection.)

The weight of a Steelhead appliance is determined by the number of connections the Steelhead appliance supports. The default weight is based on the Steelhead appliance model number.
The more connections a Steelhead appliance model supports, the heavier the weight of that model. You can modify the default weight.

The hash assignment method supports failover and load balancing. In a failover configuration, you configure one or more Steelhead appliances to be used only if no other Steelhead appliances within the WCCP service group are operating. To configure a Steelhead appliance as a failover appliance, you set its weight to 0. If a Steelhead appliance has a weight of 0 and another Steelhead appliance in the same WCCP service group has a non-zero weight, the Steelhead appliance with the 0 weight does not receive redirected traffic. If all of the Steelhead appliances have a weight of 0, traffic is redirected equally among them.
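The two properties of hash assignment described above, commutativity and weight-based distribution with zero-weight failover, can be sketched as follows. The hash function itself is illustrative (the real WCCP hash is negotiated between the router and the appliance), and the appliance names and weights are hypothetical.

```python
# Sketch of hash-assignment properties: the hash is commutative in
# source/destination, buckets are distributed in proportion to weight, and
# zero-weight appliances serve only when every appliance has weight 0.
# The hash itself is illustrative, not the actual WCCP hash.

def assign(src_ip, dst_ip, appliances):
    """appliances: list of (name, weight). Returns the appliance that
    receives traffic for this src/dst pair."""
    active = [(n, w) for n, w in appliances if w > 0]
    if not active:                       # all failover: share equally
        active = [(n, 1) for n, _ in appliances]
    # frozenset makes the hash commutative: (X, Y) == (Y, X).
    slot = hash(frozenset((src_ip, dst_ip))) % sum(w for _, w in active)
    for name, weight in active:
        if slot < weight:
            return name
        slot -= weight

if __name__ == "__main__":
    cluster = [("sh1", 3), ("sh2", 1), ("failover", 0)]
    a = assign("10.1.1.5", "192.0.2.9", cluster)
    b = assign("192.0.2.9", "10.1.1.5", cluster)
    print(a == b)  # True: the same appliance sees both directions
```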

Mask Assignment with RiOS v5.0.1 or Earlier

With RiOS v5.0.1 or earlier, a single Steelhead appliance receives redirected traffic when you use the mask assignment method. The other Steelhead appliances function as failover appliances. The Steelhead appliance that receives traffic is the one with the lowest in-path IP address. Unlike the hash assignment method, the mask assignment method processes the first packet of a connection in the router hardware.

To force mask redirection, you use the assign-scheme option of the wccp service-group CLI command. For example:

wccp service-group 90 routers assign-scheme mask

Some Cisco platforms, such as the Catalyst 4500 and the Catalyst 3750, support only the mask assignment method.

Mask Assignment with RiOS v5.0.2 or Later

The mask assignment method in RiOS v5.0.2 or later supports load balancing across multiple active Steelhead appliances. As with the hash assignment method, each Steelhead appliance is configured with the appropriate service groups and router bindings. Load-balancing decisions (for example, deciding which Steelhead appliance in a cluster is to optimize a given new connection) are based on administrator-specified bits pulled, or masked, from the IP address and TCP port fields. Unlike the hash assignment method, these bits are not hashed. Instead, the Cisco switch concatenates the bits to construct an index into the load-balancing table. Consequently, you must choose these bits carefully. A maximum of seven bits can be used.

Unlike the hash assignment method, the mask assignment method is not commutative. When you use the mask assignment method, you configure failover in the same manner as with the hash assignment method. The mask assignment method requires that, for every connection, packets are redirected to the same Steelhead appliance in both directions (client-to-server and server-to-client).
To achieve this redirection, you configure the following:

- Set up two WCCP service groups with reversed masks.
- Configure the Cisco switch to redirect packets to one WCCP service group in the client-to-server direction, and to the other WCCP service group in the server-to-client direction.

For example, the Steelhead appliance WCCP configuration might look as follows:

wccp service-group 61 routers assign-scheme mask src-ip-mask 0x1741
wccp service-group 62 routers assign-scheme mask dst-ip-mask 0x1741
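The index construction described above, selecting the masked bits of a field and concatenating them, can be sketched as follows. The bit ordering (low bit first) is an assumption for illustration; the actual hardware ordering may differ. The mask 0x1741 from the example above selects six bits, within the seven-bit limit.

```python
# Sketch of mask assignment: the switch ANDs the chosen field with the mask
# and concatenates the surviving bits into an index into the load-balancing
# table. Bit ordering here (low bit first) is an assumption.

def mask_index(value, mask):
    """Concatenate the bits of `value` selected by `mask`."""
    index, out_pos = 0, 0
    for bit in range(32):
        if mask >> bit & 1:
            index |= (value >> bit & 1) << out_pos
            out_pos += 1
    return index

if __name__ == "__main__":
    assert bin(0x1741).count("1") <= 7   # at most 7 mask bits are allowed

    ip = 0x0A010203                      # 10.1.2.3 as a 32-bit integer
    print(mask_index(ip, 0x1741))        # 9
    print(mask_index(0xFFFFFFFF, 0x1741))  # 63: all six selected bits set
```

Because only the selected bits matter, masks should be chosen so that they vary across the client (or server) address space; otherwise most connections collapse into a few table entries.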

The following figure illustrates the reversed mask redirection technique.

Figure 5-1. Mask Assignment Method Packet Redirection

NOTE: Riverbed recommends that you use WCCP service groups 61 and 62.

For details about mask assignment method parameters, see WCCP Service Group Parameters on page 76.

Determining an Assignment Method

Unless otherwise specified in the Steelhead appliance WCCP service group setting, and if the router supports it, Steelhead appliances prefer the hash assignment method. The hash assignment method generally achieves better load distribution than the mask assignment method. However, there are instances when the mask assignment method is preferable:

Certain lower-end Cisco switches do not support hash assignment (3750, 4000, 4500-series, among others).

The hash assignment method uses a NetFlow table entry on the switch for every connection. The NetFlow table can support up to 256K connections, depending on the hardware. When the switch runs out of NetFlow table entries, every WCCP-redirected packet is process-switched. Because this has a crippling effect on the switch CPU, very large WCCP deployments are constrained to the mask assignment load distribution method.

The hash assignment method handles the first packet of every new redirected TCP connection in the switch CPU, which installs the NetFlow table entry that is used to hardware-switch subsequent packets for that connection. This process limits the number of connection setups a switch can perform per unit of time. Consequently, in WCCP deployments where the connection setup rate is very high, the mask assignment method is the only option.

Redirection and Return Methods

WCCP supports two methods for transmitting packets between a router or switch and Steelhead appliances: the GRE encapsulation method and the L2 method. Steelhead appliances support both the L2 and GRE encapsulation methods, in both directions, to and from the router or switch.

WCCP DEPLOYMENTS

The L2 method is generally preferred from a performance standpoint because it requires fewer resources from the router or switch than GRE encapsulation does. The L2 method modifies only the destination Ethernet address. However, not all combinations of Cisco hardware and IOS revisions support the L2 method. Also, the L2 method requires that there be no L3 hops between the router or switch and the Steelhead appliance.

The GRE encapsulation method appends a GRE header to a packet before it is forwarded. This imposes a performance penalty on the router or switch, especially during GRE packet de-encapsulation. This penalty might be too great for production deployments.

You can avoid using GRE encapsulation on the traffic return path from the Steelhead appliance by using the wccp override-return route-no-gre CLI command. This command enables the Steelhead appliance to return traffic without GRE encapsulation to the Steelhead appliance in-path gateway, regardless of the method negotiated for returning traffic to the router or switch. Use the wccp override-return route-no-gre CLI command only if the Steelhead appliance is no more than an L2 hop away from the router or switch, and unencapsulated traffic going to the default gateway does not pass through an interface that redirects the packet back to the Steelhead appliance (that is, there is no WCCP redirection loop). For details about the wccp override-return route-no-gre CLI command, see the Riverbed Command-Line Interface Reference Manual.

The following table summarizes Cisco hardware platform support for redirection and return methods.
Cisco Hardware                       Redirection and Return Method
ISR and 7200 routers                 GRE
Catalyst 6500 with Sup720 or Sup32   GRE or L2
Catalyst 6500 with Sup2              GRE or L2
Catalyst 4500                        L2
Catalyst 3750                        L2

Best Practices for Determining a Redirection and Return Method

Riverbed recommends the following best practices for determining your redirection and return method:

Design your WCCP deployment so that your Steelhead appliances are no more than an L2 hop away from the router or switch performing WCCP redirection.

Do not configure a specific redirection or assignment method on your Steelhead appliance. Allow the Steelhead appliance to negotiate these settings with the router.

Use the wccp override-return route-no-gre CLI command only if the following are both true:

The Steelhead appliance is no more than an L2 hop away from the router or switch.

Unencapsulated traffic going to the default gateway does not pass through an interface that redirects the packet back to the Steelhead appliance (that is, there is no WCCP redirection loop). If this condition is not met, traffic returned by the Steelhead appliance is continually redirected back to the same Steelhead appliance.

WCCP Clustering and Failover

Steelhead appliances support failover for WCCP. Steelhead appliances periodically announce themselves to the routers. If a Steelhead appliance fails, traffic is redirected to the remaining operating Steelhead appliances.

Instead of load balancing traffic between two Steelhead appliances, you might want traffic to go to only one Steelhead appliance and fail over to the other Steelhead appliance if the first one fails. To configure failover support, set the backup Steelhead appliance weight to 0.

Configuring WCCP

This section describes how to configure WCCP and provides example deployments. It includes the following sections:

Basic Steps, next
Configuring a Simple WCCP Deployment on page 69
Configuring a High Availability Deployment on page 71
Basic WCCP Router Configuration Commands on page 74
Steelhead Appliance WCCP CLI Commands on page 76
WCCP Service Group Parameters on page 76
Setting the Service Group Password on page 78
Configuring Multicast Groups on page 79
Configuring Group Lists to Limit Service Group Members on page 81
Configuring Access Lists on page 81
NetFlow in WCCP on page 85

Basic Steps

Perform the following basic steps to configure WCCP.

1. Configure the Steelhead appliance as an in-path device. For details, see Physical In-Path Deployments on page 39 and the Steelhead Appliance Installation and Configuration Guide.

2. Create a service group on the router and set the router to use WCCP to redirect traffic to the WCCP Steelhead appliance.

3. Attach the WCCP Steelhead appliance wan0_0 interface to the network. The wan0_0 interface must be able to communicate with the switch or router where WCCP is configured and where WCCP redirection takes place.

4. Configure the WCCP Steelhead appliance to be a virtual in-path device with WCCP support. For example, use the Steelhead appliance CLI command in-path oop enable.

5. Add the service group on the WCCP Steelhead appliance.

69 6. Enable WCCP on the WCCP Steelhead appliance. Configuring a Simple WCCP Deployment The following figure illustrates a WCCP deployment that is simple to deploy and administer, and achieves high performance. This example includes a single router and a single Steelhead appliance. Figure 5-2. A Single Steelhead Appliance and A Single Router In this example: The router and the Steelhead appliance use WCCP service groups 61 and 62. In this example, as long as the Steelhead appliance is a member of all of the service groups, and the service groups include all of the interfaces on all of the paths to and from the WAN, it does not matter whether a single service group, or multiple service groups, are configured. The Steelhead appliance: wan0_0 interface is directly attached to the router with a crossover cable. virtual inpath0_0 interface uses the IP information that is visible to the router and the remote Steelhead appliances for data transfer. does not have an encapsulation scheme in the WCCP service group configuration. Therefore, the Steelhead appliance informs the router that it supports both the GRE and the L2 redirection methods. The method negotiated and used depends on the methods that the router supports. default gateway return override is enabled with the wccp override-return route-no-gre CLI command. Enabling this CLI command decreases the resource utilization on the router. In this example, this is possible because returning packets do not match any subsequent WCCP interface redirect statements. For details about the wccp override-return route-no-gre CLI command, see Redirection and Return Methods on page 66. NOTE: If you are using RiOS v4.x or earlier, see the following Riverbed Knowledge Base article, What WCCP Redirect and Return Method Should I Use?, located at STEELHEAD APPLIANCE DEPLOYMENT GUIDE 69

The router uses the ip wccp redirect exclude CLI command on the router interface connected to the Steelhead appliance wan0_0 interface. This CLI command configures the router to never redirect packets arriving on this interface, even if they are later sent out of an interface with an ip wccp redirect out command. Although this is not required for this deployment, Riverbed recommends you use it as a best practice.

NOTE: Although the primary interface is not included in this example, Riverbed recommends that you connect the primary interface for management purposes. For details about configuring the primary interface, see the Steelhead Management Console User's Guide.

To configure WCCP on the Steelhead appliance

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
interface primary ip address /24
ip default-gateway
interface inpath0_0 ip address /24
ip in-path-gateway inpath0_0
in-path enable
in-path oop enable
wccp enable
wccp service-group 61 routers
wccp service-group 62 routers
wccp override-return route-no-gre
write memory
restart

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

To configure WCCP on the Cisco router

NOTE: In this example, only traffic to or from IP address is sent to the Steelhead appliance.

On the router, at the system prompt, enter the following set of commands:

enable
configure terminal
ip access-list extended wccp_acl
remark Dont redirect anything in the Steelheads subnet
deny tcp any
deny tcp any
permit tcp any
permit tcp any
ip wccp version 2
ip wccp 61 redirect-list wccp_acl
ip wccp 62 redirect-list wccp_acl
interface s0/0
ip wccp 62 redirect in
interface f0/0
ip wccp 61 redirect in
interface f0/1
ip wccp redirect exclude in
end
write memory

TIP: Enter configuration commands, one per line. End each command with Ctrl-Z.

For details about how to verify the WCCP configuration, see Verifying and Troubleshooting WCCP Configurations on page 85.

Configuring a High Availability Deployment

The following figure illustrates a WCCP deployment in which two Steelhead appliances and two routers are used in a WCCP configuration to provide high availability in the event of a Steelhead appliance or router failure. Data store synchronization is commonly used in high availability designs, and is also used in this example. You can configure data store synchronization between any two local Steelhead appliances, regardless of how they are deployed: physical in-path, virtual in-path, or out-of-path. For details about data store synchronization, see Data Store Synchronization on page 33.

Figure 5-3. High Availability WCCP with Data Store Synchronization

72 In this example: The Steelhead appliances are both directly connected to their associated WAN routers. The WCCP cluster is comprised of two routers redirecting traffic and two Steelhead appliances acting as the cache engines. If a single Steelhead appliance fails, all traffic is forwarded to the operating Steelhead appliance. Because the two Steelhead appliances synchronize their data stores, the remaining Steelhead appliance provides the same level of acceleration as the failed Steelhead appliance. To configure WCCP on Steelhead 1 1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual. 2. At the system prompt, enter the following set of commands: enable configure terminal interface primary ip address /24 ip default-gateway interface inpath0_0 ip address /24 ip in-path-gateway inpath0_ in-path enable in-path oop enable in-path neighbor enable in-path neighbor ip address in-path neighbor allow-failure in-path neighbor advertiseresync wccp enable wccp service-group 61 routers , wccp service-group 62 routers , wccp override-return route-no-gre datastore sync master datastore sync peer-ip datastore sync enable write memory restart NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect. To configure WCCP on Steelhead 2 1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual. 2. At the system prompt, enter the following set of commands: enable configure terminal interface primary ip address /24 ip default-gateway interface inpath0_0 ip address /24 ip in-path-gateway inpath0_ in-path enable in-path oop enable in-path neighbor enable in-path neighbor ip address WCCP DEPLOYMENTS

73 in-path neighbor allow-failure in-path neighbor advertiseresync wccp enable wccp service-group 61 routers , wccp service-group 62 routers , wccp override-return route-no-gre no datastore sync master datastore sync peer-ip datastore sync enable write memory restart NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect. To configure WCCP on Cisco router 1 On the router, at the system prompt, enter the following set of commands: enable configure terminal ip access-list extended wccp_acl deny tcp any deny tcp any ! Replace this permit any with the subnets of remote sites! That will have Steelheads DO NOT leave in the permit any permit tcp any any ip wccp version 2 ip wccp 61 redirect-list wccp_acl ip wccp 62 redirect-list wccp_acl interface vlan10 ip wccp redirect exclude in interface vlan100 ip wccp 61 redirect in interface vlan200 ip wccp 61 redirect in interface s0/1 ip wccp 62 redirect in end write memory TIP: Enter configuration commands, one per line. End each command with Ctrl-Z. To configure WCCP on Cisco router 2 On the router, at the system prompt, enter the following set of commands: enable configure terminal ip access-list extended wccp_acl deny tcp any deny tcp any ! Replace this permit any with the subnets of remote sites! That will have Steelheads DO NOT leave in the permit any permit tcp any any ip wccp version 2 ip wccp 61 redirect-list wccp_acl ip wccp 62 redirect-list wccp_acl interface vlan10 ip wccp redirect exclude in STEELHEAD APPLIANCE DEPLOYMENT GUIDE 73

74 interface vlan100 ip wccp 61 redirect in interface vlan200 ip wccp 61 redirect in interface s0/1 ip wccp 62 redirect in end write memory TIP: Enter configuration commands, one per line. End each command with Ctrl-Z. For details about how to verify the WCCP configuration, Verifying and Troubleshooting WCCP Configurations on page 85. Basic WCCP Router Configuration Commands This section summarizes some of the basic WCCP router configuration commands. For details about WCCP router configuration commands, refer to your router documentation. To enable WCCP and define a service group on the router On the router, at the system prompt, enter the following set of commands: enable configure terminal ip wccp <service_group> end write memory For example: enable configure terminal ip wccp 90 end write memory IMPORTANT: The service group you specify on the router must also be set on the WCCP Steelhead appliance. NOTE: The WCCP protocol allows you to add up to 32 Steelhead appliances and 32 routers to a service group. To specify inbound traffic redirection for each router interface On the router, at the system prompt, enter the following set of commands: enable configure terminal interface <interface> ip wccp <service_group> redirect in end write memory WCCP DEPLOYMENTS

75 For example: enable configure terminal interface fastethernet 0/0 ip wccp 90 redirect in interface serial 0 ip wccp 90 redirect in end write memory About the ip wccp Router Command The ip wccp [NR] router command is not additive. After you end and write memory for an ip wccp [NR] command, you cannot use another ip wccp [NR] command to augment information you previously specified. To retain information you previously specified with ip wccp [NR], you must issue a new ip wccp [NR] command that includes the information you previously specified, as well as whatever you want to configure. For example, you configure your router using the following set of commands: enable configure terminal ip wccp 90 redirect-list 100 end write memory If you want to specify a password on the router later, the command ip wccp 90 password <your_password> overwrites the previous redirect list configuration. To retain the previous redirect list configuration and set a password, you must use the following command: ip wccp 90 redirect-list 100 password <your_password> For example: enable configure terminal ip wccp 90 redirect-list 100 password XXXYYYZZ end write memory STEELHEAD APPLIANCE DEPLOYMENT GUIDE 75

Steelhead Appliance WCCP CLI Commands

This section summarizes the Steelhead appliance WCCP CLI commands.

[no] wccp enable
  Enables or disables WCCP.

wccp mcast-ttl 10
  Specifies the multicast Time To Live (TTL) value of 10 for WCCP.

[no] wccp service-group <service-id> {routers <routers> | assign-scheme [either | hash | mask] | protocol [tcp | icmp] | encap-scheme [either | gre | l2] | flags <flags> | password <password> | ports <ports> | priority <priority> | weight <weight> | src-ip-mask <mask> | dst-ip-mask <mask> | src-port-mask <mask> | dst-port-mask <mask>}
  Configures a WCCP service group.

in-path neighbor allow-failure
  Ensures that if a Steelhead appliance fails, the neighbor Steelhead appliance continues to optimize new connections (for in-path deployments that use connection forwarding with WCCP).

show wccp
  Displays WCCP settings.

WCCP Service Group Parameters

The following table summarizes the parameters for configuring a WCCP service group.

service-group <service-id>
  Specifies the service group ID (from 0 to 255). The service group ID must match the value set on the router. A value of 0 specifies the standard HTTP service group. To enable WCCP, the Steelhead appliance must join a service group at the router. A service group is a group of routers and Steelhead appliances that defines the traffic to redirect, and the routers and Steelhead appliances the traffic goes through.
  NOTE: Riverbed recommends that you use WCCP service groups 61 and 62.

routers <IP addresses>
  Specifies a comma-separated list of router IP addresses (maximum of 32).

assign-scheme [either | hash | mask]
  Specifies the assignment method to use:
  either. Specifies either hash or mask. This is the default setting (hash first, then mask).
  hash. Specifies the hash assignment method. For details about the hash assignment method, see Hash Assignment on page 64.
  mask. Specifies the mask assignment method. For details about the mask assignment method, see Mask Assignment with RiOS v5.0.1 or Earlier on page 65, and Mask Assignment with RiOS v5.0.2 or Later on page 65.
  For details about assignment methods, see Assignment Methods on page 64.

protocol [tcp | icmp]
  Specifies the protocol: TCP or ICMP.

encap-scheme [either | gre | l2]
  Specifies the traffic forwarding and redirection scheme:
  gre. Generic Routing Encapsulation.
  l2. Layer-2 redirection.
  either. Layer-2 first; if Layer-2 is not supported, then GRE. This is the default value.
  NOTE: To work around a router or switch that does not support L2 return negotiation, you can configure your Steelhead appliance to not encapsulate return packets. For details, see Redirection and Return Methods on page 66.

flags <flags>
  Specifies the fields the router should hash on and whether certain ports should be redirected. Specify a combination of src-ip-hash, dst-ip-hash, src-port-hash, dst-port-hash, ports-dest, or ports-source. You can set one or more flags. The default setting is src-ip-hash, dst-ip-hash, which ensures that all of the packets for a particular TCP connection are redirected to the same Steelhead appliance. If you use a different setting, you might need to enable connection forwarding among the Steelhead appliances in the WCCP service group.
  The following hashing options are available:
  src-ip-hash. Specifies that the router hash the source IP address to determine traffic to redirect.
  dst-ip-hash. Specifies that the router hash the destination IP address to determine traffic to redirect.
  src-port-hash. Specifies that the router hash the source port to determine traffic to redirect.
  dst-port-hash. Specifies that the router hash the destination port to determine traffic to redirect.
  Other options:
  ports-dest. Specifies that the router determine traffic to redirect based on destination ports.
  ports-source. Specifies that the router determine traffic to redirect based on source ports.
  If the source or destination flags are set, the router redirects only the TCP traffic that matches the source or destination ports specified.
  NOTE: Flags cannot set destination ports and source ports simultaneously.

ports <ports>
  Specifies a comma-separated list of up to seven ports that the router redirects. Use only if the ports-dest or ports-source service flag is set.

priority <priority>
  Specifies the WCCP priority for traffic redirection. If a connection matches multiple service groups on a router, the router chooses the service group with the highest priority. The range is The default value is 200.

password <password>
  Specifies the WCCP password. This password must be the same as the password on the router. Additionally, WCCP requires that all routers in a service group have the same password. Passwords are limited to eight characters.

weight <weight>
  Specifies the percentage of connections that are redirected to a particular Steelhead appliance. A higher weight redirects more traffic to that Steelhead appliance. The ratio of traffic redirected to a Steelhead appliance is equal to its weight divided by the sum of the weights of all the Steelhead appliances in the same service group. For example, if there are two Steelhead appliances in a service group and one has a weight of 100 and the other has a weight of 200, the one with weight 100 receives 1/3 of the traffic and the other receives 2/3 of the traffic. The range is The default value corresponds to the number of TCP connections your appliance supports.
  To enable failover support with WCCP groups, set the service group weight to 0 on the backup Steelhead appliance. If one Steelhead appliance has a weight of 0 but another has a nonzero weight, the Steelhead appliance with weight 0 does not receive any redirected traffic. If all the Steelhead appliances have a weight of 0, the traffic is redirected equally among them.

src-ip-mask <mask>
  Specifies the source IP mask address.

dst-ip-mask <mask>
  Specifies the destination IP mask address.

src-port-mask <mask>
  Specifies the source-port mask.

dst-port-mask <mask>
  Specifies the destination-port mask.

For detailed information about WCCP CLI commands, see the Riverbed Command-Line Interface Reference Manual.

Configuring Additional WCCP Features

This section describes additional WCCP features and how to configure them.
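The weight ratio described for the weight parameter above can be sketched in Python. This is an illustrative model of the documented rules, not Riverbed's implementation; the weight values are the hypothetical ones from the guide's example.

```python
def redirected_share(weights):
    """Share of connections each Steelhead receives under WCCP weights.
    Per the guide: share = weight / sum(weights); a weight-0 appliance
    gets no traffic unless every weight is 0, in which case traffic is
    split equally among the appliances."""
    if any(w > 0 for w in weights):
        total = sum(weights)
        return [w / total for w in weights]
    return [1 / len(weights)] * len(weights)

print(redirected_share([100, 200]))  # weights 100 and 200 -> 1/3 and 2/3
print(redirected_share([0, 300]))    # backup with weight 0 gets nothing
print(redirected_share([0, 0]))      # all zero -> equal split
```

This also shows why setting the backup appliance's weight to 0 yields active/standby behavior rather than load balancing.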
It includes the following sections:

Setting the Service Group Password, next
Configuring Multicast Groups on page 79
Configuring Group Lists to Limit Service Group Members on page 81
Configuring Access Lists on page 81
Configuring Load Balancing in WCCP on page 84
NetFlow in WCCP on page 85

Setting the Service Group Password

You can configure password authentication of WCCP protocol messages between the router and the Steelhead appliance:

The router service group password must match the service group password configured on the WCCP Steelhead appliance.

The same password must be configured on the router and the WCCP Steelhead appliance.

Passwords must be no more than eight characters.

79 IMPORTANT: The following router commands are not required for the example network configurations in this chapter. Use caution when you issue the ip wccp [NR] router command because each ip wccp [NR] router command overwrites the previous ip wccp [NR] router command. You cannot use an ip wccp [NR] router command to augment ip wccp [NR] router commands you previously issued. For details, see About the ip wccp Router Command on page 75. To set the service group password on the WCCP router On the router, at the system prompt, enter the following set of commands: enable configure terminal ip wccp <service_group> password <your_password> end write memory TIP: Enter configuration commands, one per line. End each command with Ctrl-Z. NOTE: All routers in a service group must have the same password. Passwords cannot exceed eight characters. To set the service group password on the WCCP Steelhead appliance 1. Connect to the Riverbed CLI on the WCCP Steelhead appliance. For details, see the Riverbed Command- Line Interface Reference Manual. 2. At the system prompt, enter the following set of commands: enable configure terminal wccp service-group <service-id> routers <IP address> password <your_password> write memory restart For example, to set the password where the router service group is 90 and the router IP address is , enter the following command: wccp service-group 90 routers password XXXYYYZZ NOTE: You must set the same password on the Steelhead appliance and the Cisco router. NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect. Configuring Multicast Groups If you add multiple routers and Steelhead appliances to a service group, you can configure them to exchange WCCP protocol messages through a multicast group. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 79

80 Configuring a multicast group is advantageous because if a new router is added, it does not need to be explicitly added on each Steelhead appliance. IMPORTANT: The following router commands are not required for the example network configurations in this chapter. Use caution when you issue the ip wccp [NR] router command because each ip wccp [NR] router command overwrites the previous ip wccp [NR] router command. You cannot use an ip wccp [NR] router command to augment ip wccp [NR] router commands you previously issued. For details, see About the ip wccp Router Command on page 75. To configure multicast groups on the WCCP router On the router, at the system prompt, enter the following set of commands: enable configure terminal ip wccp 90 group-address interface fastethernet 0/0 ip wccp 90 redirect in ip wccp 90 group-listen end write memory TIP: Enter configuration commands, one per line. End each command with Ctrl-Z. NOTE: Multicast addresses must be between and To configure multicast groups on the WCCP Steelhead appliance 1. Connect to the Riverbed CLI on the WCCP Steelhead appliance. For details, see the Riverbed Command- Line Interface Reference Manual. 2. At the system prompt, enter the following set of commands: enable configure terminal wccp enable wccp mcast-ttl 10 wccp service-group 90 routers write memory restart NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect. NOTE: You must set the same password on the Steelhead appliance and the Cisco router WCCP DEPLOYMENTS

Configuring Group Lists to Limit Service Group Members

You can configure a group list on your router to limit service group members (for instance, Steelhead appliances) by IP address. For example, if you want to allow only Steelhead appliances with IP addresses and to join the router service group, you create a group list on the router.

IMPORTANT: The following router commands are not required for the example network configurations in this chapter. Use caution when you issue the ip wccp [NR] router command because each ip wccp [NR] router command overwrites the previous ip wccp [NR] router command. You cannot use an ip wccp [NR] router command to augment ip wccp [NR] router commands you previously issued. For details, see About the ip wccp Router Command on page 75.

To configure a WCCP router group list

On the WCCP router, at the system prompt, enter the following set of commands:

enable
configure terminal
access-list 1 permit
access-list 1 permit
ip wccp 90 group-list 1
interface fastethernet 0/0
ip wccp 90 redirect in
end
write memory

TIP: Enter configuration commands, one per line. End each command with Ctrl-Z.

Configuring Access Lists

This section describes how to configure access lists (ACLs). It includes the following sections:

Using Access Lists for Specific Traffic Redirection, next
Access List Command Parameters on page 82
Using Access Lists With WCCP on page 84

When you configure ACLs, consider the following:

ACLs are processed in order, from top to bottom. As soon as a particular packet matches a statement, it is processed according to that statement and the packet is not evaluated against subsequent statements. Therefore, the order of your access list statements is very important.

If port information is not explicitly defined, all ports are assumed.

By default, all lists include an implied deny all Cisco command at the end, which ensures that traffic that is not explicitly included is denied. You cannot change or delete this implied entry.
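The first-match evaluation and implicit deny-all just described can be sketched in Python. This is a toy model of ACL semantics, not Cisco code; the rules and addresses are hypothetical.

```python
def acl_permits(acl, packet):
    """First-match ACL evaluation: rules are checked top to bottom and
    the first matching rule decides; an implicit deny-all ends every list."""
    for action, match in acl:
        if match(packet):
            return action == "permit"
    return False  # the implied "deny all" at the end of every ACL

# Hypothetical list: deny one subnet, then permit the rest of 10.0.0.0/8.
acl = [
    ("deny",   lambda p: p["dst"].startswith("10.1.1.")),
    ("permit", lambda p: p["dst"].startswith("10.")),
]
print(acl_permits(acl, {"dst": "10.1.1.5"}))     # deny matches first
print(acl_permits(acl, {"dst": "10.2.3.4"}))     # permit matches
print(acl_permits(acl, {"dst": "192.168.1.1"}))  # falls through to implicit deny
```

Reordering the two rules would change the result for 10.1.1.5, which is why statement order matters.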
STEELHEAD APPLIANCE DEPLOYMENT GUIDE 81

82 Using Access Lists for Specific Traffic Redirection If redirection is based on traffic characteristics other than ports, you can use ACLs on the router to define what traffic is redirected. If you only want the traffic for IP address /16 to be redirected to the WCCP Steelhead appliance, configure the router according to the following example. IMPORTANT: The following router commands are not required for the example network configurations in this chapter. Use caution when you issue the ip wccp [NR] router command because each ip wccp [NR] router command overwrites the previous ip wccp [NR] router command. You cannot use an ip wccp [NR] router command to augment ip wccp [NR] router commands you previously issued. For details, see About the ip wccp Router Command on page 75. To configure specific traffic redirection on the router On the router, at the system prompt, enter the following set of commands: enable configure terminal access-list 101 permit tcp any access-list 101 permit tcp any ip wccp 90 redirect-list 101 interface fastethernet 0/0 ip wccp 90 redirect in end interface serial0 ip wccp 90 redirect in end write memory IMPORTANT: If you have defined fixed-target rules, redirect traffic in one direction, as shown in the example above. TIP: Enter configuration commands, one per line. End each command with Ctrl-Z. Access List Command Parameters This section describes the Cisco access-list router command for using ACLs to configure WCCP redirect lists. For details about ACL commands, refer to your router documentation WCCP DEPLOYMENTS

The access-list router command has the following syntax:

access-list <access_list_number> [permit | deny] tcp <source IP/mask> <source_port> <destination IP/mask> <destination_port>

access_list_number - Specifies the number that identifies the redirect list. Standard redirect lists are numbered 1-99; extended redirect lists are numbered 100-199. A standard redirect list matches traffic based on source IP address. An extended redirect list matches traffic based on source or destination IP address. Riverbed recommends that you use extended IP redirect lists.

permit | deny - Specifies whether the redirect list allows or stops traffic redirection. Specify permit to allow traffic redirection; specify deny to stop traffic redirection.

tcp - Specifies the traffic to redirect. WCCP redirects only TCP traffic. Use only this option when configuring a redirect list for WCCP.

source IP/mask - Specifies the source IP address and wildcard mask. In the mask, a 0 bit means the corresponding address bit must match, and a 1 bit means it does not matter. For example: any matches any IP address; a network address with a wildcard mask matches any host on that network; an address with an all-zero mask matches that host exactly (identical to specifying the host keyword).

source_port - Specifies the source port number or a corresponding keyword. Cisco routers support many keywords; for details, refer to your router documentation. For example: eq 80 (or the equivalent eq www) matches port 80; gt 80 matches any port greater than 80; lt 80 matches any port less than 80; neq 80 matches any port except port 80; range 80 90 matches any port from 80 through 90, inclusive.

destination IP/mask - Specifies the destination IP address and wildcard mask. The same address and mask forms as for source IP/mask apply.

destination_port - Specifies the destination port number or a corresponding keyword. The same keywords as for source_port apply; for details, refer to your router documentation.
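The wildcard-mask semantics described above (0 = must match, 1 = does not matter) can be sketched in a few lines of Python. This is an illustration only; the addresses below are arbitrary examples, not values from this guide:

```python
import ipaddress

def wildcard_match(addr: str, base: str, wildcard: str) -> bool:
    """Cisco-style wildcard match: 0 bits must match, 1 bits are ignored."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    # Compare only the bits where the wildcard mask is 0.
    return (a & ~w) & 0xFFFFFFFF == (b & ~w) & 0xFFFFFFFF

# "10.0.0.0 0.255.255.255" matches any host on the 10.0.0.0/8 network.
assert wildcard_match("10.1.2.3", "10.0.0.0", "0.255.255.255")
# "10.1.1.1 0.0.0.0" matches host 10.1.1.1 exactly.
assert wildcard_match("10.1.1.1", "10.1.1.1", "0.0.0.0")
assert not wildcard_match("10.1.1.2", "10.1.1.1", "0.0.0.0")
```

This is why an all-ones mask octet lets that octet take any value, while an all-zero mask requires an exact host match.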

Using Access Lists With WCCP

To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL that routes only intranet traffic to the Steelhead appliance. Suppose your network is structured so that all Internet traffic passes through the WCCP-configured router, and all intranet traffic is confined to /8. Because it is unlikely that remote Internet hosts have a Steelhead appliance, do not redirect Internet traffic to the Steelhead appliance. The following is an example ACL that achieves this goal.

IMPORTANT: The following router commands are not required for the example network configurations in this chapter. Use caution when you issue the ip wccp [NR] router command, because each ip wccp [NR] router command overwrites the previous one. You cannot use an ip wccp [NR] router command to augment ip wccp [NR] router commands you previously issued. For details, see About the ip wccp Router Command on page 75.

To configure an ACL to route intranet traffic to your WCCP-enabled Steelhead appliance

On the WCCP router, at the system prompt, enter the following set of commands:

enable
configure terminal
access-list 101 deny ip host <WCCP_Steelhead_IP> any
access-list 101 deny ip any host <WCCP_Steelhead_IP>
access-list 101 permit tcp any
access-list 101 permit tcp any
access-list 101 deny ip any any
!
ip wccp 90 redirect-list 101
!
end
write memory

Repeat these commands for each WCCP Steelhead appliance in the service group.

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

Configuring Load Balancing in WCCP

You can perform load balancing using WCCP. WCCP supports load balancing using either the hash assignment method or the mask assignment method. With the hash assignment method, traffic is redirected based on a hashing scheme and the weight of the Steelhead appliances.
You can hash on a combination of the source IP address, destination IP address, source port, or destination port. The default weight is based on the Steelhead appliance model (for example, for the Model 5000 the weight is 5000). You can modify the default weight.

To change the hashing scheme and assign a weight on a WCCP Steelhead appliance

1. Connect to the Riverbed CLI on the WCCP Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following command:

wccp service-group 90 routers flags dst-ip-hash,src-ip-hash

3. To change the weight on the WCCP Steelhead appliance, at the system prompt, enter the following command:

wccp service-group 90 routers weight 20

NetFlow in WCCP

In virtual in-path deployments such as WCCP, traffic moves in and out of the same WAN interface. The LAN interface is not used. As a result, when the Steelhead appliance exports data to a NetFlow collector, all traffic has the WAN interface index. Although it is technically correct for all traffic to have the WAN interface index (because the input and output interfaces are the same), this makes it impossible to use the interface index to distinguish between LAN-to-WAN and WAN-to-LAN traffic. You can configure the fake index feature on your Steelhead appliance to insert the correct interface index before exporting data to a NetFlow collector. For details about configuring the fake index feature, see Configuring NetFlow in Virtual In-Path Deployments on page 56.

Verifying and Troubleshooting WCCP Configurations

This section describes the basic commands for verifying the WCCP configuration on the router and the WCCP Steelhead appliance.

To verify the router configuration

On the router, at the system prompt, enter the following set of commands:

enable
show ip wccp
show ip wccp 90 detail
show ip wccp 90 view

To verify the WCCP configuration on an interface

On the router, at the system prompt, enter the following set of commands:

enable
show ip interface

Look for WCCP status messages near the end of the output.

To verify the access list configuration

On the router, at the system prompt, enter the following set of commands:

enable
show access-lists <access_list_number>
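The hash assignment method described under Configuring Load Balancing in WCCP can be pictured as hashing each flow into one of 256 buckets that are divided among the appliances in proportion to their weights. The following Python sketch is purely illustrative; the hash function, appliance names, and weights are invented and do not reproduce WCCP's actual algorithm:

```python
import hashlib

def bucket_for(src_ip: str, dst_ip: str) -> int:
    """Hash the configured flow fields into one of 256 redirect buckets."""
    digest = hashlib.md5(f"{src_ip}|{dst_ip}".encode()).digest()
    return digest[0]  # first byte: 0-255

def assign_buckets(weights: dict) -> list:
    """Assign the 256 buckets to appliances in proportion to their weights."""
    total = sum(weights.values())
    table, used = [], 0
    names = list(weights)
    for i, name in enumerate(names):
        # The last appliance takes the remainder so all 256 buckets are covered.
        share = 256 - used if i == len(names) - 1 else round(256 * weights[name] / total)
        table.extend([name] * share)
        used += share
    return table  # table[bucket] -> appliance that services that bucket

# Two appliances with equal weights split the buckets evenly.
table = assign_buckets({"sh-a": 20, "sh-b": 20})
appliance = table[bucket_for("10.1.1.5", "192.0.2.10")]
```

Raising one appliance's weight enlarges its share of buckets, which is the effect of the wccp service-group ... weight command shown above.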

To trace WCCP packets and events on the router

On the router, at the system prompt, enter the following set of commands:

enable
debug ip wccp events
WCCP events debugging is on
debug ip wccp packets
WCCP packet info debugging is on
term mon

To verify the WCCP Steelhead appliance configuration

1. Connect to the Riverbed CLI on the WCCP Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following command:

show wccp service-group 61 detail

WCCP Support Enabled: yes
WCCP Multicast TTL: 1
WCCP Return via Gateway Override: no
Router IP Address:
Identity:
State: Connected
Redirect Negotiated: l2
Return Negotiated: l2
Assignment Negotiated: mask
i-see-you Message Count: 20
Last i-see-you Message: 2008/07/06 22:05:16 (1 second(s) ago)
Removal Query Message Count: 0
Last Removal Query Message: N/A (0 second(s) ago)
here-i-am Message Count: 20
Last here-i-am Message: 2008/07/06 22:05:16 (1 second(s) ago)
Redirect Assign Message Count: 1
Last Redirect Assign Message: 2008/07/06 22:02:21 (176 second(s) ago)
Web Cache Client Id:
Weight: 25
Distribution: 1 (25.00%)
Mask  SrcAddr  DstAddr  SrcPort  DstPort
      0x       0x       0x0000   0x0001
Value SrcAddr  DstAddr  SrcPort  DstPort  Cache-IP
      0x       0x       0x0000   0x
Web Cache Client Id:
Weight: 25
Distribution: 2 (50.00%)
Mask  SrcAddr  DstAddr  SrcPort  DstPort
      0x       0x       0x0000   0x0001
Value SrcAddr  DstAddr  SrcPort  DstPort  Cache-IP
      0x       0x       0x0000   0x
      0x       0x       0x0000   0x
Web Cache Client Id:

Weight: 25
Distribution: 1 (25.00%)
Mask  SrcAddr  DstAddr  SrcPort  DstPort
      0x       0x       0x0000   0x0001
Value SrcAddr  DstAddr  SrcPort  DstPort  Cache-IP
      0x       0x       0x0000   0x

The following table lists some of the configurations that the show wccp service-group <num> detail CLI command displays:

Configuration - Example
Redirection Method - Redirect Negotiated: l2
Return Method - Return Negotiated: l2
Assignment Method - Assignment Negotiated: mask
GRE Encapsulation - WCCP Return via Gateway Override: no
WCCP Control Messages - i-see-you Message Count: 20

For details about troubleshooting WCCP and other deployments, see Troubleshooting Deployment Problems on page 165.
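In contrast to hash assignment, the Mask/Value tables in the output above describe mask assignment: the router ANDs each packet's address and port fields with the negotiated masks and looks the result up in a value table to choose a cache. A minimal, illustrative Python sketch of that lookup (the masks, ports, and cache names here are invented examples):

```python
import ipaddress

def mask_lookup(src_ip: str, dst_port: int,
                src_mask: int, port_mask: int, value_table: dict) -> str:
    """AND the packet fields with the masks, then pick the cache by value."""
    src = int(ipaddress.IPv4Address(src_ip))
    key = (src & src_mask, dst_port & port_mask)
    return value_table[key]

# Example: a 1-bit destination-port mask (0x0001, as in the output above)
# splits traffic between two caches by odd/even destination port.
values = {(0, 0): "cache-1", (0, 1): "cache-2"}
assert mask_lookup("10.1.1.5", 80, 0x0, 0x1, values) == "cache-1"   # even port
assert mask_lookup("10.1.1.5", 443, 0x0, 0x1, values) == "cache-2"  # odd port
```

Each Value row in the show output plays the role of one value_table entry, with Cache-IP identifying the appliance that receives matching packets.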


CHAPTER 6  PBR Deployments

This chapter describes how to configure PBR to redirect traffic to a Steelhead appliance or group of Steelhead appliances. It includes the following sections:

Overview of PBR, next
Connecting the Steelhead Appliance in a PBR Deployment, on page 92
Configuring PBR, on page 92
NetFlow and Virtual In-Path Deployments, on page 98

This chapter assumes you are familiar with:

The Management Console. For details, see the Steelhead Management Console User's Guide.
The RiOS CLI. For details, see the Riverbed Command-Line Interface Reference Manual.
The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.

This chapter provides the basic steps for PBR network deployments. For details about the factors you must consider before you design and deploy the Steelhead appliance in a network environment, see Choosing the Right Steelhead Appliance on page 19.

Overview of PBR

PBR is a packet redirection mechanism that allows you to define policies that route packets instead of relying on routing protocols. PBR is used to redirect packets to Steelhead appliances that are in a virtual in-path deployment. You define PBR policies on your router for switching packets. PBR policies can be based on identifiers available in access lists, such as the source IP address, destination IP address, protocol, source port, or destination port. When a PBR-enabled router interface receives a packet that matches a defined policy, PBR switches the packet according to the rule defined for the policy. If a packet does not match a defined policy, PBR routes the packet to the IP address specified in the routing table entry that most closely matches the packet.

IMPORTANT: To avoid an infinite loop, PBR must be enabled on the router interfaces where client traffic arrives, and disabled on the router interface that is connected to the Steelhead appliance.

PBR is enabled as a global configuration and applied on a per-interface basis. The Steelhead appliance that intercepts traffic redirected by PBR is configured with both in-path and virtual in-path support enabled.

PBR Failover and CDP

A major issue with PBR is that it can blackhole traffic; that is, it drops all packets to a destination if the device it is redirecting to fails. You can avoid blackholing traffic by enabling PBR to track whether the PBR next-hop IP address is available. To do this, you configure the PBR-enabled router to use the Cisco Discovery Protocol (CDP), and you also enable CDP on the Steelhead appliance. CDP is a protocol used by Cisco routers and switches to obtain neighbor IP addresses, models, IOS versions, and so forth. The protocol runs at OSI Layer 2 using the Ethernet frame.

NOTE: CDP must be enabled on the Steelhead appliance that is used in the PBR deployment. You enable CDP using the in-path cdp enable CLI command. For details, see the Riverbed Command-Line Interface Reference Manual.

CDP enables Steelhead appliances to provide automatic failover for PBR deployments. You configure the Steelhead appliance to send out CDP frames. The PBR-enabled router uses these frames to determine whether the Steelhead appliance is operational. If the Steelhead appliance is not operational, the PBR-enabled router stops receiving the CDP frames, and PBR stops switching traffic to the Steelhead appliance. The Steelhead appliance must be physically connected to the PBR-enabled router for CDP to send frames. If a switch or other Layer-2 device is located between the PBR-enabled router and the Steelhead appliance, CDP frames cannot reach the router. If the CDP frames do not reach the router, the router assumes the Steelhead appliance is not operational.
NOTE: CDP is not supported as a failover mechanism on all Cisco platforms. For details about whether your Cisco device supports this feature, refer to your router documentation.

To enable CDP on the Steelhead appliance

1. Connect to the Riverbed CLI on the Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path cdp enable
write memory
restart

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

To enable CDP failover on the router

On the PBR router, at the system prompt, use the set ip next-hop verify-availability command. For details, refer to your router documentation.

NOTE: ICMP and HTTP GET can also be used to track whether the PBR next-hop IP address is available.

PBR Failover Process

When you configure the set ip next-hop verify-availability Cisco router command, PBR sends a packet in the following manner: PBR checks the CDP neighbor table to verify that the PBR next-hop IP address is available. If the PBR next-hop IP address is available, PBR sends an ARP request for the address, obtains an answer for it, and redirects traffic to the PBR next-hop IP address (the Steelhead appliance). PBR continues sending traffic to the next-hop IP address as long as the ARP requests obtain answers for it. If an ARP request fails to obtain an answer, PBR checks the CDP table. If there is no entry in the CDP table, PBR stops using the route map to send traffic. This verification provides a failover mechanism.

NOTE: A Cisco 6500 router and switch combination that is configured in hybrid mode does not support PBR with CDP; for PBR with CDP to work, you must use a native setup. The hybrid configuration fails because all routing is performed on the MSFC, and the MSFC card is treated as an independent system in a hybrid setup. Therefore, when you run the show cdp neighbors Cisco command on the MSFC, it displays the supervisor card as its only neighbor, and PBR does not see the devices that are connected to the switch ports. As a result, PBR does not redirect any traffic for route maps that use the set ip next-hop verify-availability Cisco command. For details, refer to your router documentation.

More recent Cisco IOS software versions support a feature called Object Tracking.
In addition to the older method of using CDP information, Object Tracking allows the use of methods such as HTTP GET and ping to determine whether the PBR next-hop IP address is available.

NOTE: Object Tracking is not available on all Cisco devices. For details about whether your Cisco device supports this feature, refer to your router documentation.
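As an illustration of the Object Tracking approach, a route map can tie next-hop verification to a tracked ICMP probe. The following fragment is a sketch only; the addresses, ACL number, and object numbers are placeholders, and the exact syntax varies by Cisco IOS version and platform, so refer to your router documentation:

```
ip sla 10
 icmp-echo 10.1.1.5
ip sla schedule 10 life forever start-time now
!
track 101 ip sla 10 reachability
!
route-map riverbed permit 10
 match ip address 101
 set ip next-hop verify-availability 10.1.1.5 10 track 101
```

With this configuration, PBR redirects to the next hop only while the tracked probe succeeds, which avoids blackholing traffic if the appliance fails.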

Connecting the Steelhead Appliance in a PBR Deployment

Two Ethernet cables are attached to the Steelhead appliance in PBR deployments:

A straight-through cable to the primary interface. You use this connection to manage the Steelhead appliance, reaching it through HTTPS or SSH.
A straight-through cable to the WAN0_0 interface if you are connecting to a switch, or a crossover cable to the WAN0_0 interface if you are connecting to a router.

You assign an IP address to the in-path interface; this is the IP address that you redirect traffic to (the target of the router PBR rule).

Configuring PBR

This section describes how to configure PBR and provides example deployments. It includes the following sections:

Configuring PBR Overview, next
Steelhead Appliance Directly Connected to the Router, on page 93
Steelhead Appliance Connected to a Layer-2 Switch with a VLAN to the Router, on page 94
Steelhead Appliance Connected to a Layer-3 Switch, on page 96

Configuring PBR Overview

You can use access lists to specify which traffic is redirected to the Steelhead appliance. Traffic that is not specified in the access list is switched normally. If you do not have an access list, or if your access list is not correctly configured in the route map, traffic is not redirected. For details about access lists, see Configuring Access Lists on page 81.

IMPORTANT: Riverbed recommends that you define a policy based on the source or destination IP address rather than on the TCP source or destination ports, because certain protocols use dynamic ports instead of fixed ones.

Steelhead Appliance Directly Connected to the Router

The following figure illustrates a Steelhead appliance deployment in which the Steelhead appliance is configured with PBR and is directly connected to the router.

Figure 6-1. Steelhead Appliance Directly Connected to the Router

In this example:

The router fastethernet0/0 interface is attached to the Layer-2 switch.
The router fastethernet0/1 interface is attached to the Steelhead appliance.
A single Steelhead appliance is configured. You can add more Steelhead appliances using the same method as for the first Steelhead appliance.

NOTE: Although the primary interface is not included in this example, Riverbed recommends, as a best practice, that you connect the primary interface for management purposes. For details about configuring the primary interface, see the Steelhead Management Console User's Guide.

To configure the Steelhead appliance

1. Connect to the Riverbed CLI on the Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path enable
in-path oop enable
interface in-path0_0 ip address /24
ip in-path-gateway inpath0_0
write memory
restart

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

To configure the PBR router

On the PBR router, at the system prompt, enter the following set of commands:

enable
configure terminal
route-map riverbed
match ip address 101
set ip next-hop
exit
ip access-list extended 101
permit tcp any
permit tcp any
exit
interface fa0/0
ip policy route-map riverbed
interface S0/0
ip policy route-map riverbed
exit
exit
write memory

TIP: Enter configuration commands, one per line. Enter Ctrl-Z to end the configuration.

Steelhead Appliance Connected to a Layer-2 Switch with a VLAN to the Router

The following figure illustrates a Steelhead appliance deployment in which the Steelhead appliance is configured with PBR and is connected to the router through a switch. This deployment also has a trunk between the switch and the router.

Figure 6-2. Steelhead Appliance Connected to a Layer-2 Switch with a VLAN

In this example:

The switch logically separates the server and the Steelhead appliance by placing the server on VLAN 10 and the Steelhead appliance on VLAN 20.
The router fastethernet0/1 interface is attached to the Layer-2 switch.
The router performs inter-VLAN routing; that is, the router switches packets from one VLAN to the other.
The link between the router and the switch is configured as a dot1q trunk to transport traffic from multiple VLANs.

NOTE: Although the primary interface is not included in this example, Riverbed recommends, as a best practice, that you connect the primary interface for management purposes. For details about configuring the primary interface, see the Steelhead Management Console User's Guide.

To configure the Steelhead appliance

1. Connect to the Riverbed CLI on the Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path enable
in-path oop enable
interface in-path0_0 ip address /24
ip in-path-gateway inpath0_0
write memory
restart

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

To configure the PBR router

On the PBR router, at the system prompt, enter the following set of commands:

enable
configure terminal
route-map riverbed
match ip address 101
set ip next-hop
exit
ip access-list extended 101
permit tcp any
permit tcp any
exit
interface fa0/1.10
encapsulation dot1q 10
ip address
interface fa0/1.20
encapsulation dot1q 20
ip address

exit
interface fa0/1.10
ip policy route-map riverbed
interface S0/0
ip policy route-map riverbed
exit
exit
write memory

TIP: Enter configuration commands, one per line. Enter Ctrl-Z to end the configuration.

NOTE: In this example, it is assumed that both the Steelhead appliance and the server are connected to the correct VLAN, and that these VLAN connections are established through the switch port configuration on the Layer-2 switch.

Steelhead Appliance Connected to a Layer-3 Switch

The following figure illustrates a Steelhead appliance deployment in which the Steelhead appliance is configured with PBR and is directly connected to a Layer-3 switch.

Figure 6-3. Steelhead Appliance Connected to a Layer-3 Switch

In this example:

The Layer-3 switch fastethernet0/0 interface is attached to the server, and is on VLAN 10.
The Layer-3 switch fastethernet0/1 interface is attached to the Steelhead appliance, and is on VLAN 20.
A single Steelhead appliance is configured. More appliances can be added using the same method as for the first Steelhead appliance.

NOTE: Although the primary interface is not included in this example, Riverbed recommends, as a best practice, that you connect the primary interface for management purposes. For details about configuring the primary interface, see the Steelhead Management Console User's Guide.

To configure the Steelhead appliance

1. Connect to the Riverbed CLI on the Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path enable
in-path oop enable
in-path cdp enable
interface in-path0_0 ip address /24
ip in-path-gateway inpath0_0
write memory
restart

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

To configure the Layer-3 switch

On the Layer-3 switch, at the system prompt, enter the following set of commands:

enable
configure terminal
route-map riverbed
match ip address 101
set ip next-hop
set ip next-hop verify-availability
exit
ip access-list extended 101
permit tcp any
permit tcp any
exit
interface vlan 10
ip address
ip policy route-map riverbed
interface vlan 20
ip address
interface S0/0
ip policy route-map riverbed
exit
exit
write memory

TIP: Enter configuration commands, one per line. Enter Ctrl-Z to end the configuration.

NetFlow and Virtual In-Path Deployments

In virtual in-path deployments, such as PBR, traffic moves in and out of the same WAN0_0 interface. The LAN interface is not used. As a result, when the Steelhead appliance exports data to a NetFlow collector, all traffic has the WAN0_0 interface index. This makes it impossible for an administrator to use the interface index to distinguish between LAN-to-WAN and WAN-to-LAN traffic. You can configure the fake index feature on your Steelhead appliance to insert the correct interface index before exporting data to a NetFlow collector. For details, see Configuring NetFlow in Virtual In-Path Deployments on page 56.

CHAPTER 7  PFS Deployments

In This Chapter

This chapter describes PFS and the basic steps for configuring it. It includes the following sections:

Overview of PFS, next
Upgrading V2.x PFS Shares, on page 101
Domain and Local Workgroup Settings, on page 102
PFS Share Operating Modes, on page 103
Configuring PFS, on page 105

This chapter assumes you are familiar with:

The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.
The Steelhead Management Console. For details, see the Steelhead Management Console User's Guide.

Overview of PFS

This section describes PFS and how it works. It includes the following sections:

When to Use PFS, next
PFS Terms, on page 100

PFS is an integrated virtual file server that allows you to store copies of files on the Steelhead appliance with Windows file access, creating several options for transmitting data between remote offices and centralized locations with improved performance. Data is configured into file shares that are periodically synchronized transparently in the background, over the optimized connection of the Steelhead appliance. PFS leverages the integrated disk capacity of the Steelhead appliance to store file-based data in a format that allows it to be retrieved by NAS clients.

NOTE: PFS is supported on Steelhead appliance models 1010, 1020, 1050, 1520, 2010, 2011, 2020, 2050, 250, 2510, 2511, 3010, 3020, 3030, 3510, 3520, 5010, 5050, 520, 550, and

When to Use PFS

Before you configure PFS, evaluate whether it is suitable for your network needs. The advantages of using PFS are:

LAN access to data residing across the WAN. File access performance is improved between central and remote locations. PFS creates an integrated file server, enabling clients to access data directly from the proxy filer on the LAN instead of over the WAN. Transparently in the background, data on the proxy filer is synchronized with data from the origin file server over the WAN.

Continuous access to files in the event of WAN disruption. PFS provides support for disconnected operations. In the event of a network disruption that prevents access over the WAN to the origin server, files can still be accessed on the local Steelhead appliance.

Simple branch infrastructure and backup architectures. PFS consolidates file servers and local tape backup from the branch into the data center. PFS enables a reduction in the number and size of backup windows running in complex backup architectures.

Automatic content distribution. PFS provides a means for automatically distributing new and changed content throughout a network.

If any of these advantages can benefit your environment, then enabling PFS in the Steelhead appliance is appropriate. However, PFS requires pre-identification of files and is not appropriate in environments in which there is concurrent read-write access to data from multiple sites:

Pre-identification of PFS files. PFS requires that files accessed over the WAN are identified in advance. If the data set accessed by the remote users is larger than the specified capacity of your Steelhead appliance model, or if it cannot be identified in advance, end users must access the origin server directly through the Steelhead appliance without PFS. (This configuration is also known as Global mode.)

Concurrent read-write data access from multiple sites.
In a network environment where users from multiple branch offices update a common set of centralized files and records over the WAN, the Steelhead appliance without PFS is the most appropriate solution, because file locking is handled directly between the client and the server. The Steelhead appliance always consults the origin server in response to a client request; it never provides a proxy response or data from its data store without consulting the origin server.

PFS Terms

The following terms are used to describe PFS processes and devices.

Proxy File Server - A virtual file server that resides on the Steelhead appliance and provides Windows file access (with ACLs) capability at a branch office on the LAN. The proxy file server is populated over an optimized WAN connection with data from the origin server.

Origin File Server - A server located in the data center that hosts the origin data volumes.

Domain Mode - A PFS configuration in which the Steelhead appliance joins a Windows domain (typically your company domain) as a member.

Domain Controller (DC) - The host that provides user login service in the domain. (Typically, with Windows 2000 Active Directory Service domains, given a domain name, the system automatically retrieves the DC name.)

Local Workgroup Mode - A PFS configuration in which you define a workgroup and add individual users who have access to the PFS shares on the Steelhead appliance.

Share - The data volume exported from the origin server to the remote Steelhead appliance. IMPORTANT: The PFS share and the origin-server share name cannot contain Unicode characters. The Management Console does not support Unicode characters.

Local Name - The name that you assign to a share on the Steelhead appliance. This is the name by which users identify and map a share. IMPORTANT: The PFS share and the origin-server share name cannot contain Unicode characters. The Management Console does not support Unicode characters.

Remote Path - The path to the data on the origin server, or the UNC path of a share you want to make available to PFS.

Share Synchronization - The process by which data on the proxy file server is synchronized with the origin server. Synchronization runs periodically in the background, based on your configuration. You can configure the Steelhead appliance to refresh the data automatically at an interval you specify, or manually at any time. There are two levels of synchronization: Incremental Synchronization, in which only new and changed data are sent between the proxy file server and the origin file server; and Full Synchronization, in which a full directory comparison is performed and everything changed since the last full synchronization is sent between the proxy file server and the origin file server.

Upgrading V2.x PFS Shares

By default, when you configure PFS shares with Steelhead appliance software v3.x and higher, you create v3.x PFS shares. PFS shares configured with Steelhead appliance software v2.x are v2.x shares.
V2.x shares are not upgraded when you upgrade the Steelhead appliance software. If you have shares created with v2.x software, Riverbed recommends that you upgrade them to v3.x shares in the Management Console. If you upgrade any v2.x shares, you must upgrade all of them. Once you have upgraded shares to v3.x, you should create only v3.x shares.

If you do not upgrade your v2.x shares:

Do not create v3.x shares.

Install and start the RCU on the origin server or on a separate Windows host with write access to the data PFS uses. The account that starts the RCU must have write permissions to the folder on the origin file server that contains the data PFS uses. You can download the RCU from the Riverbed Technical Support site. For details, see the Riverbed Copy Utility Reference Manual.
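The difference between incremental and full synchronization (see Share Synchronization under PFS Terms) can be sketched as follows. This Python sketch is purely illustrative: PFS's actual synchronization mechanism is internal to the appliance, and the file-tree logic here is an invented analogy, not Riverbed's implementation:

```python
import os
import shutil

def incremental_sync(src: str, dst: str, last_sync: float) -> list:
    """Incremental: copy only files modified since the last synchronization.
    Deletions and directory moves on the source are NOT reflected."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > last_sync:
                rel = os.path.relpath(path, src)
                target = os.path.join(dst, rel)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(path, target)
                copied.append(rel)
    return sorted(copied)

def full_sync(src: str, dst: str) -> list:
    """Full: a complete directory comparison, so files deleted or moved
    on the source also disappear from the destination."""
    if os.path.isdir(dst):
        shutil.rmtree(dst)  # drop stale files removed or moved on the source
    shutil.copytree(src, dst)
    return sorted(os.listdir(dst))
```

This is why the guide recommends periodic full synchronization: only the full pass removes files that were deleted or moved on the origin server.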

In Steelhead appliance software v3.x and higher, you do not need to install the RCU service on the server for synchronization purposes; all RCU functionality has been moved to the Steelhead appliance.

Configure domain settings, not workgroup settings, as described in Domain and Local Workgroup Settings, next. Domain mode supports v2.x PFS shares, but Workgroup mode does not. For details, see the Steelhead Management Console User's Guide.

Domain and Local Workgroup Settings

When you configure your PFS Steelhead appliance, set either domain or local workgroup settings.

Domain Mode

In Domain mode, you configure the PFS Steelhead appliance to join a Windows domain (typically, your company's domain). When you configure the Steelhead appliance to join a Windows domain, you do not have to manage local accounts in the branch office, as you do in Local Workgroup mode. Domain mode allows a DC to authenticate users accessing its file shares. The DC can be located at the remote site or over the WAN at the main data center. The Steelhead appliance must be configured as a Member Server in the Windows 2000, or later, ADS domain. Domain users are allowed to access the PFS shares based on the access permission settings provided for each user. Data volumes at the data center are configured explicitly on the proxy file server and are served locally by the Steelhead appliance. As part of the configuration, the data volume and ACLs from the origin server are copied to the Steelhead appliance. PFS allocates a portion of the Steelhead appliance data store for users to access as a network file system.

Before you enable Domain mode in PFS:

Configure the Steelhead appliance to use NTP to synchronize the time. For details, see the Steelhead Management Console User's Guide.

Configure the DNS server correctly. The configured DNS server must be the same DNS server to which all the Windows client machines point.

Have a fully qualified domain name for which PFS is configured.
This domain name must be the domain name for which all the Windows desktop machines are configured.

Set the owner of all files and folders in all remote paths to a domain account and not to a local account.

NOTE: PFS supports only domain accounts on the origin file server; PFS does not support local accounts on the origin file server. During an initial copy from the origin file server to the PFS Steelhead appliance, if PFS encounters a file or folder with permissions for both domain and local accounts, only the domain account permissions are preserved on the Steelhead appliance. For details about how ACLs are propagated from the origin server to a PFS share, refer to the Riverbed Technical Support site.

Local Workgroup Mode

In Local Workgroup mode you define a workgroup and add individual users that have access to the PFS shares on the Steelhead appliance. Use Local Workgroup mode in environments where you do not want the Steelhead appliance to be a part of a Windows domain. Creating a workgroup eliminates the need to join a Windows domain and vastly simplifies the PFS configuration process.

NOTE: If you use Local Workgroup mode, you must manage the accounts and permissions for the branch office on the Steelhead appliance. The local workgroup account permissions might not match the permissions on the origin file server.

PFS Share Operating Modes

PFS provides Windows file service in the Steelhead appliance at a remote site. When you configure PFS, you specify an operating mode for each individual file share on the Steelhead appliance. The proxy file server can export data volumes in Local mode, Broadcast mode, and Stand-Alone mode. After the Steelhead appliance receives the initial copy of the data and ACLs, shares can be made available to local clients. In Broadcast and Local mode only, shares on the Steelhead appliance are periodically synchronized with the origin server at intervals you specify, or manually if you choose. During the synchronization process, the Steelhead appliance optimizes this traffic across the WAN.

The following modes are available:

- Broadcast Mode. Use Broadcast mode for environments seeking to broadcast a set of read-only files to many users at different sites. Broadcast mode quickly transmits a read-only copy of the files from the origin server to your remote offices. The PFS share on the Steelhead appliance contains read-only copies of files on the origin server. The PFS share is synchronized from the origin server according to parameters you specify when you configure it. However, files deleted on the origin server are not deleted on the Steelhead appliance until you perform a full synchronization.
Additionally, if you perform directory moves on the origin server (for example, moving .\dir1\dir2 to .\dir3\dir2) regularly, incremental synchronization does not reflect these directory changes. In this case, you must perform a full synchronization frequently to keep the PFS shares in synchronization with the origin server.

- Local Mode. Use Local mode for environments that need to efficiently and transparently copy data created at a remote site to a central data center, perhaps where tape archival resources are available to back up the data. Local mode enables read-write access at remote offices to update files on the origin file server. After the PFS share on the Steelhead appliance receives the initial copy from the origin server, the PFS share copy of the data becomes the master copy. New data generated by clients is synchronized from the Steelhead appliance copy to the origin server based on parameters you specify when you configure the share. The folder on the origin server essentially becomes a backup folder of the share on the Steelhead appliance. If you use Local mode, users must not directly write to the corresponding folder on the origin server.

CAUTION: In Local mode, the Steelhead appliance copy of the data is the master copy; do not make changes to the shared files from the origin server while in Local mode. Changes are propagated from the remote office hosting the share to the origin server.

IMPORTANT: Riverbed recommends that you do not use Windows file shortcuts if you use PFS. For detailed information, contact Riverbed Technical Support.

- Stand-Alone Mode. Use Stand-Alone mode for network environments where it is more effective to maintain a separate copy of files that are accessed locally by the clients at the remote site. The PFS share also creates additional storage space. The PFS share on the Steelhead appliance is a one-time, working copy of data copied from the origin server. You can specify a remote path to a directory on the origin server, creating a copy at the branch office. Users at the branch office can read from or write to stand-alone shares, but there is no synchronization back to the origin server because a stand-alone share is an initial, one-time-only synchronization.

Figure 7-1. PFS Deployment

Lock Files

When you configure a v3.x Local mode share or any v2.x share (except a Stand-Alone share in which you do not specify a remote path to a directory on the origin server), a text file (._rbt_share_lock.txt) that keeps track of which Steelhead appliance owns the share is created on the origin server. Do not remove this file. If you remove the ._rbt_share_lock.txt file on the origin file server, PFS does not function properly. (v3.x Broadcast and Stand-Alone shares do not create these files.)
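The guide does not document the contents of ._rbt_share_lock.txt, so the sketch below only illustrates the generic ownership-lock pattern the file serves: record which appliance owns a share, and refuse a competing claim while the lock exists. The appliance IDs, file name, and logic here are all illustrative, not the actual PFS implementation.

```python
# Generic ownership-lock pattern (illustrative, not the real PFS file format):
# the first appliance to claim a share records its identity; a different
# appliance attempting to claim the same share is refused.
import os
import tempfile

def claim_share(lock_path, appliance_id):
    """Claim share ownership; fail if another appliance already owns it."""
    if os.path.exists(lock_path):
        with open(lock_path) as f:
            owner = f.read().strip()
        if owner != appliance_id:
            raise RuntimeError(f"share already owned by {owner}")
        return owner  # re-claim by the same owner is harmless
    with open(lock_path, "w") as f:
        f.write(appliance_id)
    return appliance_id

lock = os.path.join(tempfile.mkdtemp(), "._example_share_lock.txt")
print(claim_share(lock, "steelhead-branch-1"))   # first claim succeeds
try:
    claim_share(lock, "steelhead-branch-2")      # a second appliance is refused
except RuntimeError as exc:
    print(exc)
```

Deleting the lock file would silently erase the ownership record, which is the analogue of why removing ._rbt_share_lock.txt breaks PFS.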

Configuring PFS

The following section describes Steelhead appliance requirements for configuring PFS, and basic steps for configuring PFS shares using the Management Console. For details, see the Steelhead Management Console User's Guide.

Configuration Requirements

This section describes prerequisites and tips for using PFS:

- Before you enable PFS, configure the Steelhead appliance to use NTP to synchronize the time. To use PFS, the Steelhead appliance and DC clocks must be synchronized. For details, see the Steelhead Management Console User's Guide.
- The PFS Steelhead appliance must run the same version of the Steelhead appliance software as the server-side Steelhead appliance.
- PFS traffic to and from the Steelhead appliance travels through the Primary interface. PFS requires that the Primary interface is connected to the same switch as the LAN interface. For details, see the Steelhead Appliance Installation and Configuration Guide.
- The PFS share and origin-server share names cannot contain Unicode characters; the Management Console does not support Unicode characters.
- Ensure that the name of the Steelhead appliance is entered into your DNS server, and that a host record exists for it. The Steelhead appliance name should resolve to either your Primary or your Auxiliary interface. Failure to resolve the Steelhead appliance name results in an inability to join a Windows 2000 or 2003 domain.

Basic Steps

Perform the following basic steps on the client-side Steelhead appliance to configure PFS.

NOTE: For the server-side Steelhead appliance, you need only verify that it is intercepting and optimizing connections. No configuration is required for the server-side Steelhead appliance.

1. Configure the Steelhead appliance to use NTP to synchronize the time in the Management Console. For details, see the Steelhead Management Console User's Guide.

2. Navigate to the Configure - Branch Services - PFS Settings page:
  - Enable PFS.
  - Restart the optimization service.
  - Configure either domain or local workgroup settings, as described in Domain and Local Workgroup Settings on page 102.
  - If you configured domain settings, join a domain. If you configured local workgroup settings, join a workgroup.

NOTE: To join a domain, the Windows domain account must have the correct privileges to perform a join domain operation.

  - Start PFS.
  - Optionally, configure additional PFS settings such as security signature settings, the number of minutes after which to time out idle connections, and the local administrator password.

3. Create and manage PFS shares in the Configure - Branch Services - PFS Shares page.

4. Configure PFS share details in the Configure - Branch Services - PFS Shares Details page:
  - Enable and synchronize PFS shares. If you have v2.x PFS shares (created by Steelhead appliance software v2.x), upgrade them to v3.x shares. By default, Steelhead appliance software v3.x and higher creates v3.x shares, which you do not need to upgrade.
  - Optionally, modify PFS share settings.
  - Optionally, perform manual actions such as full synchronization, cancelling an operation, and deleting shares.

For details, see the Steelhead Management Console User's Guide.

CHAPTER 8 Protocol Optimization in the Steelhead Appliance

In This Chapter

This chapter introduces and describes the basic steps for configuring Steelhead appliance protocol optimization. It includes the following sections:

- CIFS Optimization, next
- HTTP Optimization on page 108
- MAPI Optimization on page 110
- MS-SQL Optimization on page 111
- NFS Optimization on page 111
- SSL Optimization on page 113

This chapter assumes you are familiar with:

- CIFS, HTTP, MAPI, MS-SQL, NFS, and SSL protocols.
- The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.
- The Management Console. For details about the Management Console and how to use it, see the Steelhead Management Console User's Guide.
- The RiOS CLI. For details about the Steelhead appliance CLI and how to use it, see the Riverbed Command-Line Interface Reference Manual.

By default, Steelhead appliances optimize CIFS and MAPI protocols. You can also configure Steelhead appliances to optimize MS-SQL, NFS, and SSL protocols.

CIFS Optimization

CIFS optimization is enabled by default. Typically, you disable CIFS optimization only to troubleshoot the system. For details about disabling and configuring CIFS optimization, see the Steelhead Management Console User's Guide or the Riverbed Command-Line Interface Reference Manual.

For details about CIFS configuration to work around SMB-signed sessions, see Server Message Block Signed Sessions on page 173.

HTTP Optimization

A typical Web page is not a single file that is downloaded all at once. Instead, Web pages are composed of dozens of separate objects, including .jpg and .gif images, JavaScript code, cascading style sheets, and more, each of which must be requested and retrieved separately, one after the other. Given the presence of latency, this behavior is highly detrimental to the performance of Web-based applications over the WAN. The higher the latency, the longer it takes to fetch each individual object and, ultimately, to display the entire page.

HTTP optimization works for most HTTP and HTTPS applications, including SAP, Customer Relationship Management, Enterprise Resource Planning, Financials, Document Management, and Intranet portals.

RiOS v5.0.x and later optimizes HTTP applications using:

- Parsing and Prefetching of Dynamic Content. The Steelhead appliance includes a specialized algorithm that allows it to determine which objects are requested for a given Web page and prefetch them so that they are readily available when the client makes its requests. Parse and Prefetch essentially reads a page, finds HTML tags that it recognizes as containing a prefetchable object, and sends out prefetch requests for those objects. Typically, a client would need to request the base page, parse it, and then send out requests for each of these objects. This still occurs, but with Parse and Prefetch the Steelhead appliance has quietly perused the page before the client receives it and has already sent out the requests. This allows it to serve the objects as soon as the client requests them, rather than forcing the client to wait on a slow WAN link. For example, when an HTML page contains the tag <img src=my_picture.gif>, the Steelhead appliance prefetches the image my_picture.gif because, by default, it parses an img tag with an attribute of src.
The HTML tags that are prefetched by default are: base/href, body/background, img/src, link/href, and script/src. You can add additional object types to be prefetched.

- URL Learning. The Steelhead appliance eliminates redundancies between successive downloads of content that is dynamically generated on a Web page, such as SAP and CRM transactions. Instead of saving each object transaction, the Steelhead appliance saves only the request URL of object transactions in a Knowledge Base and then generates APT transactions from the list. This feature uses the referer header field to generate relationships between object requests and the base HTML page that referenced them, and to group embedded objects. This information is stored in an internal HTTP database. The following objects are retrieved by default: .gif, .jpg, .css, .js, and .png. You can add additional object types to be retrieved.

- Removal of Unfetchable Objects. Removes unfetchable objects from the URL Learning Knowledge Base.

- HTTP Metadata Responses. The Steelhead appliance stores metadata responses from HTTP GET requests for cascading style sheets, static images, and JavaScript files.
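The parse step of Parse and Prefetch can be sketched in a few lines: scan a page for the default tag/attribute pairs listed above and collect the object URLs that would be requested ahead of the client. This is a simplified, standalone illustration of the concept, not Riverbed's actual parser.

```python
# Toy illustration of HTTP "Parse and Prefetch": scan an HTML page for the
# tag/attribute pairs prefetched by default and collect the object URLs that
# would be requested ahead of the client. Simplified sketch only.
from html.parser import HTMLParser

# tag -> attribute holding a prefetchable object URL (defaults from the guide)
PREFETCH_ATTRS = {
    "base": "href",
    "body": "background",
    "img": "src",
    "link": "href",
    "script": "src",
}

class PrefetchParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        wanted = PREFETCH_ATTRS.get(tag)
        for name, value in attrs:
            if name == wanted and value:
                self.urls.append(value)

page = ('<html><body background="bg.gif"><img src="my_picture.gif">'
        '<script src="app.js"></script></body></html>')
parser = PrefetchParser()
parser.feed(page)
print(parser.urls)   # ['bg.gif', 'my_picture.gif', 'app.js']
```

In the real system these requests are issued by the appliance across the WAN before the browser asks, so the objects can be served locally on demand.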

- Persistent Connections. The Steelhead appliance uses an existing TCP connection between a client and a server to prefetch objects from the Web server that it determines are about to be requested by the client. Many Web browsers open multiple TCP connections to the Web server when requesting embedded objects. Typically, each of these TCP connections goes through a lengthy authentication dialogue before the browser might request and receive objects from the Web server on that connection. NTLM is a Microsoft authentication protocol which employs a challenge-response mechanism for authentication, in which clients are required to prove their identities without sending a password to a server. NTLM requires the transmission of three messages between the client (wanting to authenticate) and the server (requesting authentication). Because these authentication dialogues are time consuming, if your Web servers require NTLM authentication you can configure your Steelhead appliance to reuse existing NTLM-authenticated connections to avoid unnecessarily authenticating extra connections.

All HTTP optimization features are driven by the client-side Steelhead appliance. The client-side Steelhead appliance sends the prefetched information to the server-side Steelhead appliance. Prefetched data and metadata responses are served from the client-side Steelhead appliance upon request from the browser.

You can set up an optimization scheme that applies to all HTTP traffic, or create individual schemes for each server subnet. Therefore, you can configure an optimization scheme that includes your choice of prefetch optimizations for one range of server addresses, encompassing as large a network as you need, from a single address to all possible addresses.

The following situations might affect HTTP optimization:

- Fat Client. Not all applications accessed through a Web browser utilize the HTTP protocol.
This is especially true for fat clients that run inside a Web browser, which may use proprietary protocols to communicate with a server. HTTP optimization does not improve performance in such cases.

- Digest Authentication. Some Web servers might require users to authenticate themselves before allowing them access to certain Web content. Digest authentication is one of the less popular authentication schemes, although it is still supported by most Web servers and browsers. Digest authentication requires the browser to include a secret value which only the browser and server know how to generate and decode. Because the Steelhead appliance cannot generate these secret values, it cannot prefetch objects protected by Digest authentication and does not improve performance for applications using this authentication scheme.

- Object Authentication. It is uncommon for Web servers to require separate authentication for each object requested by the client, but occasionally Web servers are configured to use per-object authentication. In such cases, HTTP optimization does not improve HTTP performance.

NOTE: HTTP optimization has been tested on the following browsers: Internet Explorer v5.5 or later, Firefox v1.5 or later, and Netscape Communicator v6.2. HTTP optimization has been tested on the following servers: Apache v1.3, Apache v2.2, Microsoft IIS v5.0, and Microsoft IIS v6.0.

Basic Steps

The following procedures summarize the basic steps for configuring HTTP optimization.

1. Enable HTTP optimization for prefetching Web objects. This is the default setting.

2. Enable strip compression. This is the default setting. Strip compression enables the HTTP blade to remove the Accept-Encoding lines from the HTTP header that contain gzip or deflate. These Accept-Encoding directives allow Web browsers and servers to send and receive compressed content rather than raw HTML.

3. Specify object extensions that represent prefetched objects for URL Learning. By default, the Steelhead appliance prefetches .jpg, .gif, .js, .png, and .css objects.

4. Select Insert Keep Alive to maintain persistent connections. Often this feature is turned off even though the Web server can support it. This is especially true for Apache Web servers that serve HTTPS to Microsoft Internet Explorer browsers.

5. Enable cookies to track repeat requests from the client.

6. Optionally, specify which HTML tags to prefetch for Parse and Prefetch. By default, the Steelhead appliance prefetches base/href, body/background, img/src, link/href, and script/src HTML tags.

7. Optionally, set an HTTP optimization scheme for each server subnet. For example, an optimization scheme can include a combination of the URL Learning, Parse and Prefetch, or metadata response features. The default setting is URL Learning only.

8. If necessary, define in-path rules that specify when to apply HTTP optimization and whether to enable HTTP latency support for HTTPS.

NOTE: In order for the Steelhead appliance to optimize HTTPS traffic (HTTP over SSL), you must configure a specific in-path rule that enables both SSL optimization and HTTP optimization.

9. Click the Save icon to save your settings permanently.

10. View and monitor HTTP statistics in the Management Console Reports - Optimization - HTTP Statistics page.

MAPI Optimization

MAPI optimization is enabled by default. However, by default, MAPI 2007 native optimizations are not enabled. If you have Outlook 2007 and Exchange 2007 in your environment, you need to enable MAPI 2007 optimizations explicitly from the Steelhead Management Console.
Typically, you only disable MAPI optimization to troubleshoot the system.

For MAPI latency optimizations, traffic must be unencrypted. This can be done on the Outlook client or, on a wider scale, by applying a group policy. For details, see the Riverbed Knowledge Base article Disabling Outlook Encryption.

For details about disabling and configuring MAPI optimization, see the Steelhead Management Console User's Guide or the Riverbed Command-Line Interface Reference Manual.

MS-SQL Optimization

MS-SQL optimization improves optimization for Microsoft Project. For details, see the Steelhead Management Console User's Guide.

You can also use MS-SQL protocol optimization to optimize other database applications, but you must define SQL rules to obtain maximum optimization. If you are interested in enabling the MS-SQL feature for other database applications, contact Riverbed Professional Services.

NFS Optimization

NFS optimization provides latency optimization improvements for NFS operations by prefetching data, storing it on the client Steelhead appliance for a short amount of time, and using it to respond to client requests. You enable NFS optimization in high-latency environments.

You can configure NFS settings globally for all servers and volumes, or you can configure NFS settings that are specific to particular servers or volumes. When you configure NFS settings for a server, the settings are applied to all volumes on that server unless you override settings for specific volumes.

NOTE: NFS latency optimization is only supported for NFS v3.

For each Steelhead appliance, you specify a policy for prefetching data from NFS servers and volumes. You can set the following policies for NFS servers and volumes:

- Global Read/Write. Choose this policy when the data on the NFS server or volume can be accessed from any client, including LAN clients and clients using other file protocols. This policy ensures data consistency but does not allow for the most aggressive data optimization. Global Read/Write is the default value.
- Custom. Create a custom policy for the NFS server.
- Read-only. Any client can read the data on the NFS server or volume but cannot make changes.

After you add a server, the Management Console includes options to configure volume policies. For detailed information, see the Steelhead Management Console User's Guide.
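The mechanism described above, prefetching data and holding it briefly on the client side to answer repeat requests locally, can be sketched as a small read-ahead cache. The block sizes, TTL, and sequential-read heuristic below are illustrative choices, not RiOS parameters.

```python
# Sketch of the NFS latency-optimization idea: on a cache miss, fetch the
# requested block plus the next few (anticipating sequential reads) and keep
# them briefly, so repeat requests are served locally instead of over the WAN.
# All parameters are illustrative, not actual RiOS settings.
import time

class ReadAheadCache:
    def __init__(self, fetch, ttl=2.0, readahead=2):
        self.fetch = fetch          # function block_no -> data (the "WAN" read)
        self.ttl = ttl              # how long prefetched data stays valid
        self.readahead = readahead  # extra blocks to prefetch past each miss
        self.cache = {}             # block_no -> (data, expiry)
        self.wan_reads = 0

    def read(self, block_no, now=None):
        now = time.monotonic() if now is None else now
        hit = self.cache.get(block_no)
        if hit and hit[1] > now:
            return hit[0]           # served locally, no WAN round trip
        # miss: fetch this block plus the next few
        for b in range(block_no, block_no + 1 + self.readahead):
            self.wan_reads += 1
            self.cache[b] = (self.fetch(b), now + self.ttl)
        return self.cache[block_no][0]

server = lambda b: f"block-{b}"
c = ReadAheadCache(server)
data = [c.read(b, now=0.0) for b in range(3)]  # sequential read of 3 blocks
print(data, c.wan_reads)                       # blocks 1-2 came from prefetch
```

The short TTL is what keeps this consistent with the Global Read/Write policy's goal: stale prefetched data expires quickly rather than masking writes from other clients.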
Implementing NFS Optimization

This section describes the basic steps for using the Management Console to implement NFS optimization. For detailed information, see the Steelhead Management Console User's Guide.

Basic Steps

Perform the following basic steps to configure NFS optimizations.

1. Enable NFS in the Configure - Optimization - NFS page.

Enable NFS on all desired client and server Steelhead appliances.

2. For each client Steelhead appliance you desire, configure NFS settings that apply by default to all NFS servers and volumes. For details, see the Steelhead Management Console User's Guide.

Configure these settings on all desired client Steelhead appliances. These settings are ignored on server Steelhead appliances. If you have enabled NFS optimization (as described in the previous step) on a server Steelhead appliance, NFS configuration information for a connection is uploaded from the client Steelhead appliance to the server Steelhead appliance when the connection is established.

IMPORTANT: If NFS is disabled on a server Steelhead appliance, the appliance does not perform NFS optimization.

3. For each client Steelhead appliance you desire, override global NFS settings for a server or volume that you specify. You do not need to configure these settings on server Steelhead appliances. If you have enabled NFS optimization on a server Steelhead appliance, NFS configuration information for a connection is uploaded from the client Steelhead appliance to the server Steelhead appliance when the connection is established.

If you do not override settings for a server or volume, the global NFS settings are used. If you do not configure NFS settings for a volume, the server-specific settings, if configured, are applied to the volume. If server-specific settings are not configured, the global settings are applied to the server and its volumes.

NOTE: When you configure a prefetch policy for an NFS volume, you specify the desired volume by an FSID number. An FSID is a number NFS uses to distinguish mount points on the same physical file system. Because two mount points on the same physical file system have the same FSID, more than one volume can have the same FSID. For details, see the Steelhead Management Console User's Guide.

4. If you have configured IP aliasing for an NFS server, specify all of the server IP addresses in the Steelhead appliance NFS-protocol settings.

5. View and monitor NFS statistics in the Management Console Reports - Optimization - NFS Statistics page.

Configuring IP Aliasing

If you have configured IP aliasing (multiple IP addresses) for an NFS server, you must specify all of the server IP addresses in the Steelhead appliance NFS protocol settings in order for NFS optimization to work properly.

To configure IP aliasing on a Steelhead appliance

1. In the Management Console, navigate to the Configure - Optimization - NFS page.

2. Click Add New NFS Server to expand the page.

3. In the Name box, type the name of the NFS server.

4. Enter each server IP address in a comma-separated list in the Server IP box.

5. Click Add Server.

SSL Optimization

SSL is a cryptographic protocol which provides secure communications between two parties over a network. Typically in a Web-based application, it is the client that authenticates the server. To identify itself, an SSL certificate is installed on a Web server and the client checks the credentials of the certificate to make sure it is valid and signed by a trusted third party. Trusted third parties that sign SSL certificates are called Certificate Authorities (CAs).

How Does SSL Work?

With Riverbed SSL, Steelhead appliances are configured to have a trust relationship, so they can exchange information securely over an SSL connection. SSL clients and servers communicate with each other exactly as they do without Steelhead appliances; no changes are required for the client and server applications, nor for the configuration of proxies. Riverbed splits up the SSL handshake, the sequence of message exchanges at the start of an SSL connection.

In an ordinary SSL handshake, the client and server first establish identity using public-key cryptography, then negotiate a symmetric session key to be used for data transfer. With Riverbed SSL acceleration, the initial SSL message exchanges take place between the client and the server-side Steelhead appliance. Then the server-side Steelhead appliance sets up a connection to the server, to ensure that the service requested by the client is available. In the last part of the handshake sequence, a Steelhead-to-Steelhead process ensures that both appliances (client-side and server-side) know the session key. The client SSL connection logically terminates at the server but physically terminates at the client-side Steelhead appliance, just as is true for logical versus physical unencrypted TCP connections.
And just as the Steelhead-to-Steelhead TCP connection over the WAN might use a better TCP implementation than the ones used by the client or server, the Steelhead-to-Steelhead connection might be configured to use better ciphers and protocols than the client and server would normally use.

The Steelhead appliance also contains a secure vault which stores all SSL server settings, other certificates (that is, the CA, peering trusts, and peering certificates), and the peering private key. The secure vault protects your SSL private keys and certificates when the Steelhead appliance is not powered on. You set a password for the secure vault which is used to unlock it when the Steelhead appliance is powered on. After rebooting the Steelhead appliance, SSL traffic is not optimized until the secure vault is unlocked with the correct password. For details, see the Steelhead Management Console User's Guide.

Configuring SSL Using the Management Console

This section describes the basic steps for using the Management Console to configure SSL optimization on Steelhead appliances. For details, see the Steelhead Management Console User's Guide.

You can also perform a bulk import or export of SSL server configurations, keys, and certificates. For details, see the Steelhead Management Console User's Guide. Additionally, you can also use the CMC to configure SSL optimization. For details, see the Steelhead Central Management Console User's Guide.

Basic Steps

Perform the following basic steps to configure SSL.

1. Enable SSL support in the Configure - Optimization - General SSL Settings page.

2. Set the SSL secure vault password on the client-side and server-side Steelhead appliances in the Configure - Security - Secure Vault page.

3. On the server-side Steelhead appliance, configure a proxy certificate and private key for the SSL back-end server in the Configure - Optimization - General SSL Settings page and add a new server. This step enables the server-side Steelhead appliance to act as a proxy for the back-end server, which is necessary to intercept the SSL connection and to optimize it.

4. For in-path configurations only. You do not do this step if you have an out-of-path configuration. Client-side in-path rules are not required for SSL optimization with in-path configurations. However, if you want to enable the HTTP latency optimization module for connections to this server, navigate to the Configure - Optimization - In-Path Rules page and add a new in-path rule for the client-side Steelhead appliance. Use the following property values:
  - Latency Optimization Policy - HTTP
  - Use the default settings for the remaining fields.

5. For in-path configurations only. You do not do this step if you have an out-of-path configuration. Navigate to the Configure - Networking - Port Labels page. Remove port 443 from the Secure label list. Click Apply.

6. For out-of-path configurations only. You do not do this step if you have an in-path configuration. Navigate to the Configure - Optimization - In-Path Rules page. Add a new in-path rule for the client-side Steelhead appliance.
The new in-path rule identifies which connections are to be intercepted and applied to SSL optimization. Use the following property values:
  - Type - Fixed-target
  - Destination Subnet/Port - Riverbed recommends that you specify the exact SSL server IP address (as a /32 subnet) and SSL port number.
  - VLAN Tag - All
  - Preoptimization Policy - SSL
  - Optimization Policy - Normal
  - Latency Optimization Policy - HTTP

NOTE: Latency optimization might not always be HTTP, especially for applications that use the SSL protocol but are not HTTP based. In such cases, specify None for the latency optimization policy.

  - Neural Framing Mode - Always

7. Navigate to the Configure - Optimization - SSL Peering page and configure mutual peering trusts so the server-side Steelhead appliance trusts the client-side Steelhead appliance. Use one of the following methods:
  - Auto-discovery of self-signed peering certificates. The peers are automatically discovered upon the first SSL connection and appear in the self-signed peer gray list. You simply mark them as trusted. Both the client-side and server-side Steelhead appliances must use RiOS v5.0.x or later.
  - Add CA-signed peer certificates. Add the certificate of the designated CA as a new trusted entity for each Steelhead appliance.

For production networks with multiple Steelhead appliances, use the CMC or the bulk import and export feature to simplify configuring trusted peer relationships.

TIP: Your organization might choose to replace all of the default self-signed identity certificates and keys on their Steelhead appliances with certificates signed by another CA (either internal to your organization or an external well-known CA). In such cases, every Steelhead appliance must have the certificate of the designated CA (that signed all of the Steelhead appliance identity certificates) added as a new trusted entity.

8. If your organization uses internal CAs to sign their SSL server certificates, navigate to the Configure - Optimization - General SSL Settings page to import each of the certificates (in the chain) on to the server-side Steelhead appliance. You must perform this step if you use internal CAs because the Steelhead appliance default list of well-known CAs (trusted by the server-side Steelhead appliance) does not include your internal CA certificate. To identify the certificate of your internal CA (in some cases, the chain of certificate authorities), go to your Web browser repository of trusted-root or intermediate CAs (for example, Internet Explorer -> Tools -> Internet Options -> Certificates).

9.
On the client-side and server-side Steelhead appliances, navigate to the Configure - Maintenance - Reboot/Shutdown page to restart the Steelhead service.

10. View and monitor SSL server statistics in the Management Console Reports - Optimization - SSL Servers page.

For details about SSL, see the Steelhead Management Console User's Guide.
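The trust decisions in steps 7 and 8 mirror what any TLS stack does: a verifier holds a store of trusted CA certificates, and a presented certificate chain must anchor in that store. The standalone sketch below uses Python's ssl module, not the appliance, to show the concept; the commented file name is hypothetical.

```python
# Concept behind steps 7-8, shown with Python's ssl module (not the appliance):
# a verifying context carries a store of trusted CA certificates, and chains
# that do not anchor in that store fail verification.
import ssl

ctx = ssl.create_default_context()            # loads the platform's well-known CAs
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # default context verifies peers

stats = ctx.cert_store_stats()                # counts of loaded trust material
print("trusted CA certificates loaded:", stats["x509_ca"])

# An internal CA is trusted only after it is explicitly imported -- the
# analogue of step 8's importing of each certificate in the chain:
# ctx.load_verify_locations(cafile="internal-ca-chain.pem")  # hypothetical file
```

On the appliance, importing the internal CA chain in step 8 plays the same role as load_verify_locations here: it extends the default well-known-CA store so the server-side appliance can validate certificates your internal CA signed.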


CHAPTER 9 QoS Configuration and Integration

In This Chapter

This chapter describes how to integrate Steelhead appliances into existing Quality of Service (QoS) architectures, and how to configure Riverbed QoS. Additionally, this chapter describes how to use and configure MX-TCP. It includes the following sections:

- Overview of QoS, next
- Integrating Steelhead Appliances into Existing QoS Architectures on page 118
- Enforcing QoS Policies Using Riverbed QoS on page 122
- Configuring Riverbed QoS on page 131

This chapter assumes you are familiar with:

- The RiOS CLI. For details, see the Riverbed Command-Line Interface Reference Manual.
- The Management Console. For details, see the Steelhead Management Console User's Guide.

Overview of QoS

This section introduces QoS and Riverbed QoS. It includes the following sections:

- Introduction to QoS, next
- Introduction to Riverbed QoS on page 118

Introduction to QoS

QoS is a reservation system for network traffic in which you create QoS classes to distribute network resources. The classes are based on traffic importance, bandwidth needs, and delay-sensitivity. You allocate network resources to each of the classes. Traffic flows according to the network resources allocated to its class. Steelhead appliances enforce QoS policies or co-exist in networks where QoS classification and enforcement is performed outside the Steelhead appliances.

118 Many QoS implementations use some form of Packet Fair Queueing (PFQ), such as Weighted Fair Queueing or Class-Based Weighted Fair Queueing. As long as high-bandwidth traffic requires a high priority (or vice-versa), PFQ systems perform adequately. However, problems arise for PFQ systems when the traffic mix includes high-priority, low-bandwidth traffic, or high-bandwidth traffic that does not require a high priority, particularly when both of these traffic types occur together. Features such as low-latency queueing (LLQ) attempt to address these concerns by introducing a separate system of strict priority queueing that is used for high-priority traffic. However, LLQ is not a principled way of handling bandwidth and latency trade-offs. LLQ is a separate queueing mechanism meant as a work around for PFQ limitations. Introduction to Riverbed QoS The Riverbed QoS system is not based on PFQ, but rather on Hierarchical Fair Service Curve (HFSC). HFSC delivers low latency to traffic without wasting bandwidth and delivers high bandwidth to delay-insensitive traffic without disrupting delay-sensitive traffic. The Riverbed QoS system achieves the benefits of LLQ without the complexity and potential configuration errors of a separate queueing mechanism. The Steelhead appliance HFSC-based QoS enforcement system provides the flexibility needed to simultaneously support varying degrees of delay requirements and bandwidth usage. For example, you can enforce a mix of high-priority, low-bandwidth traffic patterns (for example, SSH, Telnet, Citrix, RDP, and CRM systems) with lower priority, high-bandwidth traffic (for example, FTP, backup, and replication). This allows you to protect delay-sensitive traffic such as VoIP, as well as other delay-sensitive traffic such as RDP and Citrix. You can do this without having to reserve large amounts of bandwidth for the traffic classes. For details, see Configuring Riverbed QoS on page 131. 
Integrating Steelhead Appliances into Existing QoS Architectures This section describes the integration of Steelhead appliances into existing QoS architectures. It includes the following sections: WAN-Side Traffic Characteristics and QoS, next QoS Integration Techniques on page 120 QoS Marking on page 120 When you integrate Steelhead appliances into your QoS architecture you can: Choose whether to enforce QoS in the WAN or WAN infrastructure, or on the Steelhead appliance. Have your optimized connections appear the same way that unoptimized connections appear to the WAN infrastructure. Selectively apply different QoS policies depending on whether a connection is optimized or not. Control the appearance of Steelhead appliance-optimized connections based on the following values of the original TCP connection: DSCP, IP TOS, IP address, port, and VLAN tag. Steelhead appliances enable you to perform the following functions: Retain the original DSCP or IP precedence values QOS CONFIGURATION AND INTEGRATION

119 Choose the DSCP or IP precedence values Retain the original destination TCP port Choose the destination TCP port Retain all of the original IP addresses and TCP ports You do not have to use all of the Steelhead appliance functions on your optimized connections. You can selectively apply functions to different optimized traffic, based on IP addresses, TCP ports, DSCP, VLAN tags, and so on. WAN-Side Traffic Characteristics and QoS When you integrate Steelhead appliances into an existing QoS architecture, it is helpful to understand how optimized and pass-through traffic appear to the WAN, or any WAN-side infrastructure. The following figure illustrates how traffic appears on the WAN when Steelhead appliances are present. Figure 9-1. How Traffic Appears to the WAN When Steelhead Appliances are Present When Steelhead appliances are present in a network: The optimized data for each LAN-side connection is carried on a unique WAN-side TCP connection. The IP addresses, TCP ports, and DSCP or IP precedence values of the WAN connections are determined by the QoS marking configuration, and the Steelhead appliance WAN visibility mode configured for the connection. When you enable Riverbed QoS enforcement, the amount of bandwidth and delay assigned to traffic is determined by the Riverbed QoS enforcement configuration. This applies to both pass-through and optimized traffic. However, this configuration is separate from the WAN traffic appearance features such as QoS marking, and Steelhead appliance WAN visibility modes. For details about WAN visibility modes, see WAN Visibility Modes on page 139. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 119

QoS Integration Techniques

In some networks, QoS policies do not differentiate traffic that is optimized by the Steelhead appliance. For example, because VoIP traffic is passed through the Steelhead appliance, a QoS policy that only gives priority to VoIP traffic, without differentiating among non-VoIP traffic, is unaffected by the introduction of Steelhead appliances. In these networks no QoS configuration changes are needed to maintain the existing policy, because the configuration treats all non-VoIP traffic identically, regardless of whether it is optimized by the Steelhead appliance.

Another example of a network that might not require QoS configuration changes to integrate Steelhead appliances is one where traffic is marked with DSCP or TOS values before reaching the Steelhead appliance, and enforcement is performed after the Steelhead appliances based only on DSCP or TOS. The default Steelhead appliance configuration reflects the DSCP/TOS values from the LAN side to the WAN side of an optimized connection. For example, if the QoS configuration is performed by marking the DSCP values at the source or on LAN-side switches, and enforcement is performed on WAN routers, the WAN routers see the same DSCP values for all classes of traffic, optimized or not.

These examples assume that the post-integration goal is to treat optimized and non-optimized traffic in the same manner with respect to QoS policies; some administrators might want to allocate different network resources to optimized traffic. For details about QoS marking, see QoS Marking on page 120.

In networks where both classification or marking and enforcement are performed on traffic after it passes through the Steelhead appliance, you have several configuration options: In a network where classification and enforcement is based only on TCP ports, you can use port mapping, or the port transparency WAN visibility mode. For details about port transparency, see Port Transparency on page 142.
In a network where classification and enforcement is based on IP addresses, you can use the full address transparency WAN visibility mode. For details about full address transparency, see Full Address Transparency on page 143. For details about WAN visibility modes, see WAN Visibility Modes on page 139.

QoS Marking

This section describes how to use Steelhead appliance QoS marking when integrating Steelhead appliances into an existing QoS architecture. It includes the following sections:

- QoS Marking Default Setting, next
- QoS Marking and Optimized Traffic on page 121
- QoS Marking and Pass-Through Traffic on page 122

Steelhead appliances can retain or alter the DSCP or IP TOS value of both pass-through traffic and optimized traffic. To alter the DSCP or IP TOS value of optimized or pass-through traffic, you create a list that maps which traffic receives a certain DSCP value. The first matching mapping is applied.

121 QoS Marking Default Setting By default, Steelhead appliances reflect the DSCP or IP TOS value found on pass-through traffic and optimized connections. This means that the DSCP or IP TOS value on pass-through traffic is unchanged when it passes through the Steelhead appliance. The following figure illustrates reflected DSCP or IP TOS values seen on a network. Figure 9-2. Reflected DSCP/IP TOS Values QoS Marking and Optimized Traffic For optimized connections, the packets on the WAN-side TCP connection between the Steelhead appliances are marked with the same DSCP or IP TOS value seen on the incoming LAN-side connection. You can control when and how often the Steelhead appliance reads the DSCP/IP TOS value on the LAN-side connection. The Steelhead appliance reads the DSCP/IP TOS value to determine what value to place on the WAN-side connection. The following figure illustrates the DSCP values seen on a network when Steelhead 1 is configured to mark traffic with DSCP value 10, and a connection is initiated at the site where Steelhead 1 resides. Figure 9-3. QoS Marking Applied to Optimized Traffic The connection on the LAN has a DSCP value X. On optimized traffic the DSCP value changes to DSCP 10 when it passes through Steelhead 1. The traffic for the WAN connection has a DSCP value 10. This QoS marking is also seen on the LAN-side of Steelhead 2, and on the WAN-side from Steelhead 2. This is because Steelhead 1 communicates the QoS marking to Steelhead 2 when it creates the optimized connection. Any DSCP value arriving to Steelhead 2 from its LAN is overwritten. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 121
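The marking behavior described above — the first matching mapping wins, and unmatched connections have their LAN-side value reflected onto the WAN side — can be sketched in a few lines of Python. The rule fields, port number, and function name here are invented for illustration and are not a Riverbed interface:

```python
# Illustrative sketch of DSCP marking for optimized connections: a configured
# list of mappings is checked in order, the first match wins, and unmatched
# connections reflect the LAN-side value (the default behavior).

def wan_dscp(conn, marking_rules):
    """Return the DSCP value to place on the WAN-side packets of a connection."""
    for rule in marking_rules:
        if (rule.get("dst_subnet", conn["dst_subnet"]) == conn["dst_subnet"]
                and rule.get("port", conn["port"]) == conn["port"]):
            return rule["dscp"]      # first matching mapping is applied
    return conn["lan_dscp"]          # default: reflect the LAN-side value

# Example: mark traffic to a hypothetical port 3389 with DSCP 10;
# everything else keeps its LAN-side marking.
rules = [{"port": 3389, "dscp": 10}]
print(wan_dscp({"dst_subnet": "site2", "port": 3389, "lan_dscp": 46}, rules))  # 10
print(wan_dscp({"dst_subnet": "site2", "port": 80, "lan_dscp": 46}, rules))    # 46
```

As the text notes, the value chosen on the initiating side is also communicated to the peer Steelhead appliance when the optimized connection is created, which is why the same marking appears on both sides of the WAN.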

122 QoS Marking and Pass-Through Traffic The following figure illustrates the DSCP values seen on a network when Steelhead 1 has a QoS marking for pass-through traffic. The DSCP value is set on WAN-side traffic leaving the Steelhead appliance. Figure 9-4. QoS Marking Applied to Pass-Through Traffic For details about configuring QoS marking, see the Steelhead Management Console User s Guide. Enforcing QoS Policies Using Riverbed QoS This section describes how to enforce QoS policies using Riverbed QoS. It includes the following sections: QoS Classes, next QoS Class Parameters on page 127 QoS Queue Types on page 128 MX-TCP on page 128 QoS Rules on page 129 Guidelines for the Maximum Number of QoS Classes and Rules on page 130 QoS in Virtual In-Path and Out-of-Path Deployments on page 130 Riverbed QoS Enforcement Best Practices on page 130 The main components of the Riverbed QoS enforcement system are QoS classes and QoS rules. A QoS class represents an arbitrary aggregation of traffic that is treated the same way by the QoS scheduler. QoS rules determine membership of traffic in a particular QoS class, and are based on the following parameters: IP addresses, protocols, ports, DSCP, traffic type (optimized and pass-through), and VLAN tags. The QoS scheduler uses the constraints and parameters configured on the QoS classes, such as minimum bandwidth guarantee and latency priority, to determine in what order packets are transmitted from the Steelhead appliance. QoS Classes This section describes Riverbed QoS classes. It includes the following sections: Hierarchical Mode, next Flat Mode on page 126 Choosing a QoS Enforcement System on page QOS CONFIGURATION AND INTEGRATION

123 QoS Class Parameters on page 127 There is no requirement that QoS classes represent applications, traffic to remote sites, or any other particular aggregation. There are two QoS classes that are always present on the Steelhead appliance: Root Class. The root class is used to constrain the total outbound rate of traffic leaving the Steelhead appliance to the configured, per-link WAN bandwidth. This class is not configured directly, but is created when you enable QoS classification and enforcement on the Steelhead appliance. Built-in Default Class. The QoS scheduler applies the built-in default class constraints and parameters on traffic not otherwise placed in a class by the configured QoS rules. You must adjust the minimum bandwidth value for the default class to the appropriate value for your deployment. The default class cannot be deleted, and has a bandwidth of 0.01% which cannot be reduced. NOTE: Because you cannot reduce the default class bandwidth to less than 0.01%, the sum of minimum bandwidth allocation in hierarchical mode cannot exceed 99.99% for the children of the root class. For details about hierarchical mode, see Hierarchical Mode, next. QoS classes are configured in one of two different modes: flat mode and hierarchical mode. The difference between the two modes primarily consists of how QoS classes are created. Hierarchical Mode In hierarchical mode, you can create QoS classes as children of QoS classes other than the root class. This allows you to create overall parameters for a certain traffic type, and specify parameters for subtypes of that traffic. There is no enforced limit to the number of QoS class levels you can create. In hierarchical mode, the following relationships exist between QoS classes: Sibling classes. Classes that share the same parent class. Leaf classes. Classes at the bottom of the class hierarchy. Inner classes. Classes that are neither the root class nor leaf classes. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 123

124 In hierarchical mode, QoS rules can only specify leaf classes as targets for traffic. The following figure illustrates the hierarchical mode structure and the relationships between the QoS classes. Figure 9-5. Hierarchical Mode Class Structure Riverbed QoS controls the traffic of hierarchical QoS classes in the following manner: QoS rules assign active traffic to leaf classes. The QoS scheduler: applies active leaf class parameters to the traffic. applies parameters to inner classes that have active leaf class children. continues this process up the class hierarchy. constrains the total output bandwidth to the WAN rate specified on the root class. How Class Hierarchy Controls Traffic In this example there are six QoS classes. The root and default QoS classes are built-in and are always present. The following figure illustrates the hierarchical mode structure for this example. Figure 9-6. Example of QoS Class Hierarchy QOS CONFIGURATION AND INTEGRATION

In this example there is active traffic beyond the overall WAN bandwidth rate. The following figure illustrates a scenario in which the QoS rules place active traffic into three QoS classes: classes 2, 3, and 6.

Figure 9-7. QoS Classes 2, 3, and 6 Have Active Traffic

The QoS scheduler:
- first applies the constraints for the lower leaf classes.
- applies bandwidth constraints to all leaf classes. The QoS scheduler awards minimum guarantee percentages among siblings, after which it awards excess bandwidth, after which it applies upper limits to the leaf class traffic.
- applies latency priority to the leaf classes. For example, if class 2 is configured with a higher latency priority than class 3, the QoS scheduler gives traffic in class 2 the chance to be transmitted before class 3. Bandwidth guarantees still apply for the classes.
- applies the constraints of the parent classes. The QoS scheduler treats the traffic of the children as one traffic class, and uses class 1 and class 4 parameters to determine how to treat the traffic.

The following figure illustrates the following points:
- Traffic from class 2 and class 3 is logically combined, and treated as if it were class 1 traffic.
- Because class 4 only has active traffic from class 6, the QoS scheduler treats the traffic as if it were class 4 traffic.

Figure 9-8. How the QoS Scheduler Applies Constraints of Parent Class to Child Classes
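The scheduler behavior just described — honor minimum guarantees first, then distribute excess among siblings in proportion to those guarantees, subject to each class's upper limit — can be approximated with a small Python sketch. This is a simplified teaching model with invented field names, not the HFSC algorithm that RiOS actually implements:

```python
def allocate(parent_bw, classes):
    """Grant bandwidth to sibling classes under one parent.

    classes: {name: {"gbw": min %, "ubw": max %, "demand": offered load}}
    Minimum guarantees are honored first; excess is then shared in
    proportion to the guarantees, capped by each class's upper limit
    and by its actual demand."""
    cap = {n: min(c["demand"], parent_bw * c["ubw"] / 100.0)
           for n, c in classes.items()}
    grant = {n: min(cap[n], parent_bw * c["gbw"] / 100.0)
             for n, c in classes.items()}
    left = parent_bw - sum(grant.values())
    while left > 1e-6:
        open_ = [n for n in classes if grant[n] + 1e-9 < cap[n]]
        total = sum(classes[n]["gbw"] for n in open_)
        if not open_ or total == 0:
            break                      # all classes at demand or upper limit
        for n in open_:
            grant[n] += min(left * classes[n]["gbw"] / total, cap[n] - grant[n])
        left = parent_bw - sum(grant.values())
    return grant

# Two siblings with equal 10% guarantees, each offering 9000 kbps on a
# 10000 kbps parent: each gets its guarantee, then half of the excess.
print(allocate(10000, {"A": {"gbw": 10, "ubw": 100, "demand": 9000},
                       "B": {"gbw": 10, "ubw": 100, "demand": 9000}}))
```

Capping class A's UBW at 20% in the same call would hold A to 2000 kbps and let B absorb the remainder, mirroring how upper limits are applied even when excess bandwidth is available.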

Flat Mode

In flat mode, you cannot define parent classes. All of the QoS classes you create have the same parent class, the root class. This means that all of the QoS classes you create are siblings. The following figure illustrates the flat mode structure.

Figure 9-9. Flat Mode Class Structure

The QoS scheduler treats QoS classes in flat mode the same way that it does in hierarchical mode. However, only a single class level is defined. QoS rules place active traffic into the leaf classes. Each active class has its own QoS rule parameters, which the QoS scheduler applies to traffic.

Choosing a QoS Enforcement System

The appropriate QoS enforcement system to use depends on the location of WAN bottlenecks for traffic leaving the site. The following model is typically used for implementing QoS:

- A site that acts as a data server for other locations, such as a data center or regional hub, typically uses hierarchical mode. The first level of classes represents remote sites, and those remote site classes have child classes that either represent application types, or are indirectly connected remote sites.
- A site that typically receives data from other locations, such as a branch site, typically uses flat mode. The classes represent different application types.

For example, suppose you have a network with ten locations, and you want to choose the correct mode for site 1. Traffic from site 1 normally goes to two other sites: sites 9 and 10. If the WAN links at sites 9 and 10 are at a higher bandwidth than the link at site 1, the WAN bottleneck rate for site 1 is always the link speed for site 1. In this case, you can use flat mode to enforce QoS at site 1, because the bottleneck that needs to be managed is the link at site 1. In flat mode, the parent class for all created classes is the root class that represents the WAN link at site 1. In the same network, site 10 sends traffic to sites 1-8. Sites 1-8 have slower bandwidth links than site 10.
Because the traffic from site 10 faces multiple WAN bottlenecks (one at each remote site), you configure hierarchical mode for site 10.

NOTE: Changing the QoS enforcement mode while QoS is enabled can cause disruption to traffic flowing through the Steelhead appliance. Riverbed recommends that you configure QoS while the QoS functionality is disabled and only enable it after you are ready for the changes to take effect.
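The mode-selection rule of thumb in the site 1 / site 10 example can be sketched as a tiny decision function. The function name and link speeds are invented for illustration; flat mode suffices when the local link is always the bottleneck, while hierarchical mode is called for when remote links can be slower:

```python
def qos_mode(local_link_mbps, remote_link_mbps):
    """remote_link_mbps: link speeds of the sites this site sends traffic to."""
    if all(speed >= local_link_mbps for speed in remote_link_mbps):
        return "flat"           # the local link is the only bottleneck to manage
    return "hierarchical"       # one bottleneck per slower remote site

print(qos_mode(2, [45, 45]))    # like site 1: both remote links faster -> flat
print(qos_mode(45, [2] * 8))    # like site 10: eight slower remotes -> hierarchical
```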

QoS Class Parameters

The QoS scheduler uses the per-class configured parameters to determine how to treat traffic belonging to the QoS class. The per-class parameters are:

- Latency Priority. There are five QoS class latency priorities. For details, see QoS Class Latency Priorities on page 127.
- Queue Types. For details, see QoS Queue Types on page 128.
- Guaranteed Bandwidth (GBW). When there is bandwidth contention, specifies the minimum amount of bandwidth as a percentage of the parent class bandwidth. The QoS class might receive more bandwidth if there is unused bandwidth remaining. In hierarchical mode, excess bandwidth is allocated based on the relative ratios of guaranteed bandwidth. The total minimum guaranteed bandwidth of all QoS classes must be less than or equal to 100% of the parent class. The smallest value you can assign is 0.01%.
- Link Share Weight. Applies to flat mode only. Specifies how excess bandwidth is allocated among sibling classes. In flat QoS, link share does not depend on the minimum guaranteed bandwidth. By default, all link shares are equal. QoS classes with a larger link share weight are allocated more of the excess bandwidth than QoS classes with a lower link share weight. The link share weight does not apply to MX-TCP queues. The link share weight also does not apply to hierarchical mode, because in hierarchical QoS allocation is based on the minimum guarantee for each class.
- Upper Bandwidth (UBW). Specifies the maximum allowed bandwidth a QoS class receives as a percentage of the parent class guaranteed bandwidth. The upper bandwidth limit is applied even if there is excess bandwidth available. The upper bandwidth limit must be greater than or equal to the minimum bandwidth guarantee for the class. The smallest value you can assign is 0.01%. The upper bandwidth limit does not apply to MX-TCP queues.
- Connection Limit. Specifies the maximum number of optimized connections for the QoS class.
When the limit is reached, all new connections are passed through unoptimized. In hierarchical mode, a parent class connection limit does not affect its child. Each child-class optimized connection is limited by the connection limit specified for their class. For example, if B is a child of A, and the connection limit for A is set to 5, while the connection limit for B is set to 10, the connection limit for B is 10. Connection limit is supported only in in-path configurations. It is not supported in out-of-path or virtual-in-path configurations. QoS Class Latency Priorities Latency priorities indicate how delay-sensitive a traffic class is. A latency priority does not control how bandwidth is used or shared among different QoS classes. You can assign a QoS class latency priority when you create a QoS class, or modify it later. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 127

Riverbed QoS has five QoS class latency priorities. The following table summarizes the QoS class latency priorities in descending order.

Latency Priority     Example
Realtime             VoIP, video conferencing.
Interactive          Citrix, RDP, telnet, and ssh.
Business Critical    Thick client applications, ERPs, CRMs.
Normal Priority      Internet browsing, file sharing, email.
Low Priority         FTP, backup, replication, other high-throughput data transfers, and recreational applications such as audio file sharing.

Typically, applications such as VoIP and video conferencing are given real-time latency priority, while applications that are especially delay-insensitive, such as backup and replication, are given low latency priority.

IMPORTANT: The latency priority describes only the delay sensitivity of a class, not how much bandwidth it is allocated, nor how important the traffic is compared to other classes. Therefore, it is common to configure low latency priority for high-throughput, delay-insensitive applications such as FTP, backup, and replication.

QoS Queue Types

Each QoS class has a configured queue type parameter. There are three types of parameters available:

- Stochastic Fairness Queueing (SFQ). Determines Steelhead appliance behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When SFQ is used, packets are dropped from within the queue in a round-robin fashion, among the present traffic flows. SFQ ensures that each flow within the QoS class receives a fair share of output bandwidth relative to each other, preventing bursty flows from starving other flows within the QoS class. SFQ is the default queue parameter.
- First-in First-Out (FIFO). Determines Steelhead appliance behavior when the number of packets in a QoS class outbound queue exceeds the configured queue length. When FIFO is used, packets received after this limit is reached are dropped, hence the first packets received are the first packets transmitted.
- MX-TCP.
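To build intuition for the difference between the FIFO and SFQ queue types described above, here is a simplified Python model. Real SFQ hashes flows into buckets and periodically perturbs the hash; this sketch keeps only the core idea of per-flow fairness with round-robin service, and is not the RiOS implementation:

```python
from collections import deque

class FifoQueue:
    """Drop-tail FIFO: arrivals beyond the queue limit are discarded, so a
    bursty flow that fills the queue starves later arrivals."""
    def __init__(self, limit):
        self.q, self.limit = deque(), limit
    def enqueue(self, flow, pkt):
        if len(self.q) >= self.limit:
            return False                      # tail drop
        self.q.append((flow, pkt))
        return True
    def dequeue(self):
        return self.q.popleft() if self.q else None

class SfqLikeQueue:
    """Per-flow sub-queues served round-robin; on overflow, drop from the
    longest sub-queue rather than the newcomer (simplified SFQ behavior)."""
    def __init__(self, limit):
        self.flows, self.limit, self.turn = {}, limit, 0
    def enqueue(self, flow, pkt):
        if sum(len(q) for q in self.flows.values()) >= self.limit:
            longest = max(self.flows, key=lambda f: len(self.flows[f]))
            self.flows[longest].pop()         # penalize the dominant flow
        self.flows.setdefault(flow, deque()).append(pkt)
        return True
    def dequeue(self):
        active = [f for f, q in self.flows.items() if q]
        if not active:
            return None
        flow = active[self.turn % len(active)]
        self.turn += 1
        return (flow, self.flows[flow].popleft())

# Flow A bursts three packets into a 3-packet queue, then flow B arrives:
fifo = FifoQueue(3)
for i in range(3):
    fifo.enqueue("A", i)
print(fifo.enqueue("B", 0))    # False: B's packet is tail-dropped

sfq = SfqLikeQueue(3)
for i in range(3):
    sfq.enqueue("A", i)
sfq.enqueue("B", 0)            # A loses a packet instead; B is admitted
print([sfq.dequeue()[0] for _ in range(3)])   # flows interleaved fairly
```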
MX-TCP is a QoS class queue parameter. For details, see MX-TCP, next.

MX-TCP

MX-TCP is a QoS class queue parameter, but with very different use cases than the other queue parameters. MX-TCP also has secondary effects that you need to understand before configuration. When optimized traffic is mapped into a QoS class with the MX-TCP queuing parameter, the TCP congestion control mechanism for that traffic is altered on the Steelhead appliance. The normal TCP behavior of reducing the outbound sending rate when detecting congestion or packet loss is disabled, and the outbound rate is made to match the minimum guaranteed bandwidth configured on the QoS class.

You can use MX-TCP to achieve high throughput rates even when the physical medium carrying the traffic has high loss rates. For example, a common usage of MX-TCP is for ensuring high throughput on satellite connections where no lower-layer loss recovery technique is in use. Another usage of MX-TCP is to achieve high throughput over high-bandwidth, high-latency links, especially when intermediate routers do not have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high throughput rates.

IMPORTANT: Use caution when specifying MX-TCP. The outbound rate for the optimized traffic in the configured QoS class immediately increases to the specified bandwidth, and does not decrease in the presence of network congestion. The Steelhead appliance always tries to transmit traffic at the specified rate. If no QoS mechanism (either parent classes on the Steelhead appliance, or another QoS mechanism in the WAN or WAN infrastructure) is in use to protect other traffic, that other traffic might be impacted by MX-TCP not backing off to fairly share bandwidth.

When MX-TCP is configured as the queue parameter for a QoS class, the following parameters for that class are also affected:

- Link share weight. The link share weight parameter has no effect on a QoS class configured with MX-TCP.
- Upper limit. The upper limit parameter has no effect on a QoS class configured with MX-TCP.

QoS Rules

QoS rules map different types of network traffic to QoS classes. After you define a QoS class, you can create one or more QoS rules to assign traffic to it. QoS rules can match traffic based on:

- a source address, port, or subnet.
- a destination address, port, or subnet.
- the IP protocol in use: TCP, UDP, GRE, or all.
- whether or not the traffic is optimized.
- a VLAN tag.
- a DSCP/IP TOS value.
QoS rules are processed in the order in which they are shown on the QoS Classification page of the Management Console. The first matching rule determines what QoS class the traffic is assigned to. A QoS class can have many rules assigning traffic to it. In hierarchical mode, QoS rules can only be defined for, and map traffic to, the leaf classes. Also, you cannot associate QoS rules to inner classes. A default QoS rule always exists at the end of the QoS rule list and cannot be deleted. This default rule is used for traffic that does not match any rules in the QoS rule list. The default rule assigns this traffic to the built-in default QoS class. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 129
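The first-match semantics of the rule list can be sketched as follows. The rule dictionaries, field names, and example subnet are invented for illustration and cover only a few of the matching parameters listed above:

```python
import ipaddress

def classify(pkt, rules, default="Built-in Default"):
    """Walk the ordered rule list; the first matching rule assigns the QoS
    class, and unmatched traffic falls through to the default class."""
    for rule in rules:
        if "dst_subnet" in rule and (ipaddress.ip_address(pkt["dst"])
                not in ipaddress.ip_network(rule["dst_subnet"])):
            continue
        if "port" in rule and pkt["port"] != rule["port"]:
            continue
        if "proto" in rule and pkt["proto"] != rule["proto"]:
            continue
        return rule["qos_class"]
    return default

rules = [
    {"proto": "udp", "port": 5060, "qos_class": "VoIP"},       # SIP signaling
    {"dst_subnet": "192.168.10.0/24", "qos_class": "Citrix"},  # example subnet
]
print(classify({"dst": "192.168.10.4", "port": 80, "proto": "tcp"}, rules))
```

Because the first match wins, a broad rule placed early shadows any narrower rules after it, which is why rule order on the QoS Classification page matters.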

130 Guidelines for the Maximum Number of QoS Classes and Rules The number of QoS classes and rules you can create on a Steelhead appliance depends on the appliance model number, the traffic flow, and what other RiOS features you have enabled. The following table describes general guidelines for the number of QoS classes and rules. Steelhead Appliance Model Maximum Allowable QoS Classes Maximum Allowable QoS Rules 2xx and lower x0, 1xx xx xx xx0 and higher QoS in Virtual In-Path and Out-of-Path Deployments You can use QoS enforcement in virtual in-path deployments (for example, WCCP and PBR) and out-ofpath deployments. In both of these types of deployments you connect the Steelhead appliance to the network through a single interface: the WAN interface for WCCP deployments, and the primary interface for out-of-path deployments. You enable QoS for these types of deployments as follows: Use hierarchical mode for the QoS enforcement system. Set the WAN throughput for the network interfaces to the total speed of the LAN+WAN interfaces, or to the speed of the local link, whichever number is lower. Create two top-level classes: A LAN class to classify LAN traffic. Set the class UBW to the LAN link rate. The GBW depends on your network and QoS policies. If the total LAN + WAN bandwidth is less than the interface rate, typically the LAN class GBW is equal to the UBW. Create one or more QoS rules so that all LAN-destined traffic is sent to the LAN class or LAN class siblings. Create a rule for LAN traffic so that traffic whose destination IP address is in the subnet of the Steelhead appliance is classified as LAN traffic. A WAN class to classify WAN traffic. Set the class UBW to the WAN link rate. The GBW depends on your network and QoS policies. If the total LAN + WAN bandwidth is less than the interface rate, typically the WAN class GBW is equal to the UBW. 
Create a hierarchy of classes under the WAN class as if the Steelhead appliance were deployed in a physical in-path mode. Create all QoS classes as children of the WAN class. Create QoS rules for the WAN classes, including one or more that send all WAN-destined traffic to the WAN class or WAN class siblings. Create a rule for WAN traffic so that traffic whose destination IP address is not in the subnet of the Steelhead appliance is classified as WAN traffic.

Riverbed QoS Enforcement Best Practices

Riverbed recommends the following guidelines because they ensure optimal performance and require the least amount of initial and ongoing configuration:

Configure QoS while the QoS functionality is disabled and only enable it after you are ready for the changes to take effect.

131 Steelhead appliances at larger sites, such as data centers and regional hubs, use hierarchical mode. Steelhead appliances at branch locations use flat mode. Increase the Minimum Guaranteed Bandwidth (MGBW) and define the link share for the built-in default class. The built-in default class is configured with a MGBW of 0.01%, and has no defined link share. These default values typically need to be altered. For example, in hierarchical mode, another QoS class allocated at the top-level with a MGBW of 5% receives 500 times more of the link share than any QoS class found in the default class. A typical indication that the default class must be adjusted is when traffic that is not specified in the QoS classes (typical examples include Web browsing and routing updates) receives very little bandwidth during times of congestion. In hierarchical mode, if you are using a model where the top-level QoS classes represent sites: For each site, create a site-specific default class. Create a QoS rule that comes after any other QoS rules that are specific to that site and that captures traffic to that site. Specify the per-site default class as the target so that no traffic is assigned to the built-in default class. The default class is also used to dequeue important packets such as ARPs. All traffic must be dequeued from the default class. Configure the first level classes to represent remote sites, and the second level classes to represent applications. For example, at data centers the first level class represents regional hubs, and the second level class represents indirectly connected sites. Configuring Riverbed QoS This section describes the basic steps for configuring QoS using the Management Console. This section also includes a configuration example. You can also use the Riverbed CLI to configure QoS. For detailed information about QoS commands, see the Riverbed Command-Line Interface Reference Manual. 
You can use the CMC to enable QoS and to configure and apply QoS rules to multiple Steelhead appliances. For details, see the Steelhead Central Management Console User s Guide. Basic Steps Perform the following basic steps to configure Riverbed QoS. 1. Connect to the Management Console. For details, see the Steelhead Management Console User s Guide. 2. Choose Configure - Networking - QoS Classification and select either Flat or Hierarchical mode. NOTE: Selecting a mode does not enable QoS traffic classification. The Enable QoS Classification and Enforcement check box must be selected and a bandwidth link rate must be set for each WAN interface where QoS is to be enabled before traffic optimization begins. 3. Select each WAN interface and define the bandwidth link rate for each interface. STEELHEAD APPLIANCE DEPLOYMENT GUIDE 131

4. Define the QoS classes for each traffic flow. For detailed information, see QoS Class Parameters on page 127.

5. Define rules for each class or subclass. For detailed information, see QoS Rules on page 129.

6. Check the Enable QoS Classification and Enforcement box.

7. Click Apply. Your changes take effect immediately.

IMPORTANT: If you delete or add new rules, the existing optimized connections are not affected; the changes only affect new optimized connections.

For details about configuring QoS, see the Steelhead Management Console User's Guide.

Riverbed QoS Configuration Example

The following figure illustrates a Steelhead appliance deployment in which QoS best practices are applied. It includes the following sections:

- Data Center Specifications, next
- Branch Office Specifications on page 133
- Configuring the Data Center Steelhead Appliance on page 133
- Configuring the Branch Office Steelhead Appliance on page 135

For details about best practices, see Riverbed QoS Enforcement Best Practices on page 130. In this example, traffic between the data center and the remote office branches includes VoIP, Citrix, software updates, and other traffic.

Figure Steelhead Appliance Configuration Example

Data Center Specifications

The data center:

- has Citrix servers located in the /24 subnet.
- transmits software updates from a server with IP address .

The data center Steelhead appliance:

- is deployed physical in-path.
- has a WAN link with 10 Mbps of bandwidth.
- serves 20 remote branch offices.
- uses Riverbed QoS hierarchical mode.
- has the following QoS policies for outbound traffic:
  - VoIP traffic is guaranteed at least 100 Kbps when active.
  - Citrix traffic is guaranteed at least 100 Kbps when active.
  - VoIP traffic is assigned the highest latency priority, and Citrix the second highest.
  - Software updates are allocated the lowest latency priority.

Branch Office Specifications

Each branch office has:

- a 2 Mbps WAN link.
- Steelhead appliances that are deployed physical in-path.
- a separate X.0/24 subnet, where X is the number of the site.
- VoIP phones that are always in the X.128/25 subnet.
- Riverbed QoS flat mode enabled.

Configuring the Data Center Steelhead Appliance

To configure Riverbed QoS for the data center Steelhead appliance, use the class-per-site model, where each child class created from the root class represents a site. Each site-level class has child classes that each represent a type of application.

Each site class has a child default class that is configured to receive any traffic not otherwise specified for the site. Because this includes important traffic such as routing updates, Riverbed recommends that such a class be created, and that it receive some bandwidth guarantee. The actual amount needed for the bandwidth guarantee depends on the total amount of bandwidth used at the site, what other classes are configured, and the guaranteed bandwidth (GBW) of the other classes.

In hierarchical mode, bandwidth is allocated first based on the minimum bandwidth guarantee of the active classes. Excess bandwidth is allocated according to the minimum bandwidth guarantee ratios.
For this reason, it is important to keep the minimum bandwidth guarantees relatively close to each other. For example, suppose class A is configured with a minimum bandwidth guarantee of 1%, and class B is configured with 10%. When they are the only active classes, class B is allocated ten times the bandwidth of class A.
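The allocation arithmetic described above (minimum guarantees granted first, then the excess shared in proportion to those same guarantees) can be checked with a short sketch. This is an illustration of the math only, not RiOS code; the class names and rates are hypothetical.

```python
def allocate(link_kbps, guarantees):
    """Split a link among active classes: each class first receives its
    minimum guarantee, then the leftover bandwidth is shared in
    proportion to those same guarantees."""
    total_min = sum(guarantees.values())
    excess = link_kbps - total_min
    return {name: g + excess * g / total_min
            for name, g in guarantees.items()}

# Class A guaranteed 1% and class B 10% of a 10 Mbps (10,000 Kbps) link:
alloc = allocate(10_000, {"A": 100, "B": 1_000})
# B ends up with ten times the bandwidth of A, as the example states.
```

Because the excess is divided using the same ratios as the minimums, a 1%-versus-10% configuration yields a 10:1 split of the whole link, which is why guarantees should be kept relatively close to each other.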

Configuring the Data Center Site-Based Classes

Assuming that 200 Kbps is specified as a bandwidth guarantee for the site default class, each site in this example needs at least 400 Kbps of total guaranteed bandwidth: the sum of 100 Kbps for VoIP, 100 Kbps for Citrix, and 200 Kbps for the site default class. Each site must also be configured with an upper limit of 2 Mbps. Specifying the upper limit for the QoS class ensures that any queueing for the site traffic occurs on the data center Steelhead appliance, instead of on the WAN.

Because GBW and upper bandwidth (UBW) are configured as percentages of the parent class upper bandwidth, each site is configured with a GBW of 4% (400 Kbps / 10 Mbps) and a UBW of 20% (2 Mbps / 10 Mbps).

Configuring the Data Center Application-Based Classes

Each site-based class has four child classes:

- VoIP. The VoIP class is created with a GBW of 5% (100 Kbps / 2 Mbps), and a latency priority of Realtime.
- Citrix. The Citrix class is created with a GBW of 5% (100 Kbps / 2 Mbps), and a latency priority of Interactive.
- Software Updates. A Software Updates class is needed to give this traffic a lower latency priority than the other classes. When you create a class, a GBW must be specified; in this example, a GBW of 5% is specified. Using a low GBW such as this ensures that when the Software Updates class and the site default class are both active, the Software Updates class receives half as much of any excess bandwidth as the site default class (its 5% GBW is half of the site default class's 10% GBW).
- Default. The site default class receives a GBW of 10%, and a latency priority of Normal.

Configuring the Data Center QoS Rules

The QoS rules are constructed based on the previously described network information.
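The GBW and UBW percentages quoted in the site-based class configuration can be double-checked with a trivial helper. This is a sketch of the arithmetic only, using the rates stated in this example.

```python
def pct(child_kbps, parent_kbps):
    """GBW/UBW values are entered as a percentage of the parent class rate."""
    return 100.0 * child_kbps / parent_kbps

site_gbw = pct(400, 10_000)    # 400 Kbps guarantee on the 10 Mbps link -> 4%
site_ubw = pct(2_000, 10_000)  # 2 Mbps branch link cap on 10 Mbps      -> 20%
voip_gbw = pct(100, 2_000)     # 100 Kbps VoIP guarantee within 2 Mbps  -> 5%
```

The same helper reproduces each percentage in the text, which is a quick way to sanity-check a configuration before entering it.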
The rule that directs traffic to the site default class must be the last rule on the list, because it is the rule that directs any traffic not otherwise specified to the site default class.

QOS CONFIGURATION AND INTEGRATION

You can view QoS settings on the Configure - Networking - QoS Classification page of the Management Console. The following figure illustrates the resulting QoS configuration for the data center Steelhead appliance.

Figure: Data Center Steelhead Appliance QoS Configuration

You can verify the QoS configuration on the Reports - Appliance - QoS Statistics Dropped page and the Reports - Appliance - QoS Statistics Sent page of the Management Console. For details about configuring QoS, see the Steelhead Management Console User's Guide.

Configuring the Branch Office Steelhead Appliance

Flat mode is used on the branch office Steelhead appliances because they send data only to the data center. There is a single WAN bottleneck to consider, the local 2 Mbps WAN link, and no hierarchy is needed to encode a single WAN bottleneck.

Configuring the Branch Office Application-Based Classes

The QoS classes are created similarly to the data center Steelhead appliance, with a few exceptions. Each site class has four child classes:

- VoIP. The VoIP class is created with a GBW of 5% (100 Kbps / 2 Mbps), and a latency priority of Realtime.
- Citrix. The Citrix class is created with a GBW of 5% (100 Kbps / 2 Mbps), and a latency priority of Interactive.
- Software Updates. The Link Share Weight parameter is specified as 50%. The Link Share Weight parameter is available to classes when flat mode is used, and determines how excess bandwidth is allocated after all minimum bandwidth allocations are granted for active traffic. Specifying 50% means that when the Software Updates and Default classes are both active, the Software Updates class receives half of the excess bandwidth of the other active traffic.
- Default. The site default class receives a GBW of 10%, and a latency priority of Normal.

You can view QoS settings on the Configure - Networking - QoS Classification page of the Management Console. The following figure illustrates the resulting QoS configuration for the branch office Steelhead appliance.

Figure: Branch Office Steelhead Appliance QoS Configuration
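The flat-mode link-share arithmetic can be sketched as below. This is illustrative only, not RiOS code; the Default class's link share weight (100%) is an assumption for the example, since the text does not state it.

```python
def flat_allocate(link_kbps, classes):
    """classes maps name -> (min_kbps, link_share_weight).
    Minimum guarantees are granted first; the remaining (excess)
    bandwidth is divided in proportion to the link share weights."""
    excess = link_kbps - sum(m for m, _ in classes.values())
    total_w = sum(w for _, w in classes.values())
    return {name: m + excess * w / total_w
            for name, (m, w) in classes.items()}

# Software Updates at weight 50 versus an assumed Default weight of 100
# on the 2 Mbps branch link:
alloc = flat_allocate(2_000, {"SoftwareUpdates": (100, 50),
                              "Default": (200, 100)})
# Software Updates receives half as much of the excess as Default.
```

Under these assumed weights, the excess split is 1:2, matching the "receives half of the excess bandwidth of the other active traffic" behavior described above.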

You can verify the QoS configuration on the Reports - Appliance - QoS Statistics Dropped page and the Reports - Appliance - QoS Statistics Sent page of the Management Console. For details about configuring QoS, see the Steelhead Management Console User's Guide.


CHAPTER 10 WAN Visibility Modes

In This Chapter

This chapter describes Steelhead appliance WAN visibility modes, and how to configure them. It includes the following sections:

- Overview of WAN Visibility Modes, next
- Correct Addressing on page 140
- Transparent Addressing on page 141
- Configuring WAN Visibility Modes on page 149
- Implications of Transparent Addressing on page 151

This chapter assumes you are familiar with:

- The RiOS CLI. For details, see the Riverbed Command-Line Interface Reference Manual.
- The installation and configuration process for the Steelhead appliance. For details, see the Steelhead Appliance Installation and Configuration Guide.

This chapter provides the basic steps for configuring WAN visibility modes. For details about the factors you must consider before you design and deploy the Steelhead appliance in a network environment, see Choosing the Right Steelhead Appliance on page 19.

Overview of WAN Visibility Modes

WAN visibility modes pertain to how TCP/IP packets traversing the WAN are addressed. RiOS v5.0.x and later offers the following types of WAN visibility modes:

- Correct addressing
- Two types of transparent addressing:
  - Port transparency
  - Full address transparency

The WAN visibility mode feature gives you several options for addressing optimized traffic across your WAN. The most suitable WAN visibility mode depends primarily on your existing network configuration. For example, if you must manage IP address-based or TCP port-based QoS policies for optimized traffic on your WAN or WAN routers, you might use full address transparency or port transparency. If you need your optimized traffic to pass through a content-scanning firewall that creates alarms when application ports are used on optimized traffic payload, you might use correct addressing.

You can use different types of addressing modes on the same Steelhead appliance. This enables you to choose the most appropriate addressing mode based on IP addresses, subnets, TCP ports, and VLANs. You configure WAN visibility modes on the client-side Steelhead appliance (where the connection is initiated).

Correct Addressing

Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. By default, Steelhead appliances use correct addressing.

The following figure illustrates TCP/IP packet headers when correct addressing is used. The IP addresses and port numbers of your Steelhead appliances are visible across your WAN. Refer to this figure to compare it to the port transparency and full address transparency packet headers.
Figure: Correct Addressing

Correct addressing uses the following values in your TCP/IP packet headers in both directions:

- Client to client-side Steelhead appliance: Client IP address and port + Server IP address and port
- Client-side Steelhead appliance to server-side Steelhead appliance: Client-side Steelhead appliance IP address and port + Server-side Steelhead appliance IP address and port
- Server-side Steelhead appliance to server: Client IP address and port + Server IP address and port

For details about configuring correct addressing, see Configuring WAN Visibility Modes on page 149.

Correct addressing avoids networking risks that are inherent to enabling transparent addressing. For details, see Implications of Transparent Addressing on page 151.

Correct addressing also enables you to use the connection pooling optimization feature. Connection pooling works only for connections optimized using correct addressing. Connection pooling enables Steelhead appliances to create a number of TCP connections between each other before they are needed. When transparent addressing is enabled, Steelhead appliances cannot create the TCP connections in advance, because they do not know which client and server IP addresses and ports are needed. For details about connection pooling, see Connection Pooling on page 18.

Transparent Addressing

This section describes transparent addressing: port transparency and full address transparency. It includes the following sections:

- Port Transparency, next
- Full Address Transparency on page 143

Transparent addressing reuses client and server addressing for optimized traffic across the WAN. Traffic is optimized while addressing appears to be unchanged. Both optimized and pass-through traffic present identical addressing information to the router and network monitoring devices.

In RiOS v5.0.x and later, transparent addressing can be used in conjunction with many deployment configurations, including the following:

- In-path
- Connection forwarding
- VLAN
- WCCP
- PBR
- Serial clustering
- Layer-4 switching
- Enhanced auto-discovery
- Asymmetric route detection
- QoS
- NetFlow export

Transparent addressing does not support the following deployment configurations:

- Server-side out-of-path Steelhead appliance configurations
- Fixed-target rules
- Connection pooling
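The connection-pooling constraint described above can be illustrated with a toy model. This is a conceptual sketch, not Steelhead internals: with correct addressing, the inner-connection endpoints are always the two appliances' own fixed addresses, so connections can be opened ahead of demand; with transparent addressing, the inner connection must carry client and server addresses that are not known until each connection is initiated.

```python
class ConnectionPool:
    """Toy model: pre-open inner connections between two FIXED peers.
    The endpoint names below are placeholders, not real addresses."""

    def __init__(self, local_addr, peer_addr, size):
        # Endpoints are known up front, so we can "dial" in advance.
        self.idle = [(local_addr, peer_addr) for _ in range(size)]

    def take(self):
        """Hand out a pre-opened connection with no setup round trip."""
        return self.idle.pop() if self.idle else None

pool = ConnectionPool(("sh-client", 7800), ("sh-server", 7800), size=4)
conn = pool.take()

# Under transparent addressing there is nothing to pre-open: the inner
# connection must reuse the client/server addresses of an optimized
# connection that has not been initiated yet.
```

The model makes the restriction concrete: pooling is only meaningful when the inner-connection addressing is fixed in advance, which is exactly the correct-addressing case.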

You configure transparent addressing on the client-side Steelhead appliance (where the connection is initiated). Both the server-side and the client-side Steelhead appliances must support transparent addressing (RiOS v5.0.x or later) for transparent addressing to work. You can configure a Steelhead appliance for transparent addressing even if its peer does not support it; the connection is optimized, but it is not transparent.

When you use full or port transparency, Steelhead appliances add a TCP option field to the packet headers of optimized traffic. This TCP option field is sent between the Steelhead appliances. For transparency to work, this option must not be stripped off by intermediate network devices.

A given pair of Steelhead appliances can have multiple types of transparent addressing enabled for different connections. For example, a pair of Steelhead appliances can use correct addressing for connections to one destination subnet, and full address transparency or port transparency for connections to another destination subnet. Likewise, a pair of Steelhead appliances can use correct addressing for connections to one destination port, and full address transparency or port transparency for connections to another destination port.

If both port transparency and full address transparency are acceptable solutions, port transparency is preferable. Port transparency avoids potential networking risks that are inherent in enabling full address transparency. For details, see Implications of Transparent Addressing on page 151.

Port Transparency

Port transparency preserves your server port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. Traffic is optimized while the server port number in the TCP/IP header field appears to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields.
Port transparency does not require dedicated port configurations on your Steelhead appliances. Port transparency provides only server port visibility; it does not provide client and server IP address visibility, nor does it provide client port visibility.

The following figure illustrates TCP/IP packet headers when port transparency is enabled. Server port numbers are visible across your WAN.

To compare port transparency packet headers to correct addressing packet headers, see Figure 10-1 on page 140.

Figure: Port Transparency

Port transparency uses the following values in your TCP/IP packet headers in both directions:

- Client to client-side Steelhead appliance: Client IP address and port + Server IP address and port
- Client-side Steelhead appliance to server-side Steelhead appliance: Client-side Steelhead appliance IP address and port + Server-side Steelhead appliance IP address and server port
- Server-side Steelhead appliance to server: Client IP address and port + Server IP address and port

For details about configuring port transparency, see Configuring WAN Visibility Modes on page 149.

Use port transparency if you want to manage and enforce QoS policies that are based on destination ports. If your WAN router follows traffic classification rules that are written in terms of TCP destination port numbers, port transparency enables your routers to use the existing rules to classify the traffic without any changes. Port transparency also enables network analyzers deployed within the WAN (between the Steelhead appliances) to monitor network activity, and to capture statistics for reporting, by inspecting traffic according to its original TCP destination port number.

NOTE: Port transparency does not support active FTP.

Full Address Transparency

This section describes full address transparency. It includes the following sections:

- Overview of Full Address Transparency, next
- VLANs and Full Address Transparency on page 145
- The Out-of-Band (OOB) Connection on page 146

Overview of Full Address Transparency

Full address transparency preserves your client and server IP addresses and port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. VLAN tags are also preserved. Traffic is optimized while these TCP/IP header fields appear to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields.

The following figure is an example of how TCP/IP packet headers might be addressed when full address transparency is enabled. In this example, Steelhead appliance IP addresses and port numbers are no longer visible on the optimized connections; client and server IP addresses and port numbers are now visible in both directions across the WAN.

When you enable full address transparency, you have several addressing options for the out-of-band (OOB) connection. The type of addressing you configure for your OOB connection ultimately determines whether the Steelhead appliance in-path IP addresses are used in the TCP/IP packet headers. For details, see The Out-of-Band (OOB) Connection on page 146.

To compare full address transparency packet headers to correct addressing packet headers, see Figure 10-1 on page 140.

Figure: Full Address Transparency

In this example, full address transparency uses the following values in the TCP/IP packet headers in both directions:

- Client to client-side Steelhead appliance: Client IP address and port + Server IP address and port
- Client-side Steelhead appliance to server-side Steelhead appliance: Client IP address and port + Server IP address and port
- Server-side Steelhead appliance to server: Client IP address and port + Server IP address and port

For details about configuring full address transparency, see Configuring WAN Visibility Modes on page 149.

IMPORTANT: Enabling full address transparency requires symmetrical traffic flows from clients to servers.
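The three addressing modes differ only in which endpoints appear on the WAN segment between the two appliances. The sketch below summarizes the header values listed in this section and in the earlier Correct Addressing and Port Transparency sections; it is a conceptual summary, and the addresses are documentation placeholders, not values from any deployment.

```python
def wan_header(mode, client, client_sh, server_sh, server):
    """(src, dst) as seen on the WAN between the Steelhead appliances.
    Each endpoint is an (ip, port) tuple; *_sh are the appliance
    in-path address/port pairs."""
    if mode == "correct":        # appliance addresses visible on the WAN
        return (client_sh, server_sh)
    if mode == "port":           # appliance IPs, but the server port survives
        return (client_sh, (server_sh[0], server[1]))
    if mode == "full":           # original client/server addressing preserved
        return (client, server)
    raise ValueError(mode)

client = ("192.0.2.10", 3456)          # example values only
client_sh = ("192.0.2.2", 40000)
server_sh = ("198.51.100.2", 7800)
server = ("198.51.100.10", 80)
```

For instance, a router classifying on destination port 80 matches the WAN traffic under both port and full transparency, but only full transparency also exposes the original client and server IP addresses.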
If any asymmetry exists on the network, enabling full address transparency might yield unexpected results, including loss of connectivity. For more information, see Implications of Transparent Addressing on page 151.

If both port transparency and full address transparency are acceptable solutions, port transparency is preferable. Port transparency mitigates potential networking risks that are inherent in enabling full address transparency. For details, see Implications of Transparent Addressing on page 151. However, if you must use your client or server IP addresses across your WAN, full address transparency is your only configuration option.

Full address transparency enables network monitoring applications deployed within the WAN (between the Steelhead appliances) to measure the traffic load placed on the WAN by the end hosts. Network routers can also perform load balancing and policy-based routing. Full address transparency also enables you to manage and enforce QoS policies based on port numbers or IP addresses.

IMPORTANT: When full address transparency is enabled, router QoS policies cannot distinguish between optimized and unoptimized traffic, even though an optimized packet might represent much more data.

Full address transparency also enables the use of NAT. With correct addressing, Steelhead appliances use their own IP addresses in the packet header, which NAT does not recognize. When full address transparency is enabled, the original client and server IP addresses are used, and the connections are recognizable to NAT. However, the type of addressing you configure for your OOB connection ultimately determines whether the Steelhead appliance in-path IP addresses are used in the TCP/IP packet headers. Full address transparency also supports several transparency options for the OOB connection. For details, see The Out-of-Band (OOB) Connection on page 146.

VLANs and Full Address Transparency

Full address transparency supports transparent VLANs. You can configure full address transparency so that optimized traffic remains on the original VLANs.
Because you can keep traffic on the original VLANs, full address transparency enables you to perform VLAN-based QoS on the WAN side of the Steelhead appliance.

NOTE: You must first configure WAN visibility full address transparency for VLAN transparency to function correctly.

To configure full address transparency for a VLAN

1. On the Steelhead appliance, connect to the CLI. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering auto
in-path simplified routing all
in-path vlan-conn-based
in-path mac-match-vlan
no in-path probe-caching enable
in-path probe-ftp-data
in-path probe-mapi-data
write memory
service restart

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

NOTE: If packets on your network use two different VLANs in the forward and reverse directions, see the Riverbed Knowledge Base article Understanding VLANs and Transparency.

The Out-of-Band (OOB) Connection

This section describes transparency options for the OOB connection. It includes the following sections:

- Overview of the OOB Connection, next
- OOB Connection Destination Transparency on page 147
- OOB Connection Full Transparency on page 148

Overview of the OOB Connection

The OOB connection is a single, unique TCP connection that is established by a pair of Steelhead appliances that are optimizing traffic. The pair of Steelhead appliances use this connection strictly to communicate the internal information required to optimize traffic.

With RiOS v5.0.x or later, if you use WAN visibility full address transparency, you have the following transparency options for the OOB connection: OOB connection destination transparency and OOB connection full transparency. You configure OOB transparent addressing on the client-side Steelhead appliance (where the connection is initiated).

By default, the OOB connection uses correct addressing. Correct addressing uses the client-side Steelhead appliance IP address, port number, and VLAN ID, and the server-side Steelhead appliance IP address, port number, and VLAN ID.

If you are using OOB connection correct addressing and the client-side Steelhead appliance cannot establish the OOB connection to the server-side Steelhead appliance, OOB connection transparency can resolve this issue; for example, if you have a server on a private network that is located behind a NAT device. You configure OOB connection transparency so that the client-side Steelhead appliance uses the server IP address and port number as the remote IP address and port number.
Steelhead appliances route packets on the OOB connection to the NAT device. The NAT device then translates the packet address to that of the server-side Steelhead appliance.

If both of the OOB connection transparency options are acceptable solutions, OOB connection destination transparency is preferable. OOB connection destination transparency mitigates the slight possibility of port number collisions, which can occur with OOB connection full transparency.

When OOB connection transparency is enabled and the OOB connection is lost, the Steelhead appliances re-establish the connection using the server IP address and port number from the next optimized connection.

OOB Connection Destination Transparency

The following figure illustrates TCP/IP packet headers when OOB connection destination transparency is enabled.

Figure: OOB Connection Destination Transparency

OOB connection destination transparency uses the following values in the TCP/IP packet headers in both directions across the WAN:

- Client-side Steelhead appliance IP address and an ephemeral port number chosen by the client-side Steelhead appliance + Server IP address and port number

Steelhead appliances use the server IP address and port number from the first optimized connection. Use OOB connection destination transparency if the client-side Steelhead appliance cannot establish the OOB connection to the server-side Steelhead appliance.

To enable OOB connection destination transparency

NOTE: You must first configure WAN visibility full address transparency for OOB connection destination transparency to function correctly.

1. Connect to the Riverbed CLI on the client-side Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering oobtransparency mode destination
write memory

NOTE: You must save your changes to memory for your changes to take effect.

To disable OOB connection destination transparency

1. Connect to the Riverbed CLI on the client-side Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering oobtransparency mode none
write memory

NOTE: You must save your changes to memory for your changes to take effect.

OOB Connection Full Transparency

The following figure illustrates TCP/IP packet headers when OOB connection full transparency is enabled.

Figure: OOB Connection Full Transparency

OOB connection full transparency uses the following values in the TCP/IP packet headers in both directions across the WAN:

- Client IP address and client-side Steelhead appliance pre-determined port number + Server IP address and port number

Steelhead appliances use the client IP address, and the server IP address and port number, from the first optimized connection. If the client is already using port 708 to connect to the destination server, enter the following CLI command to change the client-side Steelhead appliance pre-determined port number:

in-path peering oobtransparency port <port number>

OOB connection full transparency supports Steelhead appliances deployed on trunks. Because you can configure full address transparency so that optimized traffic remains on the original VLAN, there is no longer a need for a Steelhead VLAN. Use OOB connection full transparency if your network is unable to route between Steelhead appliance in-path IP addresses or in-path VLANs, or if you do not want Steelhead appliance IP addresses used for the OOB connection.

To enable OOB connection full transparency

NOTE: You must first configure WAN visibility full address transparency for OOB connection full transparency to function correctly. For details, see Full Address Transparency on page 143.

1. Connect to the Riverbed CLI on the client-side Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering oobtransparency mode full
write memory

NOTE: You must save your changes to memory for your changes to take effect.

To disable OOB connection full transparency

1. Connect to the Riverbed CLI on the client-side Steelhead appliance. For details, see the Riverbed Command-Line Interface Reference Manual.

2. At the system prompt, enter the following set of commands:

enable
configure terminal
in-path peering oobtransparency mode none
write memory

NOTE: You must save your changes to memory for your changes to take effect.

Configuring WAN Visibility Modes

The following section describes how to configure WAN visibility modes using the RiOS CLI. You configure WAN visibility modes by creating an in-path auto-discovery rule on the client-side Steelhead appliance (where the connection is initiated). By default, the rule is placed before the default in-path rule, and after the Secure, Interactive, and RBT-Proto rules.

For transparent addressing to function correctly, both of the Steelhead appliances must have RiOS v5.0.x or later installed. If one Steelhead appliance does not support transparent addressing (that is, it has RiOS v4.1 or earlier installed), the Steelhead appliance attempting to optimize a connection in one of the transparent addressing modes automatically reverts to correct addressing mode, and optimization continues.

NOTE: If you configure transparent addressing on any of your Steelhead appliances, Riverbed recommends that all of your Steelhead appliances have RiOS v5.0.x or later installed. By default, Steelhead appliances use correct addressing (for all RiOS versions).

IMPORTANT: Enabling full address transparency requires symmetrical traffic flows from clients to servers. If any asymmetry exists on the network, enabling full address transparency might yield unexpected results, including loss of connectivity. For more information, see Implications of Transparent Addressing on page 151.

WAN Visibility CLI Commands

This section summarizes the WAN visibility CLI commands. The following figure illustrates the IP addresses and ports used in the following tables.

Figure: Configuring WAN Visibility Modes

The following table summarizes the port transparency CLI commands.

Action: To enable port transparency for a specific server
Command: in-path rule auto-discover wan-visibility port dstaddr /32 dstport 80

Action: To enable full address transparency for a specific group of servers, and port transparency for servers not in the group
Commands:
in-path rule auto-discover wan-visibility full dstaddr /24
in-path rule auto-discover wan-visibility port
IMPORTANT: In this example, the first in-path rule must precede the second in-path rule in the rule list. To specify the placement of a rule in the list, use the rulenum CLI option. For details, see the Riverbed Command-Line Interface Reference Manual.

Action: To disable port transparency
Command: Delete the in-path rule that enables it. For details about deleting in-path rules, see the Riverbed Command-Line Interface Reference Manual.
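Because in-path rules are evaluated first-match, the rule-ordering requirement in the table above can be illustrated with a sketch. This is a conceptual model only, not the appliance's rule engine: real rules match on more fields, and the subnets below are documentation examples.

```python
import ipaddress

def visibility_for(rules, dst_ip, dst_port=None):
    """Return the WAN visibility mode chosen by the first matching rule.
    rules is an ordered list of (dst_subnet, dst_port_or_None, mode)."""
    for subnet, port, mode in rules:
        in_subnet = ipaddress.ip_address(dst_ip) in ipaddress.ip_network(subnet)
        if in_subnet and (port is None or port == dst_port):
            return mode
    return "correct"  # default behavior when no auto-discover rule matches

rules = [
    ("192.0.2.0/24", None, "full"),   # specific group of servers: full
    ("0.0.0.0/0",    None, "port"),   # everything else: port transparency
]
# Reversing the two rules would send the 192.0.2.0/24 traffic to the
# catch-all "port" rule first, which is why the more specific rule
# must precede the general one.
```

This mirrors the IMPORTANT note in the table: a catch-all rule placed before a more specific rule shadows it, so use the rulenum option to control placement.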

The following table summarizes the full address transparency CLI commands.

Action: To enable full address transparency globally
Command: in-path rule auto-discover wan-visibility full

Action: To enable full address transparency for servers in a specific IP address range
Command: in-path rule auto-discover wan-visibility full dstaddr /16

Action: To enable full address transparency for a specific server
Command: in-path rule auto-discover wan-visibility full dstaddr /32

Action: To enable full address transparency for a specific group of servers, and port transparency for servers not in the group
Commands:
in-path rule auto-discover wan-visibility full dstaddr /24
in-path rule auto-discover wan-visibility port
IMPORTANT: In this example, the first in-path rule must precede the second in-path rule in the rule list. To specify the placement of a rule in the list, use the rulenum CLI option. For details, see the Riverbed Command-Line Interface Reference Manual.

Action: To disable full address transparency
Command: Delete the in-path rule that enables it. For details about deleting in-path rules, see the Riverbed Command-Line Interface Reference Manual.

NOTE: You must save your changes to memory and restart the Steelhead appliance for your changes to take effect.

Implications of Transparent Addressing

This section describes some of the common problems that are inherent to transparent addressing. It includes the following sections:

- Stateful Systems, next
- Network Design Issues on page 152

NOTE: The problems described in this section occur with all proxy-based solutions.

Stateful Systems

Transparent addressing does not work with firewalls that track connection state. When a Steelhead appliance establishes a second TCP connection to optimize data, it uses the same IP address and port numbers as the initial client and server connection in the packet header. However, the Steelhead appliance makes changes to the initial SYN probe sequence number.
This SYN probe sequence number change means that the sequence number on the initial client and server connection is different from the sequence number on the transparent connection. The stateful firewall might detect this change, raise an alarm, and disallow the second connection.

Transparent addressing also does not work with intrusion detection and prevention systems that perform stateful packet inspection. Steelhead appliances use a proprietary Riverbed application protocol to communicate. When intrusion detection and prevention systems perform stateful packet inspection, they expect to see an application protocol based on the port numbers of the original client and server connection. When these systems discover the Riverbed proprietary application protocol instead, they perceive a mismatch, causing them to log the packet, drop it, trigger an alarm, or all of the above.

You can avoid these problems with stateful systems, which are inherent to transparent addressing, by using correct addressing.

Network Design Issues

This section describes some of the common networking problems that are inherent to transparent addressing. It includes the following sections:

Network Asymmetry, next
Mis-Routing Optimized Traffic on page 153
Firewalls Located Between Steelhead Appliances on page 154

Network Asymmetry

Enabling full address transparency increases the likelihood of problems inherent to asymmetric routing. For a network connection to be optimized, packets traveling in both network directions (from the server to the client, and from the client to the server) must pass through the same client-side and server-side Steelhead appliance. When client requests or server responses traverse alternate paths, the network has asymmetry.

When full address transparency is used, the router no longer sees the Steelhead appliance addresses; it sees the client and server addresses. The router sends the packet toward the client or server IP address instead of the Steelhead appliance. If the packet traverses an alternate route on which no Steelhead appliance is installed, the connection fails.
The following figure illustrates an asymmetric server-side network in which a server response can traverse a path (the bottom path) in which a Steelhead appliance is not installed.

Figure Server-Side Asymmetric Network

To ensure that all required traffic is optimized and accelerated, a Steelhead appliance must be installed on every possible path that a packet traverses. Connection forwarding must also be configured and enabled for each Steelhead appliance. For details, see Connection Forwarding on page 34. If there is a path that does not have a Steelhead appliance, it is possible that some traffic will not be optimized. For details about how to eliminate asymmetric routing problems, see Troubleshooting Deployment Problems on page 165.

You can avoid this type of asymmetric routing problem, which is inherent to transparent addressing, by using correct addressing.

NOTE: With RiOS v3.0.x and later, you can configure your Steelhead appliances to automatically detect and report asymmetric routes within your network. For details, see the Steelhead Management Console User's Guide.

Mis-Routing Optimized Traffic

Enabling transparent addressing introduces the likelihood of mis-routing optimized traffic in the event of a Steelhead appliance failure. Steelhead appliances use a proprietary Riverbed protocol to communicate. Normally, a functioning server-side Steelhead appliance receives a packet from the WAN and converts the packet to its native format before forwarding it to the server. In an environment in which transparent addressing is used, if the server-side Steelhead appliance is not functioning, or if a packet is routed along an alternative network path, the packet might go from the client-side Steelhead appliance directly to the server. Because the server-side Steelhead appliance does not have an opportunity to convert the packet to its native format, the server cannot recognize it, and the connection fails.

In most cases, the server is able to detect that a packet contains invalid payload information or, in this case, has an unrecognizable format. When the server detects the unrecognizable format, it rejects the packet and resets the TCP connection. If the connection is successfully reset, the client connects to the server without any Steelhead appliance involvement. However, data corruption might still occur.

This type of traffic mis-routing can occur in both directions across the WAN. If the client-side Steelhead appliance experiences a failure, or if an alternate network path exists from the server to the client, traffic might go from the server-side Steelhead appliance directly to the client.

IMPORTANT: Before enabling full address transparency, carefully consider the risks and exposures in the event that a server accepts and routes a packet that has an unrecognizable format.
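The connection forwarding requirement described under Network Asymmetry, above, can be sketched as follows. This is a minimal sketch, not a tested configuration; the neighbor in-path address 10.2.2.5 is a hypothetical value:

  in-path neighbor enable
  in-path neighbor ip address 10.2.2.5

Each Steelhead appliance lists the in-path address of every other appliance that packets for the same connection might traverse. For the complete procedure, see Connection Forwarding on page 34.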

The following figure illustrates a traffic mis-route when the server-side Steelhead appliance fails on a network using transparent addressing.

Figure Transparent Addressing and Mis-Routing Optimized Traffic

The traffic is routed in the following manner:

1. Client A sends an HTTP packet to Steelhead B.
2. Steelhead B converts the packet to the proprietary Riverbed protocol and forwards it to Steelhead C.
3. Steelhead C is not functioning and cannot convert the packet to HTTP. The proprietary Riverbed protocol packet goes directly to server D.
4. The server does not recognize the packet format, and the connection fails.

You can avoid this type of mis-routing problem, which is inherent to transparent addressing, by using correct addressing. If correct addressing is configured for this scenario, the client-side Steelhead appliance detects that the server-side Steelhead appliance has failed. The client-side Steelhead appliance automatically resets the client connection, allowing the client to connect directly to the server without Steelhead appliance involvement.

Firewalls Located Between Steelhead Appliances

If your firewall inspects traffic between two Steelhead appliances, there are addressing issues that you need to be aware of.

Figure Firewalls and Transparent Addressing


Cascade Sensor Installation Guide. Version 8.2 March 2009

Cascade Sensor Installation Guide. Version 8.2 March 2009 Cascade Sensor Installation Guide Version 8.2 March 2009 Trademarks Riverbed, the Riverbed logo, Riverbed Cascade, and Cascade are trademarks of Riverbed Technology, Inc. Intel is a registered trademark

More information

Getting Started. Contents

Getting Started. Contents Contents 1 Contents Introduction................................................... 1-2 Conventions................................................... 1-2 Feature Descriptions by Model................................

More information

ProSAFE 8-Port 10-Gigabit Web Managed Switch Model XS708Ev2 User Manual

ProSAFE 8-Port 10-Gigabit Web Managed Switch Model XS708Ev2 User Manual ProSAFE 8-Port 10-Gigabit Web Managed Switch Model XS708Ev2 User Manual April 2016 202-11656-01 350 East Plumeria Drive San Jose, CA 95134 USA Support Thank you for purchasing this NETGEAR product. You

More information

American Dynamics RAID Storage System iscsi Software User s Manual

American Dynamics RAID Storage System iscsi Software User s Manual American Dynamics RAID Storage System iscsi Software User s Manual Release v2.0 April 2006 # /tmp/hello Hello, World! 3 + 4 = 7 How to Contact American Dynamics American Dynamics (800) 507-6268 or (561)

More information

DEPLOYMENT GUIDE DEPLOYING F5 WITH ORACLE ACCESS MANAGER

DEPLOYMENT GUIDE DEPLOYING F5 WITH ORACLE ACCESS MANAGER DEPLOYMENT GUIDE DEPLOYING F5 WITH ORACLE ACCESS MANAGER Table of Contents Table of Contents Introducing the F5 and Oracle Access Manager configuration Prerequisites and configuration notes... 1 Configuration

More information

Monitoring WAAS Using WAAS Central Manager. Monitoring WAAS Network Health. Using the WAAS Dashboard CHAPTER

Monitoring WAAS Using WAAS Central Manager. Monitoring WAAS Network Health. Using the WAAS Dashboard CHAPTER CHAPTER 1 This chapter describes how to use WAAS Central Manager to monitor network health, device health, and traffic interception of the WAAS environment. This chapter contains the following sections:

More information

Request for Proposal (RFP) for Supply and Implementation of Firewall for Internet Access (RFP Ref )

Request for Proposal (RFP) for Supply and Implementation of Firewall for Internet Access (RFP Ref ) Appendix 1 1st Tier Firewall The Solution shall be rack-mountable into standard 19-inch (482.6-mm) EIA rack. The firewall shall minimally support the following technologies and features: (a) Stateful inspection;

More information

Configuring TCP Header Compression

Configuring TCP Header Compression Configuring TCP Header Compression First Published: January 30, 2006 Last Updated: May 5, 2010 Header compression is a mechanism that compresses the IP header in a packet before the packet is transmitted.

More information

CISCO EXAM QUESTIONS & ANSWERS

CISCO EXAM QUESTIONS & ANSWERS CISCO 642-655 EXAM QUESTIONS & ANSWERS Number: 642-655 Passing Score: 800 Time Limit: 120 min File Version: 70.0 http://www.gratisexam.com/ CISCO 642-655 EXAM QUESTIONS & ANSWERS Exam Name: WAASFE-Wide

More information

Configuring RTP Header Compression

Configuring RTP Header Compression Configuring RTP Header Compression First Published: January 30, 2006 Last Updated: July 23, 2010 Header compression is a mechanism that compresses the IP header in a packet before the packet is transmitted.

More information

Observer Probe Family

Observer Probe Family Observer Probe Family Distributed analysis for local and remote networks Monitor and troubleshoot vital network links in real time from any location Network Instruments offers a complete line of software

More information

Configuring High Availability (HA)

Configuring High Availability (HA) 4 CHAPTER This chapter covers the following topics: Adding High Availability Cisco NAC Appliance To Your Network, page 4-1 Installing a Clean Access Manager High Availability Pair, page 4-3 Installing

More information

Configuring Traffic Interception

Configuring Traffic Interception 4 CHAPTER This chapter describes the WAAS software support for intercepting all TCP traffic in an IP-based network, based on the IP and TCP header information, and redirecting the traffic to wide area

More information

Software Update C.09.xx Release Notes for the HP Procurve Switches 1600M, 2400M, 2424M, 4000M, and 8000M

Software Update C.09.xx Release Notes for the HP Procurve Switches 1600M, 2400M, 2424M, 4000M, and 8000M Software Update C.09.xx Release Notes for the HP Procurve Switches 1600M, 2400M, 2424M, 4000M, and 8000M Topics: TACACS+ Authentication for Centralized Control of Switch Access Security (page 7) CDP (page

More information

HP Load Balancing Module

HP Load Balancing Module HP Load Balancing Module System Management Configuration Guide Part number: 5998-4216 Software version: Feature 3221 Document version: 6PW100-20130326 Legal and notice information Copyright 2013 Hewlett-Packard

More information

Fundamental Questions to Answer About Computer Networking, Jan 2009 Prof. Ying-Dar Lin,

Fundamental Questions to Answer About Computer Networking, Jan 2009 Prof. Ying-Dar Lin, Fundamental Questions to Answer About Computer Networking, Jan 2009 Prof. Ying-Dar Lin, ydlin@cs.nctu.edu.tw Chapter 1: Introduction 1. How does Internet scale to billions of hosts? (Describe what structure

More information

HP A5500 EI & A5500 SI Switch Series Network Management and Monitoring. Configuration Guide. Abstract

HP A5500 EI & A5500 SI Switch Series Network Management and Monitoring. Configuration Guide. Abstract HP A5500 EI & A5500 SI Switch Series Network Management and Monitoring Configuration Guide Abstract This document describes the software features for the HP A Series products and guides you through the

More information

See Network Integrity Installation Guide for more information about software requirements and compatibility.

See Network Integrity Installation Guide for more information about software requirements and compatibility. Oracle Communications Network Integrity Release Notes Release 7.3.2 E66035-01 May 2016 This document provides information about Oracle Communications Network Integrity Release 7.3.2. This document consists

More information

H3C S1850 Gigabit WEB Managed Switch Series

H3C S1850 Gigabit WEB Managed Switch Series DATASHEET H3C S1850 Gigabit WEB Managed Switch Series Product overview The H3C 1850 Switch Series consists of advanced smart-managed fixed-configuration Gigabit switches designed for small businesses in

More information

HP Load Balancing Module

HP Load Balancing Module HP Load Balancing Module Load Balancing Configuration Guide Part number: 5998-4218 Software version: Feature 3221 Document version: 6PW100-20130326 Legal and notice information Copyright 2013 Hewlett-Packard

More information