Load Balancing Technology White Paper


Keywords: Server, gateway, link, load balancing, SLB, LLB

Abstract: This document describes the background, implementation, and operating mechanism of the load balancing technology. In addition, it briefly presents load balancing applications.

Acronyms:

ACL     Access Control List
ARP     Address Resolution Protocol
DNAT    Destination NAT
DNS     Domain Name Service
DR      Direct Routing
FTP     File Transfer Protocol
HTTP    Hypertext Transfer Protocol
ICMP    Internet Control Message Protocol
ISP     Internet Service Provider
IPsec   IP Security
LB      Load Balancing
LLB     Link Load Balancing
NAT     Network Address Translation
OAA     Open Application Architecture
OSPF    Open Shortest Path First
RADIUS  Remote Authentication Dial-In User Service
RTSP    Real Time Streaming Protocol
RTT     Round Trip Time
SIP     Session Initiation Protocol
SLB     Server Load Balancing
SMTP    Simple Mail Transfer Protocol
SSL     Secure Socket Layer
TCP     Transmission Control Protocol
TTL     Time To Live
UDP     User Datagram Protocol
URI     Uniform Resource Identifier
URL     Uniform Resource Locator
VPN     Virtual Private Network
VRRP    Virtual Router Redundancy Protocol
VSIP    Virtual Service IP Address

Hewlett-Packard Development Company, L.P.

Table of Contents

Overview
  Background
    Background of Server Load Balancing
    Background of Gateway Load Balancing
    Background of Link Load Balancing
  Benefits
Implementation
  Concepts
  Server Load Balancing
    NAT-Mode Layer 4 Server Load Balancing
    DR-Mode Layer 4 Server Load Balancing
    Layer 7 Server Load Balancing
  Gateway Load Balancing
    Combination of Server Load Balancing and Gateway Load Balancing
  Link Load Balancing
    Outbound Link Load Balancing
    Inbound Link Load Balancing
  Stateful Failover of LB Devices
  Load Balancing Deployment Modes
    Direct Connection Mode
    Bypass Mode and OAA
Technical Characteristics
  Rich Load Balancing Scheduling Algorithms
    Static Scheduling Algorithms
    Dynamic Scheduling Algorithms
  Best Performing Link Function
  Rich Health Monitoring Methods
  Persistence Function
    Persistence Function with an Explicit Association
    Persistence Function with an Implicit Association
  Layer-4 and Layer-7 Server Load Balancing
  Flexible Real Service/Logical Link Troubleshooting Methods
  Rich Real Service Group Match Criteria
  Slow-Online
  Enabling Stopping Service or Slow-Offline
Application Scenarios
  Application in Campus Networks
  Application in Data Centers and Large Portal Websites

Overview

Background

Background of Server Load Balancing

The growth of services brings heavy traffic to networks, especially to data centers, large enterprises, and portal websites. In addition, websites provide more and more information through applications such as HTTP, FTP, and SMTP. Most websites (especially electronic business websites) have to provide services around the clock, and any service interruption or loss of key data in communication results in business loss. All of this demands high performance and high reliability from application services. However, server processing speed and memory access speed grow far more slowly than network bandwidth and application demand, and growing network bandwidth makes server resource consumption even heavier. As a result, the servers become the network bottleneck, and the traditional single-device mode becomes a single point of failure in the network.

Figure 1 Shortage of the current network

The following solutions address the above problems:

Server hardware upgrade: replacing low-performance servers with high-performance servers.

Disadvantages:

High cost: High-performance servers are expensive, and the original low-performance servers are left unused, wasting resources.

Poor expandability: Growing services require high investment in repeated hardware upgrades, and the problems of the current network, such as single point of failure and server shortage, are not completely solved.

Building a server cluster and balancing load among the servers in the cluster with the load balancing technology

Multiple servers form a server cluster, with each server providing the same or similar services. A load balancing device (LB device) is deployed at the front end of the server cluster to distribute user requests within the cluster according to preconfigured load balancing rules, provide services, and maintain the servers.

Advantages:

Low cost: The available resources are not wasted, and no high-end devices are needed for the new resources.

Expandability: When services increase, the system can satisfy the needs by adding servers, without affecting the existing services or reducing service quality.

High reliability: When a server fails, the LB device redistributes its user requests to other servers in the same cluster, ensuring uninterrupted services.

Figure 2 Load balancing technology

Background of Gateway Load Balancing

Gateways such as SSL VPN gateways, IPsec gateways, and firewalls easily become network bottlenecks because of the complexity of their service processing. Take firewalls as an example. Firewalls are an indispensable part of network deployment, but the packet filtering they must perform lowers their forwarding performance, so they tend to become the bottleneck of the network. If hardware is upgraded by discarding the available devices, resources are wasted; and as services increase, devices must be upgraded frequently, which is costly. The concept of the gateway cluster is introduced to solve this problem: multiple gateways are connected to the network to form a gateway cluster, enhancing the processing capability of the network.
Background of Link Load Balancing

To avoid the network availability problems caused by carrier dedicated line faults and to solve the access problems caused by a shortage of network bandwidth, an enterprise may rent two or more carrier dedicated lines. To make better use of the dedicated lines and provide better services for enterprises, policy routing can be applied. However, policy routing is not easy to configure and cannot adapt to network structure changes. In addition, it cannot distribute packets based on bandwidth, so links with a high throughput cannot be used to their full extent. Link load balancing balances load among multiple links with a dynamic algorithm and adapts to network changes.

Benefits

Load balancing provides a cost-effective, efficient, and transparent method to expand the bandwidth of network devices and servers, increase throughput, and enhance data processing capability, increasing the flexibility and availability of networks. Load balancing features the following advantages:

High performance: Distributes services evenly to multiple servers, removing the bottleneck that may exist in a single system.

Scalability: Facilitates the addition of network devices or links, meeting ever-increasing service requirements without decreasing service quality.

Reliability: Monitors the status of the application servers in real time with the health monitoring function, keeping the entire system available when some hardware or software fails.

Transparency: Effectively enables a loosely coupled service system composed of multiple independent computers to form a virtual server. Adding or removing servers does not affect normal services, and users are not aware of the changes.

Implementation

Concepts

Virtual service: Services provided by LB devices are virtual services. Configured on an LB device, a virtual service is uniquely identified by the VPN instance, virtual service IP address, service protocol, and service port number. Access requests of users are sent to the LB device through a public or private network. If a request matches the virtual service, the LB device distributes it to a real service.

Real service: Services provided by real servers are real services. A real service can be a traditional FTP or HTTP service, or a forwarding service in a generic sense. For example, a real service in firewall load balancing is the packet forwarding path.
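As a minimal sketch of the virtual service concept described above — a virtual service uniquely identified by the tuple of VPN instance, VSIP, protocol, and port — the lookup can be modeled as a keyed table. The field names and addresses below are hypothetical, not H3C's implementation:

```python
# Sketch: a virtual service is uniquely identified by VPN instance,
# virtual service IP address, service protocol, and service port.
from typing import NamedTuple, Optional

class VirtualServiceKey(NamedTuple):
    vpn_instance: Optional[str]   # None for the public network
    vsip: str
    protocol: str                 # e.g. "TCP" or "UDP"
    port: int

# Table mapping each virtual service to the real services backing it.
virtual_services = {
    VirtualServiceKey(None, "203.0.113.10", "TCP", 80): ["10.0.0.1", "10.0.0.2"],
}

def match_virtual_service(vpn, dst_ip, protocol, dst_port):
    """Return the real service group for a request, or None if no virtual
    service matches (in which case the LB device would not load balance it)."""
    return virtual_services.get(VirtualServiceKey(vpn, dst_ip, protocol, dst_port))
```

A request for TCP port 80 on 203.0.113.10 would match and be scheduled to one of the real services; any other tuple falls through unmatched.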
OAA: The H3C-proprietary Open Application Architecture (OAA) provides a complete set of standard software and hardware interfaces. Third-party vendors can develop products with specific functions, and these products are compatible with H3C devices as long as they conform to the OAA interface standards. Therefore, the functions of a single network product can be expanded, and users get more benefits.

Persistence function:

The persistence function directs multiple connections belonging to the same application layer session to the same server, ensuring that the same service is processed by the same server (or forwarded over the same link) and reducing the number of times the LB device must schedule the traffic.

Load balancing scheduling algorithm: An LB device distributes service traffic to different real services (a real service corresponds to a server in server load balancing, a gateway in gateway load balancing, and a link in link load balancing) according to a load balancing scheduling algorithm.

Best performing link: The best performing link function enables an LB device to detect link status in real time and select the best link according to the detection result, ensuring that traffic is forwarded over the best link.

Health monitoring: The health monitoring function allows an LB device to detect whether real servers can provide services. With different detection methods (health monitoring methods), the LB device can detect whether servers exist and whether they can provide services.

ISP table: The ISP table describes the IP address ranges of different carriers. Link load balancing searches the ISP table based on source addresses (inbound link load balancing) or destination addresses (outbound link load balancing) to obtain carrier information and select a physical link for the traffic accordingly.

Server Load Balancing

Server load balancing provides load balancing services for a group of servers in the same LAN that provide one group (or multiple groups) of the same (or similar) services at the same time. Server load balancing is the most common networking for data centers. It comprises Layer 4 server load balancing and Layer 7 server load balancing:

Layer 4 server load balancing supports IPv4 and IPv6 and is implemented based on streams. It distributes packets of the same stream to the same server.
Layer 4 server load balancing cannot distribute HTTP-based Layer 7 services based on content, which restricts the application scope of load balancing services. Layer 4 server load balancing is classified into Network Address Translation (NAT)-mode server load balancing and Direct Routing (DR)-mode server load balancing.

Layer 7 server load balancing supports only IPv4 and is implemented based on content. It analyzes packet content such as HTTP and RTSP, distributes packets one by one based on the content, and distributes connections to the specified servers according to predefined policies. Layer 7 server load balancing extends load balancing to a wider range of applications, and supports only NAT mode.

NAT-Mode Layer 4 Server Load Balancing

NAT-mode Layer 4 server load balancing features flexible networking; servers can be located in different LANs. When an LB device distributes a service request, it changes the destination IP address in the request to the IP address of the real service and forwards the request to that real service.

Figure 3 NAT-mode Layer 4 server load balancing

NAT-mode Layer 4 server load balancing includes the following basic elements:

LB device: a device that distributes different service requests to multiple servers.

Server: a server that responds to and processes different service requests.

VSIP: virtual service IP address of the cluster, used by users to request services.

Server IP: IP address of a server, used by the LB device to distribute service requests.

Mechanism

A client sends a request destined for the VSIP to the LB device connected to the server cluster. The virtual service configured on the LB device receives the request, selects a real server according to the persistence function, ACL rules, and the scheduling algorithm, rewrites the destination address with the address of the real server (destination NAT), and then sends the request to the selected real server. When the response from the real server passes through the LB device, the source IP address of the packet is changed back to the VSIP of the virtual service and the response is returned to the client. The load balancing scheduling process is then completed.
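The address rewriting just described can be sketched as follows. This is a simplified model with hypothetical addresses and a plain round robin scheduler standing in for the full selection logic; a real LB device does this per flow in its forwarding plane:

```python
# Sketch of NAT-mode dispatch: the LB device rewrites the destination IP
# of a request to the chosen real server (DNAT), and rewrites the source
# IP of the response back to the VSIP. Addresses are hypothetical.
from dataclasses import dataclass, replace
from itertools import cycle

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str

VSIP = "203.0.113.10"
servers = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # round robin scheduler

def dispatch_request(pkt: Packet) -> Packet:
    """DNAT: rewrite the destination (the VSIP) to a real server address."""
    assert pkt.dst_ip == VSIP
    return replace(pkt, dst_ip=next(servers))

def forward_response(pkt: Packet) -> Packet:
    """On the way back: rewrite the server's source IP to the VSIP so the
    client only ever sees the virtual service address."""
    return replace(pkt, src_ip=VSIP)
```

The client-facing invariant is visible here: whichever server handles the request, the response the client receives carries the VSIP as its source address.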

Work flow

Figure 4 Work flow of NAT-mode Layer 4 server load balancing

The following describes the work flow of NAT-mode Layer 4 server load balancing:

1. The host sends a request in which the source IP is the host IP and the destination IP is the VSIP.

2. Upon receiving the request, the LB device uses the persistence function or a scheduling algorithm to determine the server to which it distributes the request.

3. The LB device uses Destination NAT (DNAT) to distribute the request. The source IP is the host IP and the destination IP is the server IP.

4. The server receives and processes the request and then sends a response in which the source IP is the server IP and the destination IP is the host IP.

5. The LB device receives the response, translates the source IP, and forwards the response, in which the source IP is the VSIP and the destination IP is the host IP.

The work flow shows that NAT is used in this mode of server load balancing, hence the name NAT-mode server load balancing.

Technical characteristics

NAT-mode server load balancing features flexible networking and is applicable to various networking scenarios.

DR-Mode Layer 4 Server Load Balancing

In DR-mode Layer 4 server load balancing, only the requests from clients pass through the LB device; the responses from the servers bypass it, reducing the load on the LB device and preventing it from becoming the bottleneck of the network. When the LB device distributes a service request, it does not change the destination IP address; instead, it changes the destination MAC address to the MAC address of the real service and forwards the packet to the real service.

Figure 5 DR-mode Layer 4 server load balancing

DR-mode Layer 4 server load balancing includes the following basic elements:

LB device: a device that distributes different service requests to multiple servers.

Server: a server that responds to and processes different service requests.

VSIP: virtual service IP address of the cluster, used by users to request services.

Server IP: IP address of a server, used by the LB device to distribute requests.

Mechanism

In DR-mode Layer 4 server load balancing, both the LB device and the servers are configured with the VSIP. The VSIP on the servers must not respond to ARP requests, so it is configured on a loopback interface. A real server must also be configured with a real IP address in addition to the VSIP for communication with the LB device, and the LB device and the servers are in the same link-layer domain. Packets sent to the LB device are distributed to the corresponding real servers by the LB device, while the packets from the real servers are sent to the client directly, without passing through the LB device.

Work flow

Figure 6 Work flow of DR-mode Layer 4 server load balancing
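The DR-mode dispatch described above can be sketched as follows. The MAC and IP values are hypothetical; the essential point is that only the Layer 2 destination changes while the VSIP is preserved end to end:

```python
# Sketch of DR-mode dispatch: the destination IP (the VSIP) is left
# unchanged and only the destination MAC is rewritten to the chosen
# server's MAC. MAC and IP values are hypothetical.
from dataclasses import dataclass, replace
from itertools import cycle

@dataclass(frozen=True)
class Frame:
    dst_mac: str
    src_ip: str
    dst_ip: str   # stays the VSIP end to end

server_macs = cycle(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"])

def dispatch_dr(frame: Frame) -> Frame:
    """Rewrite only the destination MAC. Because the server holds the VSIP
    on a loopback interface, it accepts the packet and answers the client
    directly, bypassing the LB device on the return path."""
    return replace(frame, dst_mac=next(server_macs))
```

Because no IP header field is touched, the server's response can go straight back to the client, which is why only one traffic direction loads the LB device.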

The following describes the work flow of DR-mode Layer 4 server load balancing:

1. The host sends a request in which the source IP is the host IP and the destination IP is the VSIP.

2. Upon receiving the request, the general device forwards it to the LB device. Because the VSIP on the servers does not appear in ARP requests or responses, the general device forwards the request only to the LB device.

3. Upon receiving the request, the LB device uses the persistence function or a scheduling algorithm to determine the server to which it distributes the request.

4. The LB device distributes the request, in which the source IP is the host IP, the destination IP is the VSIP, and the destination MAC is the MAC address of the selected server.

5. Upon receiving the request, the server processes it and then sends a response in which the source IP address is the VSIP and the destination IP address is the host IP.

6. After receiving the response, the general device forwards it to the host.

The work flow shows that the LB device does not distribute requests by searching the routing table but routes them to the server by modifying the destination MAC address, hence the name DR-mode server load balancing.

Technical characteristics

Only one direction of the traffic passes through the LB device, so the LB device carries a light load, is unlikely to become the bottleneck of the network, and provides a higher forwarding capability.

Layer 7 Server Load Balancing

Figure 7 Network diagram for Layer 7 server load balancing

Layer 7 server load balancing includes the following basic elements:

LB device: a device that distributes different service requests to multiple servers.

Server: a server that responds to and processes different service requests.

Server group: a real service group is a logical concept. Servers can be classified into different groups according to their common attributes. For example, servers can be classified into a static storage server group and a dynamic switching server group according to their functions, or into a music server group, a video server group, and a picture server group according to the services they provide.

VSIP: virtual service IP address of the cluster, used by users to request services.

Server IP: IP address of a server, used by the LB device to distribute requests.

Mechanism

The client establishes a TCP connection with the LB device and sends a request destined for the VSIP. Upon receiving the request, the LB device selects an appropriate server group according to the persistence method, the real service group match criteria, and the scheduling algorithm. The LB device then uses the IP address of the client to establish a TCP connection with the real server, uses the IP address of the real server as the destination address of the client request, and sends the request to the real server. When the LB device receives the response from the real server, it changes the source address of the packet to the VSIP of the virtual service and returns it to the client. The load balancing process is then completed.

Work flow

Figure 8 Work flow of Layer 7 server load balancing (HIP = host IP, SIP = server IP):

1) SYN, HIP -> VSIP, seq=x
2) SYN ACK, VSIP -> HIP, seq=y, ack=x+1
3) ACK, HIP -> VSIP, seq=x+1, ack=y+1
4) Request, HIP -> VSIP, seq=x+1, ack=y+1
5) Scheduler
6) SYN, HIP -> SIP, seq=x
7) SYN ACK, SIP -> HIP, seq=z, ack=x+1
8) ACK, HIP -> SIP, seq=x+1, ack=z+1
9) Request, HIP -> SIP, seq=x+1, ack=z+1
10) Response, SIP -> HIP, seq=z+1, ack=x+n
11) Response, VSIP -> HIP, seq=y+1, ack=x+n
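The content-based selection step in the mechanism above can be sketched as follows. This is a simplified HTTP example; the group names, match rules, and addresses are hypothetical illustrations of real service group match criteria, not H3C's configuration syntax:

```python
# Sketch: Layer 7 load balancing inspects request content (here, the HTTP
# request line) to pick a server group, then a scheduling algorithm picks
# a server within that group. Groups and rules are hypothetical.
server_groups = {
    "static":  ["10.0.1.1", "10.0.1.2"],   # e.g. picture/static storage servers
    "dynamic": ["10.0.2.1", "10.0.2.2"],   # e.g. dynamic switching servers
}

def select_group(request_line: str) -> str:
    """Pick a server group from the URI in the HTTP request line."""
    uri = request_line.split()[1]
    if uri.startswith("/images/") or uri.endswith((".css", ".js")):
        return "static"
    return "dynamic"

def schedule(request_line: str, state={"n": 0}) -> str:
    """Round robin within the selected group."""
    group = server_groups[select_group(request_line)]
    server = group[state["n"] % len(group)]
    state["n"] += 1
    return server
```

A request for /images/logo.png lands in the static group while a request for /cart lands in the dynamic group — the distribution decision depends on content, which Layer 4 load balancing cannot do.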

The following describes the work flow of Layer 7 server load balancing:

1-3. The host and the LB device establish a TCP connection (three-way handshake).

4. The host sends a service request in which the source IP is the host IP and the destination IP is the VSIP.

5. Upon receiving the request, the LB device selects an appropriate server group for the request according to the real service group match criteria, uses a scheduling algorithm to determine the server in the group to which it distributes the request, and caches the request.

6. The LB device sends a SYN packet to the server. The sequence number is that of the SYN packet sent by the host; the source IP address is the host IP, and the destination IP address is the server IP.

7. The server sends a SYN ACK packet in which the destination IP address is the host IP.

8. After receiving the SYN ACK packet, the LB device responds with an ACK packet.

9. The LB device changes the destination IP and TCP sequence number of the request cached at step 5 and sends it to the server.

10. The server sends a response to the LB device, with the host IP as the destination IP.

11. The LB device changes the source IP and TCP sequence number in the response and sends it to the host.

Technical characteristics

Layer 7 server load balancing distributes packets based on packet content. It is applicable to networking scenarios where different servers provide different functions.

Gateway Load Balancing

Gateway load balancing includes the following basic elements:

LB device: A device that distributes traffic from the request sender to multiple gateways. LB devices fall into level 1 LB devices and level 2 LB devices. As shown in Figure 9, if traffic flows from Host A to Host B, LB device A is level 1 and LB device B is level 2; if traffic flows from Host B to Host A, LB device B is level 1 and LB device A is level 2.

Gateway: A gateway that processes data, such as an SSL VPN gateway, an IPsec gateway, or a firewall.

Take firewall load balancing as an example. The network diagram is shown in Figure 9.

Figure 9 Network diagram for sandwich load balancing

Mechanism

Firewalls provide services based on sessions, so the requests and responses of one session must pass through the same firewall. To ensure that the firewall services are performed normally and the internal networking is not affected, firewall sandwich load balancing should be adopted. In a firewall sandwich load balancing environment, the level 1 LB device performs firewall load balancing for incoming traffic, and the level 2 LB device ensures that the traffic coming from a firewall returns through the same firewall. Outgoing traffic is processed in the reverse way.

Work flow

Figure 10 Firewall load balancing work flow

The following describes the work flow of firewall load balancing:

1. LB device A receives the traffic from the source.

2. LB device A forwards the traffic to a firewall based on the persistence function and the scheduling algorithm. (The source address hashing algorithm is typically used because firewalls provide services based on sessions.)

3. The firewall forwards the traffic to LB device B.

4. As a level 2 LB device, LB device B records the firewall that forwarded the traffic and then forwards the traffic to the destination.

5. LB device B receives the traffic sent back from the destination.

6. LB device B forwards the traffic to the firewall recorded in step 4.

7. The firewall forwards the traffic to LB device A.

8. LB device A forwards the traffic back to the source.

The firewalls sandwiched between the two LB devices share the network traffic, so network performance is increased. This load balancing mode is therefore also called sandwich load balancing.

Technical characteristics

Gateway load balancing enhances firewall networking flexibility and is applicable to a wide range of networking environments.

Combination of Server Load Balancing and Gateway Load Balancing

Gateway load balancing can be used together with server load balancing. Take the combination of firewall load balancing and server load balancing as an example. The network diagram is shown in Figure 11.

Figure 11 Network diagram for combination of firewall load balancing and server load balancing

In the figure above, Cluster A adopts firewall load balancing, and Cluster B adopts NAT-mode server load balancing. Combining the two modes means combining their work flows. This networking mode not only prevents the firewalls from becoming the bottleneck of the network, but also enhances the performance and availability of different network services such as HTTP and FTP.

Link Load Balancing

Link load balancing falls into outbound link load balancing and inbound link load balancing, according to the direction of the traffic.

Outbound Link Load Balancing

When multiple links exist between the intranet and the extranet, the traffic of intranet users accessing the extranet can be balanced among the links with the outbound link load balancing function, as shown in Figure 12.

Figure 12 Outbound link load balancing network diagram

Mechanism

Outbound link load balancing includes the following basic elements:

LB device: A device that distributes traffic from the intranet to the extranet over multiple physical links.

Physical links: Links provided by carriers.

VSIP: The virtual service IP address; in outbound link load balancing, it is the destination network segment of the packets sent by intranet users.

After a user sends packets destined for the VSIP to the LB device, the LB device selects the best physical link according to the persistence function, ACL rules, the best performing link algorithm, the ISP table, and the scheduling algorithm, and distributes the traffic from the intranet to the extranet over that link.
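The ISP-table lookup used in outbound link selection can be sketched as follows. This is a toy model: the prefixes and link names are hypothetical, and a real ISP table holds many carrier prefixes per link:

```python
# Sketch: outbound link load balancing matches the destination address
# of outgoing traffic against carrier address ranges from the ISP table
# and picks the egress link of the matching carrier, so traffic reaches
# its destination without crossing carrier boundaries.
import ipaddress

# Hypothetical ISP table: carrier prefix -> egress link.
isp_table = {
    ipaddress.ip_network("198.51.100.0/24"): "link-ISP1",
    ipaddress.ip_network("203.0.113.0/24"):  "link-ISP2",
}

def select_outbound_link(dst_ip: str, default: str = "link-ISP3") -> str:
    """Return the carrier link whose prefix contains dst_ip; fall back to a
    default link (where a scheduling algorithm would take over) otherwise."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, link in isp_table.items():
        if addr in prefix:
            return link
    return default
```

In practice this carrier match is only one input; the persistence function, ACL rules, the best performing link algorithm, and the scheduling algorithm all weigh into the final choice.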

Work flow

Figure 13 Outbound link load balancing work flow

The following describes the work flow of outbound link load balancing:

1. The LB device receives traffic from intranet users.

2. The LB device selects a link according to the persistence function, ACL rules, the best performing link algorithm, the ISP table, and the scheduling algorithm. (In outbound link load balancing, the best performing link algorithm or a bandwidth-based scheduling algorithm is typically used for traffic distribution.)

3. The LB device forwards the traffic over the selected link.

4. The LB device receives the return traffic from the extranet.

5. The LB device forwards the traffic to the intranet users.

Technical characteristics:

Works together with a NAT gateway. Different links use different source IP addresses, ensuring that requests and responses travel over the same link.

Checks the connectivity of any node on a link through health monitoring, ensuring the reachability of the entire path.

Balances traffic among multiple links through a scheduling algorithm and supports load balancing based on bandwidth.

Dynamically calculates link quality by using the best performing link algorithm and distributes traffic to the best link.

Inbound Link Load Balancing

When multiple links exist between the intranet and the extranet, the traffic of extranet users accessing the intranet can be balanced among the links with the inbound link load balancing function, as shown in Figure 14.

Figure 14 Inbound link load balancing network diagram

Mechanism

Inbound link load balancing includes the following basic elements:

LB device: A device that forwards traffic from the extranet to the intranet through different physical links to balance traffic among them. The LB device also works as the authoritative name server for the domain name to be resolved.

Physical links: Links provided by carriers.

Local DNS server: Resolves the DNS requests sent from extranet users and forwards the requests to the authoritative name server, namely, the LB device.

In inbound link load balancing, the LB device works as an authoritative name server and records the mappings between domain names and the IP addresses of the intranet servers. One domain name can map to multiple IP addresses, each corresponding to one physical link. When an extranet user accesses the internal server through DNS, the local DNS server forwards the DNS request to the authoritative name server, namely, the LB device. The LB device selects the best link using ACL rules, the best performing link algorithm, and the ISP table, and returns the IP address of the interface connecting that link to the extranet as the DNS resolution result, so the extranet user can access the internal server through that link.
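The DNS-based selection described above can be sketched as follows. The domain, addresses, and carrier ranges are hypothetical, and a real authoritative answer would also weigh link quality from the best performing link algorithm:

```python
# Sketch: for inbound link load balancing, the LB device (acting as the
# authoritative name server) answers a DNS query with the interface
# address of the link it selects -- here, by matching the querying
# resolver's address against carrier ranges from the ISP table.
import ipaddress

# Interface address on each carrier link mapped to the same domain name.
link_addresses = {
    "ISP1": "198.51.100.10",
    "ISP2": "203.0.113.10",
}
isp_ranges = {
    "ISP1": ipaddress.ip_network("198.51.0.0/16"),
    "ISP2": ipaddress.ip_network("203.0.113.0/24"),
}

def resolve(domain: str, resolver_ip: str) -> str:
    """Answer a DNS query with the address of the best-matching link, so
    the user enters the intranet over their own carrier's link."""
    addr = ipaddress.ip_address(resolver_ip)
    for isp, net in isp_ranges.items():
        if addr in net:
            return link_addresses[isp]   # same carrier: preferred path
    return link_addresses["ISP1"]        # fallback link
```

Because the answer steers where the very first client packet goes, inbound balancing happens before any traffic reaches the links, unlike the outbound case.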

Work flow

Figure 15 Inbound link load balancing work flow

The following describes the work flow of inbound link load balancing:

1. An extranet user sends a DNS request to its local DNS server to access the internal server.

2. The local DNS server forwards the request to the authoritative name server corresponding to the domain name, namely, the LB device.

3. The LB device selects the best link according to the persistence method, ACL rules, the best performing link algorithm, and the ISP table, and uses the IP address of the interface connecting that link to the extranet as the DNS resolution result. (In inbound link load balancing, the best performing link algorithm is used to balance the traffic from the extranet to the intranet among multiple links.)

4. The LB device sends the DNS response to the local DNS server.

5. The local DNS server forwards the resolution result to the user.

6. The user accesses the internal server over the selected link.

Technical characteristics:

Can be combined with server load balancing so that the traffic of extranet users accessing the internal servers is balanced among multiple links and among multiple servers at the same time.

Checks the connectivity of physical links through health monitoring.

Dynamically calculates link quality by using the best performing link algorithm and guides subsequent services to the optimal egress.

Stateful Failover of LB Devices

An LB device is on the critical path in server load balancing, gateway load balancing, and link load balancing alike, so the scalability and security of the LB device affect the availability of the network. To avoid a single point of failure, the LB device must support stateful failover: two LB devices back up each other's services through a backup link, ensuring that the service states on the two devices are the same.

When one device fails, the services are switched to the other through VRRP or dynamic routing (for example, OSPF). Because the services of the failed device have been backed up on the other device, the backup device can carry the service data, largely avoiding interruption of network services.

Stateful failover of LB devices supports two working modes:

Main/backup mode: Of the two LB devices, one acts as the main device and the other as the backup device. The main device processes all services and sends the service information to the backup device over the backup link; the backup device does not process services and only acts as the backup. When the main device fails, the backup device takes over, ensuring that new load balancing services can be processed normally and that the currently running load balancing services are not interrupted.

Load balancing mode: Both LB devices are main devices that process service traffic, and each backs up the service information of the other. When one device fails, the other processes all the services, ensuring that new load balancing services can be processed normally and that the currently running load balancing services are not interrupted.

Take gateway load balancing as an example. The stateful failover network diagram is shown in Figure 16.

Figure 16 Gateway load balancing stateful failover network diagram

Load Balancing Deployment Modes

LB devices are deployed in a network in either direct connection mode or bypass mode.

Direct Connection Mode

The following figure shows load balancing in direct connection mode. The LB device is deployed at the center of the network, and the load balancing packets between the servers and clients are routed by the LB device.

Figure 17 Direct connection mode network diagram

Bypass Mode and OAA

As shown in Figure 18, in bypass mode the LB device does not act as the routing device between the servers and clients; instead, it is attached to a routing device. Bypass mode can also be used with the OAA: installing a card with the LB function in a routing device implements bypass-mode load balancing. In DR mode, the LB device can work only in bypass mode.

Figure 18 Bypass mode network diagram

In bypass mode, the configuration of the routing/switching device is critical. To enable traffic from the clients to the servers to reach the LB device, a route to the VSIP must be configured on the routing/switching device. If the traffic returned from the servers to the clients must also pass through the LB device, you can direct it to the LB device in any of the following ways:

- The servers and the LB device reside in the same Layer 2 network, and the servers use the LB device as their gateway.
- Policy routing is configured on the routing/switching device to direct the traffic returned from the servers to the LB device.
- The LB device performs source NAT when forwarding traffic from the clients.

Technical Characteristics

Rich Load Balancing Scheduling Algorithms

A scheduling algorithm allows an LB device to distribute the traffic to be balanced to specific servers in a server cluster, or specific links in a link cluster, according to certain rules, so that the load is balanced across the servers or links. Load balancing supports rich scheduling algorithms, each implementing a different load balancing policy, and users can adopt different algorithms for different application environments.

Static Scheduling Algorithms

Static scheduling algorithms distribute service requests according to preconfigured rules, without considering the current load of each real service or link. They feature simple implementation and fast scheduling.

Round robin scheduling

Distributes requests to the servers or links in turn, so that each real service or link receives an equal share of connection requests. For example, suppose a real service cluster contains three real services A, B, and C, and none of them has reached its maximum number of connections; connections are then distributed to A, B, and C in the proportion 1:1:1. Round robin scheduling suits scenarios where the servers in a server cluster have equivalent performance, or the links in a link cluster have equivalent bandwidth.

Weighted round robin scheduling

Distributes requests to the servers or links based on the weights of the real services; a higher weight means more requests. Weights reflect differences in server performance or link bandwidth, so this algorithm compensates for unequal servers or links. For example, suppose real services A, B, and C have weights 4, 3, and 2, and none of them has reached its maximum number of connections; connections are then distributed to A, B, and C in the proportion 4:3:2. Weighted round robin scheduling suits scenarios where server performance or link bandwidth differs within a cluster.

Random scheduling

Distributes requests to the servers or links randomly. Statistically, the users' connection requests are balanced across the servers or links. Random scheduling suits scenarios where the servers in a server cluster have equivalent performance, or the links in a link cluster have equivalent bandwidth.

Weighted random scheduling

Distributes requests to the servers or links randomly, in proportion to their weights; statistically, the resulting distribution is the same as that of the weighted round robin algorithm. Weighted random scheduling suits scenarios where server performance or link bandwidth differs within a cluster.

Source IP hashing scheduling

Distributes requests with the same source IP address to a specific server or link based on a hash function. Source IP hashing suits scenarios where all requests of the same user must be distributed to one server or link.

Source IP and source port hashing scheduling

Distributes requests with the same source IP address and source port to a specific server or link based on a hash function. Source IP and source port hashing suits scenarios where all requests of the same service from the same user must be distributed to one server or link.

Destination IP hashing scheduling

Distributes requests with the same destination IP address to a specific server or link based on a hash function. Destination IP hashing suits scenarios where all requests to the same destination must be distributed to one server or link. It applies to gateway load balancing and link load balancing.

UDP packet load hashing scheduling

Distributes requests that have the same content in specific fields of the UDP payload to a specific server.
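The hash-based scheduling just described can be sketched as follows. The helper names are assumptions, not the product's implementation; any stable hash of the flow key works.

```python
# Sketch of hash-based scheduling: a hash of the source IP (optionally
# plus the source port) picks the real server, so packets with the same
# key always reach the same server.
import hashlib

def hash_index(key, cluster_size):
    """Map a string key to a stable index in [0, cluster_size)."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % cluster_size

def source_ip_hash(source_ip, servers):
    return servers[hash_index(source_ip, len(servers))]

def source_ip_port_hash(source_ip, source_port, servers):
    return servers[hash_index(f"{source_ip}:{source_port}", len(servers))]

servers = ["rs-a", "rs-b", "rs-c"]
# All requests from one client map to the same server...
assert source_ip_hash("203.0.113.7", servers) == source_ip_hash("203.0.113.7", servers)
# ...while including the port lets different services of that client
# land on (possibly) different servers.
choice = source_ip_port_hash("203.0.113.7", 40000, servers)
assert choice in servers
```

Because the mapping depends only on the hashed key, it needs no per-flow state; the trade-off is that the distribution is only statistically even and changes if the cluster size changes.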
UDP packet load hashing suits scenarios where requests carrying the same content in specific fields of the UDP payload must be distributed to one server.

Dynamic Scheduling Algorithms

Unlike static scheduling algorithms, dynamic scheduling algorithms distribute connections according to the load status of the real services or physical links at run time, so the load is balanced more accurately.

Least connection scheduling

An LB device estimates the load of the servers or links from their numbers of connections, and distributes each new connection to the server or link with the fewest active connections. This algorithm smoothly distributes requests even when their loads differ widely.
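The weighted round robin and least connection algorithms above can be sketched side by side; this is a simplified, non-interleaved form with assumed names, not the device's scheduler.

```python
# Static vs. dynamic scheduling in miniature:
# - weighted round robin follows a fixed rule derived from weights;
# - least connection reacts to the current number of active connections.
import itertools

def weighted_round_robin(servers):
    """Expand each server by its weight and cycle through the list:
    a weight-4 server receives twice as many requests as a weight-2 one."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

def least_connection(active):
    """Pick the server with the fewest active connections."""
    return min(active, key=active.get)

# Weighted round robin with weights 4:3:2, as in the example above.
rr = weighted_round_robin([("A", 4), ("B", 3), ("C", 2)])
first_nine = [next(rr) for _ in range(9)]
assert first_nine.count("A") == 4 and first_nine.count("C") == 2

# Least connection ignores weights and looks only at current load.
assert least_connection({"A": 12, "B": 3, "C": 7}) == "B"
```

A production scheduler would interleave the weighted sequence (so a weight-4 server is not hit four times in a row) and, for least connection, track connection counters as sessions open and close.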

Least connection scheduling suits scenarios where the servers in a server cluster have equivalent performance, or the links in a link cluster have equivalent bandwidth, but the connections initiated by different users last for different lengths of time.

Weighted least connection scheduling

Keeps the number of active connections proportional to the weights of the servers or links when scheduling new connections. A weight indicates the processing capability of a server or the actual bandwidth of a link. Weighted least connection scheduling suits scenarios where server performance or link bandwidth differs within a cluster and the connections initiated by different users last for different lengths of time.

Bandwidth scheduling

An LB device distributes new connections to the link with the largest available bandwidth, based on the available bandwidth and weight of each link. Bandwidth scheduling suits scenarios where the links in a link cluster differ in status (such as bandwidth and congestion).

Best Performing Link Function

For outbound link load balancing, the best performing link function forwards a packet whose destination address matches a best performing link entry over the link in that entry. If no entry matches, best performing link detection is performed and the device generates a dynamic best performing link entry to guide packet forwarding.

For inbound link load balancing, the function allows an LB device to match the source IP address of a DNS request against the best performing link entries, use the physical link in the matched entry, and respond with the IP address in the corresponding DNS A record. If no entry matches, best performing link detection is performed and the device generates a dynamic best performing link entry for DNS resolution.

The best performing link function detects link status by using health monitoring. Outbound link load balancing probes toward the packet destination address, and inbound link load balancing probes toward the source address of the DNS request. The LB device calculates the best link from the detection results and then forwards the service traffic over it. Health monitoring methods supported by the best performing link function include:

DNS: Detects the availability of the remote DNS service and obtains link quality parameters through DNS packets. Only inbound link load balancing supports this method.
ICMP: Detects the reachability of the remote address and obtains link quality parameters through ICMP messages.
TCP half open: Detects the reachability of the remote address and obtains link quality parameters through a TCP half open connection.

The best performing link algorithm calculates a weight for each link from the following parameters and selects the best link according to the weights.

Link bandwidth: The available bandwidth of a link.
Link cost: The cost of each link. For example, if renting a 10 Mbps link from one carrier costs $500 and renting a 10 Mbps link from another carrier costs $1000, the cost ratio of the two links is 1:2.
Link delay (namely, RTT): Obtained through detection.
Routing hops (namely, TTL): Obtained through detection.

You can also manually add static best performing link entries. When both static and dynamic best performing link entries exist, the system matches the entries in depth-first order.

Rich Health Monitoring Methods

Health monitoring allows an LB device to check the status of real servers or links, collect the corresponding information, and quarantine servers or links that work abnormally. Health monitoring not only marks whether servers or links work normally, but also collects statistics, such as response times, for use when selecting servers or links. Load balancing supports rich health monitoring methods to check the running status of servers or links:

ICMP: Sends ICMP echo requests to the servers in the server cluster or to nodes on the links. If ICMP replies are received, the servers or links work normally.
TCP: Sends a TCP connection request to a port on a server in the server cluster. If the TCP connection is established successfully, the server works normally.
HTTP: Establishes a TCP connection to the HTTP port (80 by default) of a server in the server cluster and then sends an HTTP request. If the content of the HTTP response is correct, the server works normally.
FTP: Establishes a TCP connection to port 21 of a server in the server cluster and then retrieves a file from the server. If the content of the received file is correct, the server works normally.
TCP half open: Sends a TCP connection request to a port of a node on a link. If a TCP half open connection is established successfully, the link works normally.
DNS: Sends a DNS request to a server in a server cluster or to the DNS server on a link. If a correct DNS response is received, the server or link works normally.
RADIUS: Sends a RADIUS authentication request to a server in the server cluster. If an authentication success response is received, the server works normally.

SSL: Sends an SSL request to a server in the server cluster to establish an SSL connection. If the SSL connection is established, the server works normally.

Persistence Function

One session may include multiple TCP connections; examples are FTP (one control channel and multiple data channels) and HTTP. Some of these TCP connections have an explicit association: in an FTP application, for instance, the TCP connection of a data channel is negotiated over the control channel. Other TCP connections have an implicit association: in HTTP online shopping, for instance, multiple connections make up one service application, but the sub-channel information cannot be obtained through a parent channel as in FTP. All requests of such a service should nevertheless be sent to the same server, because the data packets carry implicit association information, such as a cookie; otherwise the requested functions may fail.

Directing multiple connections that belong to one application layer session to the same server is the persistence function. With this function enabled, session entries are created according to the persistence method, ensuring that subsequent service packets are sent to the same server for processing. For example, source IP addresses can be used to create persistence entries.

Persistence Function with an Explicit Association

As mentioned above, multiple TCP connections in an FTP application have an explicit association: the five-tuple of a sub-channel is obtained through negotiation over the parent channel. The LB device therefore analyzes the parent-channel packets of an FTP application one by one; as soon as it finds sub-channel negotiation information, it creates parent-channel session association entries from that information. When the first packet of the sub-channel arrives, the LB device creates session entries according to the association entries and the parent session entries, ensuring that the parent channel and the sub-channel are directed to the same server. Explicit associations are recognized automatically by the LB device; no policies need to be configured.

Persistence Function with an Implicit Association

Source IP persistence

Source IP persistence is used in Layer 4 server load balancing to ensure that the services of the same client are distributed to one server. After receiving the first request for a service from a client, the LB device creates a persistence entry recording the server assigned to the client. Within the aging time of the entry, service packets with the same source IP address are sent to that server for processing.

Destination IP persistence

Destination IP persistence is used in link load balancing to ensure that services to the same destination are distributed to the same link. After receiving the first request for a service from a client, the LB device creates a persistence entry recording the link assigned. Within the aging time of the entry, service packets with the same destination IP address are sent over that link.

Cookie persistence

Cookie persistence is used in Layer 7 server load balancing to ensure that packets of the same session are distributed to one server. Cookie persistence is classified into cookie insert and cookie get, based on whether the Set-Cookie field carrying server information is present in the server response.

Cookie insert: If a response sent by the server carries no Set-Cookie field with server information, the LB device adds one. Subsequent client requests then carry the inserted cookie; the LB device matches the cookie information and sends each request to the corresponding real server.

Cookie get: The response sent by the server carries Set-Cookie information. The LB device extracts the cookie value from the response according to the user-configured cookie ID. For subsequent client requests that match the cached cookie value, the LB device sends the requests to the corresponding real service.

SIP call-id persistence

SIP call-id persistence is used in Layer 7 server load balancing to ensure that SIP packets with the same call-id are distributed to the same server. After receiving the first request for a service from a client, the LB device creates a persistence entry recording the server assigned to the client. Within the aging time of the entry, SIP packets with the same call-id are sent to that server for processing.

HTTP header persistence

HTTP header persistence is used in Layer 7 server load balancing to ensure that packets with the same HTTP header are distributed to the same server. After receiving the first request for a service from a client, the LB device creates a persistence entry recording the server assigned to the client.
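The persistence entries described in this section can be sketched as a small aging table. The names are hypothetical and the sketch keys only on source IP; the real device also keys entries by destination IP, cookie, call-id, or HTTP header.

```python
# Sketch of a persistence table with an aging time: the first request
# creates an entry, later requests with the same key reuse it, and
# the entry expires after the aging time.
import time

class PersistenceTable:
    def __init__(self, aging_seconds):
        self.aging = aging_seconds
        self.entries = {}   # key (e.g. source IP) -> (server, expiry time)

    def lookup(self, key, now=None):
        """Return the recorded server if the entry is still valid."""
        now = time.monotonic() if now is None else now
        entry = self.entries.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]          # still within the aging time
        return None                  # no entry, or aged out

    def record(self, key, server, now=None):
        """Create or refresh a persistence entry for this key."""
        now = time.monotonic() if now is None else now
        self.entries[key] = (server, now + self.aging)

table = PersistenceTable(aging_seconds=60)
assert table.lookup("198.51.100.9", now=0.0) is None   # first request: schedule normally
table.record("198.51.100.9", "server-2", now=0.0)
assert table.lookup("198.51.100.9", now=30.0) == "server-2"  # reused within aging time
assert table.lookup("198.51.100.9", now=90.0) is None        # aged out
```

Persistence deliberately overrides the scheduling algorithm: only the first request of a session is scheduled, and all later packets follow the recorded entry until it ages out.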
Within the aging time of the entry, packets with the same HTTP header are sent to that server for processing.

Layer-4 and Layer-7 Server Load Balancing

Layer-4 server load balancing examines only the IP header and the TCP/UDP header of a data flow, without looking at the payload of the TCP or UDP packets. Layer-7 server load balancing requires that the LB device support Layer-4 server load balancing and, in addition, parse the information above Layer 4, that is, application layer information; for example, it can retrieve the HTTP URL or cookie information from the packets. Layer-7 server load balancing thus controls application layer services, providing traffic control at a higher layer. Layer-7 server load balancing has the following advantages: it can distribute data traffic to the corresponding servers based on the contents of the packets (for example, whether the packets


More information

20-CS Cyber Defense Overview Fall, Network Basics

20-CS Cyber Defense Overview Fall, Network Basics 20-CS-5155 6055 Cyber Defense Overview Fall, 2017 Network Basics Who Are The Attackers? Hackers: do it for fun or to alert a sysadmin Criminals: do it for monetary gain Malicious insiders: ignores perimeter

More information

PrepAwayExam. High-efficient Exam Materials are the best high pass-rate Exam Dumps

PrepAwayExam.   High-efficient Exam Materials are the best high pass-rate Exam Dumps PrepAwayExam http://www.prepawayexam.com/ High-efficient Exam Materials are the best high pass-rate Exam Dumps Exam : 642-618 Title : Deploying Cisco ASA Firewall Solutions (FIREWALL v2.0) Vendors : Cisco

More information

Configuring Web Cache Services By Using WCCP

Configuring Web Cache Services By Using WCCP CHAPTER 44 Configuring Web Cache Services By Using WCCP This chapter describes how to configure your Catalyst 3560 switch to redirect traffic to wide-area application engines (such as the Cisco Cache Engine

More information

Features of a proxy server: - Nowadays, by using TCP/IP within local area networks, the relaying role that the proxy

Features of a proxy server: - Nowadays, by using TCP/IP within local area networks, the relaying role that the proxy Que: -Proxy server Introduction: Proxy simply means acting on someone other s behalf. A Proxy acts on behalf of the client or user to provide access to a network service, and it shields each side from

More information

Unified Load Balance. User Guide. Issue 04 Date

Unified Load Balance. User Guide. Issue 04 Date Issue 04 Date 2017-09-06 Contents Contents 1 Overview... 1 1.1 Basic Concepts... 1 1.1.1 Unified Load Balance...1 1.1.2 Listener... 1 1.1.3 Health Check... 2 1.1.4 Region...2 1.1.5 Project...2 1.2 Functions...

More information

Firepower Threat Defense Cluster for the Firepower 4100/9300

Firepower Threat Defense Cluster for the Firepower 4100/9300 Firepower Threat Defense Cluster for the Firepower 4100/9300 Clustering lets you group multiple Firepower Threat Defense units together as a single logical device. Clustering is only supported for the

More information

TCP/IP Networking. Training Details. About Training. About Training. What You'll Learn. Training Time : 9 Hours. Capacity : 12

TCP/IP Networking. Training Details. About Training. About Training. What You'll Learn. Training Time : 9 Hours. Capacity : 12 TCP/IP Networking Training Details Training Time : 9 Hours Capacity : 12 Prerequisites : There are no prerequisites for this course. About Training About Training TCP/IP is the globally accepted group

More information

F5 BIG-IQ Centralized Management: Local Traffic & Network. Version 5.2

F5 BIG-IQ Centralized Management: Local Traffic & Network. Version 5.2 F5 BIG-IQ Centralized Management: Local Traffic & Network Version 5.2 Table of Contents Table of Contents BIG-IQ Local Traffic & Network: Overview... 5 What is Local Traffic & Network?... 5 Understanding

More information

Distributed Systems. 29. Firewalls. Paul Krzyzanowski. Rutgers University. Fall 2015

Distributed Systems. 29. Firewalls. Paul Krzyzanowski. Rutgers University. Fall 2015 Distributed Systems 29. Firewalls Paul Krzyzanowski Rutgers University Fall 2015 2013-2015 Paul Krzyzanowski 1 Network Security Goals Confidentiality: sensitive data & systems not accessible Integrity:

More information

H3C SecPath Series High-End Firewalls

H3C SecPath Series High-End Firewalls H3C SecPath Series High-End Firewalls NAT and ALG Configuration Guide Hangzhou H3C Technologies Co., Ltd. http://www.h3c.com Software version: SECPATH1000FE&SECBLADEII-CMW520-R3166 SECPATH5000FA-CMW520-R3206

More information

Chapter 4: outline. 4.5 routing algorithms link state distance vector hierarchical routing. 4.6 routing in the Internet RIP OSPF BGP

Chapter 4: outline. 4.5 routing algorithms link state distance vector hierarchical routing. 4.6 routing in the Internet RIP OSPF BGP Chapter 4: outline 4.1 introduction 4.2 virtual circuit and datagram networks 4.3 what s inside a router 4.4 IP: Internet Protocol datagram format IPv4 addressing ICMP 4.5 routing algorithms link state

More information

Configuring Traffic Policies for Server Load Balancing

Configuring Traffic Policies for Server Load Balancing CHAPTER3 Configuring Traffic Policies for Server Load Balancing This chapter describes how to configure the ACE appliance to use classification (class) maps and policy maps to filter and match interesting

More information

Configuring the Catena Solution

Configuring the Catena Solution This chapter describes how to configure Catena on a Cisco NX-OS device. This chapter includes the following sections: About the Catena Solution, page 1 Licensing Requirements for Catena, page 2 Guidelines

More information

Yealink VCS Network Deployment Solution

Yealink VCS Network Deployment Solution Yealink VCS Network Deployment Solution Oct. 2015 V10.6 Yealink Network Deployment Solution Table of Contents Table of Contents... iii Network Requirements... 1 Bandwidth Requirements... 1 Calculating

More information

Internet Layers. Physical Layer. Application. Application. Transport. Transport. Network. Network. Network. Network. Link. Link. Link.

Internet Layers. Physical Layer. Application. Application. Transport. Transport. Network. Network. Network. Network. Link. Link. Link. Internet Layers Application Application Transport Transport Network Network Network Network Link Link Link Link Ethernet Fiber Optics Physical Layer Wi-Fi ARP requests and responses IP: 192.168.1.1 MAC:

More information

Technical White Paper for NAT Traversal

Technical White Paper for NAT Traversal V300R002 Technical White Paper for NAT Traversal Issue 01 Date 2016-01-15 HUAWEI TECHNOLOGIES CO., LTD. 2016. All rights reserved. No part of this document may be reproduced or transmitted in any form

More information

H3C S9500 QoS Technology White Paper

H3C S9500 QoS Technology White Paper H3C Key words: QoS, quality of service Abstract: The Ethernet technology is widely applied currently. At present, Ethernet is the leading technology in various independent local area networks (LANs), and

More information

SecBlade Firewall Cards Attack Protection Configuration Example

SecBlade Firewall Cards Attack Protection Configuration Example SecBlade Firewall Cards Attack Protection Configuration Example Keywords: Attack protection, scanning, blacklist Abstract: This document describes the attack protection functions of the SecBlade firewall

More information

CCNA Exploration Network Fundamentals. Chapter 06 Addressing the Network IPv4

CCNA Exploration Network Fundamentals. Chapter 06 Addressing the Network IPv4 CCNA Exploration Network Fundamentals Chapter 06 Addressing the Network IPv4 Updated: 20/05/2008 1 6.0.1 Introduction Addressing is a key function of Network layer protocols that enables data communication

More information

Multihoming with BGP and NAT

Multihoming with BGP and NAT Eliminating ISP as a single point of failure www.noction.com Table of Contents Introduction 1. R-NAT Configuration 1.1 NAT Configuration 5. ISPs Routers Configuration 3 15 7 7 5.1 ISP-A Configuration 5.2

More information

Deployment Scenarios for Standalone Content Engines

Deployment Scenarios for Standalone Content Engines CHAPTER 3 Deployment Scenarios for Standalone Content Engines This chapter introduces some sample scenarios for deploying standalone Content Engines in enterprise and service provider environments. This

More information

Cisco CCIE Security Written.

Cisco CCIE Security Written. Cisco 400-251 CCIE Security Written http://killexams.com/pass4sure/exam-detail/400-251 QUESTION: 193 Which two of the following ICMP types and code should be allowed in a firewall to enable traceroute?

More information

4. The transport layer

4. The transport layer 4.1 The port number One of the most important information contained in the header of a segment are the destination and the source port numbers. The port numbers are necessary to identify the application

More information

Vorlesung Kommunikationsnetze

Vorlesung Kommunikationsnetze Picture 15 13 Vorlesung Kommunikationsnetze Prof. Dr. H. P. Großmann mit B. Wiegel sowie A. Schmeiser und M. Rabel Sommersemester 2009 Institut für Organisation und Management von Informationssystemen

More information

Define TCP/IP and describe its advantages on Windows Describe how the TCP/IP protocol suite maps to a four-layer model

Define TCP/IP and describe its advantages on Windows Describe how the TCP/IP protocol suite maps to a four-layer model [Previous] [Next] Chapter 2 Implementing TCP/IP About This Chapter This chapter gives you an overview of Transmission Control Protocol/Internet Protocol (TCP/IP). The lessons provide a brief history of

More information

Network Protocols - Revision

Network Protocols - Revision Network Protocols - Revision Luke Anderson luke@lukeanderson.com.au 18 th May 2018 University Of Sydney Overview 1. The Layers 1.1 OSI Model 1.2 Layer 1: Physical 1.3 Layer 2: Data Link MAC Addresses 1.4

More information

CSC 401 Data and Computer Communications Networks

CSC 401 Data and Computer Communications Networks CSC 401 Data and Computer Communications Networks Link Layer, Switches, VLANS, MPLS, Data Centers Sec 6.4 to 6.7 Prof. Lina Battestilli Fall 2017 Chapter 6 Outline Link layer and LANs: 6.1 introduction,

More information

Ingate Firewall & SIParator Product Training. SIP Trunking Focused

Ingate Firewall & SIParator Product Training. SIP Trunking Focused Ingate Firewall & SIParator Product Training SIP Trunking Focused Common SIP Applications SIP Trunking Remote Desktop Ingate Product Training Common SIP Applications SIP Trunking A SIP Trunk is a concurrent

More information

Gigabit SSL VPN Security Router

Gigabit SSL VPN Security Router As Internet becomes essential for business, the crucial solution to prevent your Internet connection from failure is to have more than one connection. PLANET is the ideal to help the SMBs increase the

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START Page 1 of 20 MIDTERM EXAMINATION #1 - B COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2008-75 minutes This examination document

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START Page 1 of 20 MIDTERM EXAMINATION #1 - A COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2008-75 minutes This examination document

More information

Network Address Translation (NAT)

Network Address Translation (NAT) The following topics explain and how to configure it. Why Use NAT?, page 1 NAT Basics, page 2 Guidelines for NAT, page 8 Configure NAT, page 12 Translating IPv6 Networks, page 40 Monitoring NAT, page 51

More information

Seven Criteria for a Sound Investment in WAN Optimization

Seven Criteria for a Sound Investment in WAN Optimization Seven Criteria for a Sound Investment in WAN Optimization Introduction WAN optimization technology brings three important business benefits to IT organizations: Reduces branch office infrastructure costs

More information

ICS 351: Networking Protocols

ICS 351: Networking Protocols ICS 351: Networking Protocols IP packet forwarding application layer: DNS, HTTP transport layer: TCP and UDP network layer: IP, ICMP, ARP data-link layer: Ethernet, WiFi 1 Networking concepts each protocol

More information

Load Balancing Microsoft Remote Desktop Services. Deployment Guide v Copyright Loadbalancer.org, Inc

Load Balancing Microsoft Remote Desktop Services. Deployment Guide v Copyright Loadbalancer.org, Inc Load Balancing Microsoft Remote Desktop Services Deployment Guide v2.2 Copyright 2002 2017 Loadbalancer.org, Inc Table of Contents About this Guide...4 2. Loadbalancer.org Appliances Supported...4 3. Loadbalancer.org

More information

HP High-End Firewalls

HP High-End Firewalls HP High-End Firewalls Attack Protection Configuration Guide Part number: 5998-2630 Software version: F1000-E/Firewall module: R3166 F5000-A5: R3206 Document version: 6PW101-20120706 Legal and notice information

More information

Router and ACL ACL Filter traffic ACL: The Three Ps One ACL per protocol One ACL per direction One ACL per interface

Router and ACL ACL Filter traffic ACL: The Three Ps One ACL per protocol One ACL per direction One ACL per interface CCNA4 Chapter 5 * Router and ACL By default, a router does not have any ACLs configured and therefore does not filter traffic. Traffic that enters the router is routed according to the routing table. *

More information

HP A-F1000-A-EI_A-F1000-S-EI VPN Firewalls

HP A-F1000-A-EI_A-F1000-S-EI VPN Firewalls HP A-F1000-A-EI_A-F1000-S-EI VPN Firewalls VPN Configuration Guide Part number:5998-2652 Document version: 6PW100-20110909 Legal and notice information Copyright 2011 Hewlett-Packard Development Company,

More information

Configuring Health Monitoring

Configuring Health Monitoring CHAPTER1 This chapter describes how to configure health monitoring on the ACE to track the state of a server by sending out probes. Also referred to as out-of-band health monitoring, the ACE verifies the

More information

On Distributed Communications, Rand Report RM-3420-PR, Paul Baran, August 1964

On Distributed Communications, Rand Report RM-3420-PR, Paul Baran, August 1964 The requirements for a future all-digital-data distributed network which provides common user service for a wide range of users having different requirements is considered. The use of a standard format

More information

Your Name: Your student ID number:

Your Name: Your student ID number: CSC 573 / ECE 573 Internet Protocols October 11, 2005 MID-TERM EXAM Your Name: Your student ID number: Instructions Allowed o A single 8 ½ x11 (front and back) study sheet, containing any info you wish

More information

Internet. 1) Internet basic technology (overview) 3) Quality of Service (QoS) aspects

Internet. 1) Internet basic technology (overview) 3) Quality of Service (QoS) aspects Internet 1) Internet basic technology (overview) 2) Mobility aspects 3) Quality of Service (QoS) aspects Relevant information: these slides (overview) course textbook (Part H) www.ietf.org (details) IP

More information

HP Instant Support Enterprise Edition (ISEE) Security overview

HP Instant Support Enterprise Edition (ISEE) Security overview HP Instant Support Enterprise Edition (ISEE) Security overview Advanced Configuration A.03.50 Mike Brandon Interex 03 / 30, 2004 2003 Hewlett-Packard Development Company, L.P. The information contained

More information

Configuring Traffic Interception

Configuring Traffic Interception 4 CHAPTER This chapter describes the WAAS software support for intercepting all TCP traffic in an IP-based network, based on the IP and TCP header information, and redirecting the traffic to wide area

More information

HP High-End Firewalls

HP High-End Firewalls HP High-End Firewalls NAT and ALG Command Reference Part number: 5998-2639 Software version: F1000-E/Firewall module: R3166 F5000-A5: R3206 Document version: 6PW101-20120706 Legal and notice information

More information

Research and Implementation of Server Load Balancing Strategy in Service System

Research and Implementation of Server Load Balancing Strategy in Service System Journal of Electronics and Information Science (2018) 3: 16-21 Clausius Scientific Press, Canada Research and Implementation of Server Load Balancing Strategy in Service System Yunpeng Zhang a, Liwei Liu

More information

Configuring Traffic Policies for Server Load Balancing

Configuring Traffic Policies for Server Load Balancing CHAPTER3 Configuring Traffic Policies for Server Load Balancing This chapter describes how to configure the ACE module to use classification (class) maps and policy maps to filter and match interesting

More information

EEC-684/584 Computer Networks

EEC-684/584 Computer Networks EEC-684/584 Computer Networks Lecture 14 wenbing@ieee.org (Lecture nodes are based on materials supplied by Dr. Louise Moser at UCSB and Prentice-Hall) Outline 2 Review of last lecture Internetworking

More information

Lecture 11: Networks & Networking

Lecture 11: Networks & Networking Lecture 11: Networks & Networking Contents Distributed systems Network types Network standards ISO and TCP/IP network models Internet architecture IP addressing IP datagrams AE4B33OSS Lecture 11 / Page

More information

Chapter 12 Network Protocols

Chapter 12 Network Protocols Chapter 12 Network Protocols 1 Outline Protocol: Set of defined rules to allow communication between entities Open Systems Interconnection (OSI) Transmission Control Protocol/Internetworking Protocol (TCP/IP)

More information

Access Rules. Controlling Network Access

Access Rules. Controlling Network Access This chapter describes how to control network access through or to the ASA using access rules. You use access rules to control network access in both routed and transparent firewall modes. In transparent

More information