
How to migrate from Cisco ACE to Zevenet

Cisco ACE was for years one of the most popular hardware load balancers on the market. After Cisco's decision to abandon its load balancer line, users need to find a suitable solution to which they can easily migrate their services.

This guide describes how to migrate Cisco ACE configurations to Zevenet in a few simple steps, using configuration examples from common use cases.

Basic Concepts

  • probe: Defines the health checks run against the backends (real servers) to determine whether the application is responding correctly. In Zevenet, health checks are configured through FarmGuardian, a health-check scheduler that can be customized for every farm.
  • rserver: Defines each backend host. In Zevenet, the backends are configured as a list through the web GUI section Manage Farms > Edit.
  • serverfarm: Defines each farm, associating the health checks with a list of rservers. In Zevenet, a farm is created through Manage Farms > New Farm and then configured by editing its global parameters.
  • class-map: Defines the listening address and port. In Zevenet, these are set when the farm is created through Manage Farms > New Farm.
  • policy-map: Defines the load-balancing scheduler and behavior for every farm. In Zevenet, this is configured through Manage Farms > Edit.
  • interface: Defines the network interface configuration. In Zevenet, networking is configured in the section Settings > Interfaces.
  • route: Defines the routing configuration. In Zevenet, routing is also configured in the section Settings > Interfaces.

Example of a UDP Probe Load-Balancing Configuration

Please follow these steps to migrate to Zevenet:

Create a new virtual interface for the new farm. Go to the section Settings > Interfaces and click the icon Add vlan network interface. Select 120 as the interface name and assign the IP address 192.168.120.114 with netmask 255.255.255.0, as shown in the example.

Go to Manage Farms and click the button Add new Farm. In the field Farm description name enter SFARM1 (our example name), and as Profile select L4xNAT. Click the button Save & continue.

As Virtual IP, select the VLAN interface 120 created before, and set Virtual Port to 53. Click the button Save.

In the farm actions, click Edit for the SFARM1 farm to edit its global parameters. Set Protocol Type to UDP and enable the option Use FarmGuardian to check Backend Servers. Set Check interval to 10 and Command to check to something like:

check_udp -w 5 -c 9 -H HOST -p PORT

Leave the HOST and PORT tokens as they are; during the health check they'll be substituted with each backend's IP address and port.
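For instance, for a hypothetical backend at 192.168.120.10 behind virtual port 53, the command FarmGuardian actually runs would be:

check_udp -w 5 -c 9 -H 192.168.120.10 -p 53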

Insert the backend servers by clicking the icon Add real server.

Example of an RDP Load-Balancing Configuration

Please follow these steps to migrate to Zevenet:

Create a new virtual interface for the new farm. Go to the section Settings > Interfaces and click the icon Add vlan network interface. Select 10 as the interface name and assign the IP address 10.6.252.19 with netmask 255.255.255.0, as shown in the example.

Go to Manage Farms and click the button Add new Farm. In the field Farm description name enter SF1 (our example name), and as Profile select L4xNAT. Click the button Save & continue.

As Virtual IP, select the VLAN interface 10 created before, and set Virtual Port to 3389. Click the button Save.

In the farm actions, click Edit for the SF1 farm to edit its global parameters.

Insert the backend servers by clicking the icon Add real server, as in the example: 10.6.252.245 and 10.6.252.246.

NOTE: In this example, there is no health check configured.

Examples of RADIUS Load-Balancing Configurations

Currently, RADIUS can only be configured with IP persistence. Please follow these steps to migrate to Zevenet:

Create a new virtual interface for the new farm. Go to the section Settings > Interfaces and click the icon Add vlan network interface. Select 10 as the interface name and assign the IP address 12.1.1.11 with netmask 255.255.255.0, as shown in the example.

Go to Manage Farms and click the button Add new Farm. In the field Farm description name enter SF1 (our example name), and as Profile select L4xNAT. Click the button Save & continue.

As Virtual IP, select the VLAN interface 10 created before, and set Virtual Ports to 1812,1813. Click the button Save.

In the farm actions, click Edit for the SF1 farm to edit its global parameters. Set Protocol Type to UDP, set Persistence mode to IP persistence, and press the button Modify.

Insert the backend servers by clicking the icon Add real server, as in the example: 10.6.252.245 and 10.6.252.246.

NOTE: In this example, there is no health check configured.

Examples of SIP Load-Balancing Configurations

The SIP protocol can be handled either as raw TCP/UDP or by inspecting its headers. Please follow these steps to migrate to Zevenet:

Create a new virtual interface for the new farm. Go to the section Settings > Interfaces and click the icon Add vlan network interface. Select 10 as the interface name and assign the IP address 192.168.12.15 with netmask 255.255.255.0, as shown in the example.

Go to Manage Farms and click the button Add new Farm. In the field Farm description name enter SF3 (our example name), and as Profile select L4xNAT. Click the button Save & continue.

As Virtual IP, select the VLAN interface 10 created before, and set Virtual Port to 5060. Click the button Save.

In the farm actions, click Edit for the SF3 farm to edit its global parameters. Set Protocol Type to SIP and press the button Modify.

Insert the backend servers by clicking the icon Add real server, as in the example: 10.6.252.245 and 10.6.252.246.

NOTE: In this example, there is no health check configured.

Example of an HTTP-Header Sticky Configuration

Please follow these steps to migrate to Zevenet:


Create a new virtual interface for the new farm. Go to the section Settings > Interfaces and click the icon Add vlan network interface. Select 193 as the interface name and assign the IP address 192.168.12.15 with netmask 255.255.255.0, as shown in the example.

Go to Manage Farms and click the button Add new Farm. In the field Farm description name enter SFARM1 (our example name), and as Profile select HTTP. Click the button Save & continue.

As Virtual IP, select the VLAN interface 193 created before, and set Virtual Port to 80. Click the button Save.

In the farm actions, click Edit for the SFARM1 farm to edit its global parameters. Add a new service called, for example, ciscosrv, and press the button Add. In the service section, set the Virtual Host to cisco.com, enable the Cookie insertion check, and press the Modify button. To finish the service configuration, set the Domain field to cisco.com, set the TTL field to 720, and then press the button Modify to apply the changes.

Insert the backend servers by clicking the icon Add real server, as in the example: 192.168.12.15 and 192.168.12.16. Restart the farm to apply the changes.


Converting a Cisco ACE configuration file to F5 BIG-IP Format


In September, Cisco announced that it was ceasing development and pulling back on sales of its Application Control Engine (ACE) load balancing modules. Customers of Cisco’s ACE product line will now have to look for a replacement product to solve their load balancing and application delivery needs.

One of the first questions that will come up when a customer starts looking into replacement products surrounds the issue of upgradability. Will the customer be able to import their current configuration into the new technology, or will they have to start with the new product from scratch? For smaller businesses, starting over can be a refreshing way to clean up some of the things you've been meaning to but weren't able to for one reason or another. But for a large majority of the users out there, starting over from nothing with a new product is a daunting task.

To help those users considering a move to the F5 universe, DevCentral has included several scripts to assist with the configuration migration process. In our Codeshare section we created some scripts useful in converting ACE configurations into their respective F5 counterparts.

In this article, I'm going to focus on the "ace2f5-tmsh" script in the ace2f5.zip script library.

The script takes as input an ACE configuration and creates a TMSH script to create the corresponding F5 BIG-IP objects.

ace2f5-tmsh.pl

perl ace2f5-tmsh.pl ace_config tmsh_script

We could leave it at that, but I’ll use this article to discuss the components of the ACE configuration and how they map to F5 objects.

ip

The IP object in the ACE configuration is defined like this:

ip route 0.0.0.0 0.0.0.0 10.211.143.1

This equates to a tmsh "net route" command.
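For reference, the generated command would look something like the following (a sketch; exact syntax varies slightly across BIG-IP versions):

create net route default gw 10.211.143.1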

rserver

An “rserver” is basically a node: it contains a server address, plus an optional “inservice” attribute indicating whether it's active or not.

rserver host R190-JOEINC0060
  ip address 10.213.240.85
rserver host R191-JOEINC0061
  ip address 10.213.240.86
  inservice
rserver host R192-JOEINC0062
  ip address 10.213.240.88
  inservice
rserver host R193-JOEINC0063
  ip address 10.213.240.89
  inservice

It will be used to find the IP address for a given rserver hostname.

serverfarm

A serverfarm is an LTM pool, except that it doesn't have a port assigned to it yet.

serverfarm host MySite-JoeInc
  predictor hash url
  rserver R190-JOEINC0060
    inservice
  rserver R191-JOEINC0061
    inservice
  rserver R192-JOEINC0062
    inservice
  rserver R193-JOEINC0063
    inservice
ltm pool Insiteqa-JoeInc
    load-balancing-mode predictive-node
    members
        10.213.240.86:any address 10.213.240.86
        10.213.240.88:any address 10.213.240.88
        10.213.240.89:any address 10.213.240.89

probe

A “probe” is an LTM monitor, except that it does not have a port.

probe tcp MySite-JoeInc
  interval 5
  faildetect 2
  passdetect interval 10
  passdetect count 2

will map to the TMSH “ltm monitor” command.

ltm monitor Insiteqa-JoeInc
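A fuller monitor definition derived from the probe above might look something like this (a sketch; the timeout value is an assumption, since ACE's faildetect/passdetect counters don't map one-to-one onto BIG-IP's interval/timeout model):

ltm monitor tcp Insiteqa-JoeInc
    interval 5
    timeout 16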

sticky

The “sticky” object is a way to create a persistence profile. First you tie the serverfarm to the persist profile, then you tie the profile to the Virtual Server.

sticky ip-netmask 255.255.255.255 address source MySite-JoeInc-sticky
  timeout 60
  replicate sticky
  serverfarm MySite-JoeInc
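On the BIG-IP side this becomes a source-address persistence profile, roughly as follows (a sketch; note that ACE expresses the sticky timeout in minutes while BIG-IP persistence timeouts are in seconds, so 60 becomes 3600):

ltm persistence source-addr MySite-JoeInc-sticky
    timeout 3600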

class-map

A “class-map” assigns a listener: the Virtual IP address and port number used for the client side and server side of the connection.

class-map match-any vip-MySite-JoeInc-12345
  2 match virtual-address 10.213.238.140 tcp eq 12345
class-map match-any vip-MySite-JoeInc-1433
  2 match virtual-address 10.213.238.140 tcp eq 1433
class-map match-any vip-MySite-JoeInc-31314
  2 match virtual-address 10.213.238.140 tcp eq 31314
class-map match-any vip-MySite-JoeInc-8080
  2 match virtual-address 10.213.238.140 tcp eq 8080
class-map match-any vip-MySite-JoeInc-http
  2 match virtual-address 10.213.238.140 tcp eq www
class-map match-any vip-MySite-JoeInc-https
  2 match virtual-address 10.213.238.140 tcp eq https

policy-map

A policy-map of type loadbalance simply ties the persistence profile to the Virtual Server. The “multi-match” attribute constructs the virtual server by tying a bunch of objects together.

policy-map type loadbalance first-match vip-pol-MySite-JoeInc
  class class-default
    sticky-serverfarm MySite-JoeInc-sticky
policy-map multi-match lb-MySite-JoeInc
  class vip-MySite-JoeInc-http
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-https
    loadbalance vip inservice
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-12345
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-31314
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-1433
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class reals
    nat dynamic 1 vlan 240
  class vip-MySite-JoeInc-8080
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
ltm virtual vip-Insiteqa-JoeInc-12345
    destination 10.213.238.140:12345
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles tcp
ltm virtual vip-Insiteqa-JoeInc-1433
    destination 10.213.238.140:1433
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles tcp
ltm virtual vip-Insiteqa-JoeInc-31314
    destination 10.213.238.140:31314
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles tcp
ltm virtual vip-Insiteqa-JoeInc-8080
    destination 10.213.238.140:8080
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles tcp
ltm virtual vip-Insiteqa-JoeInc-http
    destination 10.213.238.140:http
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles tcp http
ltm virtual vip-Insiteqa-JoeInc-https
    destination 10.213.238.140:https
    profiles tcp

Conclusion

If you are considering migrating from Cisco's ACE to F5, I'd suggest you take a look at the conversion scripts to assist with the process.




Building and deploying PHP applications on one server is a relatively straightforward process. However, what about deploying a PHP application across multiple servers? In this article, I'm going to discuss four key considerations to bear in mind when doing so.

Load Balancing

Load balancing is where requests are distributed uniformly across the servers in a server pool. Load balancers receive user requests and determine which server in the pool to forward each request to for final processing. They can be either hardware-based (e.g., F5 BIG-IP and Cisco ACE) or software-based (e.g., HAProxy, Traefik, and Nginx).

Simple load balancer diagram

Using them increases network efficiency as well as application reliability and capacity, since servers can be added on a planned basis or to meet short-term demand. In applications that use them, users never know that the same server isn't handling their requests every time. All they know is that their requests are handled.

Load balancers typically use one of six methods to determine which server a given request should be passed to. These are:

  • Round Robin: Requests are distributed evenly across all servers in the pool.
  • Least Connections: Requests are sent to the server with the least number of currently active requests.
  • IP Hash: Requests are routed based on the client’s IP Address.
  • Generic Hash: Requests are routed based on a user-defined key.
  • Least Time: Requests are sent to the server with the lowest latency.
  • Random: Requests are distributed randomly across all servers in the pool.

While the benefits are many, migrating to a load-balanced architecture requires a number of factors to be considered, including:

  • Does each server in the cluster have the same physical capacity?
  • How are nodes upgraded?
  • What happens if a node is unreachable or fails?
  • What kind of monitoring is required?

These, and other questions, need answering before choosing and implementing the correct load balancer. That said, if you're going to set up load balancing yourself, I strongly encourage you to use NGINX, as it's likely the most used option. You can find out how to get started with NGINX's load balancing documentation.
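To give a flavour of how little configuration this takes, here is a minimal NGINX sketch using the Least Connections method described above (the upstream hostnames are placeholders):

upstream app_pool {
    # Send each request to the server with the fewest active connections.
    least_conn;
    server app1.example.com;
    server app2.example.com;
    server app3.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
    }
}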

Sessions

Now that we’ve considered load balancing, the next logical consideration is: how are sessions handled? Sessions allow applications to get around HTTP’s stateless nature and preserve information across multiple requests (e.g., login status and shopping cart items).

By default, PHP stores sessions on the filesystem of the server which handles the user’s request. For example, if User A makes a request to Server B, then User A’s session is created and stored on Server B.

However, when requests are shared across multiple servers, this configuration likely results in broken functionality. For example:

  • Users may be part-way through a shopping cart and find that their cart is unexpectedly empty
  • Users may be randomly redirected to the login form
  • Users may be part-way through a survey only to see that all their answers have been lost

There are two options to prevent this:

Centrally Stored Sessions

Sessions can be centrally stored by using a caching server (e.g., Redis or Memcached), a database (e.g., MySQL or PostgreSQL), or a shared filesystem (e.g., NFS or GlusterFS). Of these options, the best is a caching server. This is for two reasons:

  • They’re a key-value, in-memory storage solution, which gives them greater responsiveness than an SQL database.
  • As sessions are always written when a request ends, SQL databases must write to the database on every request. This requirement can easily lead to table locking and slow writes.

When storing sessions centrally, you need to be careful that the session store doesn’t become a single point of failure. This can be avoided by setting up the store in a clustered configuration. That way, if one server in the cluster goes down, it’s not the end of the world, as another can be added to replace it.
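As a concrete illustration, pointing PHP at a central Redis session store is mostly a configuration change. A minimal sketch, assuming the phpredis extension is installed and a Redis server is reachable at the placeholder hostname below:

<?php
// Store sessions in Redis rather than on the local filesystem,
// so any node in the pool can serve any user's request.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://redis.internal.example:6379');

session_start();
$_SESSION['cart'][] = 'product-42'; // now visible from every node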

Sticky Sessions

An alternative to session caching is Session Stickiness (or Session Persistence). This is where user requests are directed to the same server for the lifetime of their session. It may sound like an excellent idea at first, but there are several potential drawbacks, including:

  • Will cold and hot spots develop within the cluster?
  • What happens when a server isn’t available, is over-burdened, or has to be upgraded?

For these reasons, and others, I don’t recommend this approach.

Shared Files

How will shared files be updated? To make things just that much trickier, there are, effectively, two types of shared files:

  • Code files and templates
  • User-provided and one-off files

These need to be handled in different ways. Let's start with code files and templates.

These types of shared files need to be deployed to all servers whenever a new release or patch is made available. Failing to do this will cause any number of unexpected functionality breaks.

The question is: what’s the best approach to deploy them? You can’t take down and update nodes individually. Why? What happens if users are directed to a server with new code on one request and directed to a server with the old code on a subsequent request? Answer: broken functionality.

One solution is to have the load balancer stop directing requests to a node while it is being updated. After a node is updated, it's allowed to accept new session requests again.

This could then be repeated until all nodes within the cluster have been upgraded or patched. It’s workable, but it’s also, potentially, quite complicated and time-consuming.

Now let's look at user-provided and one-off files. These include images (such as profile images), PDF invoices, and company/organizational reports. Such files need a shared filesystem, whether local or remote (such as S3).

Some potential solutions are a clustered filesystem, such as GlusterFS, or a SaaS solution, such as Amazon S3 or Google Cloud Storage.

A shared (clustered) filesystem simplifies deployment processes, as new releases and patches only need to update files in one location. The deployment process likely doesn't need to handle file replication to each node within the filesystem cluster, as the service should provide that.

However, like centrally stored sessions, the filesystem has the potential to become a single point of failure, if it goes down or becomes inaccessible. So, this needs to be considered and planned for as well.

Implementing one of these solutions allows files to be centrally located, where each node can directly access them as and when required. Many of PHP's major frameworks (including Laravel, Symfony, and Zend Expressive) natively support this approach. Alternatively, packages such as Flysystem can help you implement this functionality in your application.
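For instance, here is a minimal Flysystem sketch that writes a generated invoice to S3 (it assumes the league/flysystem-aws-s3-v3 package; the region, bucket, and paths are placeholders):

<?php
use Aws\S3\S3Client;
use League\Flysystem\AwsS3V3\AwsS3V3Adapter;
use League\Flysystem\Filesystem;

// Every node talks to the same bucket, so uploaded and generated
// files are immediately visible across the whole pool.
$client = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$filesystem = new Filesystem(new AwsS3V3Adapter($client, 'example-bucket'));

$pdfContents = '%PDF-1.4 ...'; // placeholder for a rendered invoice
$filesystem->write('invoices/invoice-1001.pdf', $pdfContents);
$pdf = $filesystem->read('invoices/invoice-1001.pdf');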

Job Automation

It's quite common in modern applications, especially in PHP, to use Cron to automate regular tasks. These can be for any number of reasons, including file cleanup, cleaning up abandoned shopping carts, email processing, and user account maintenance.

However, if the application is composed of multiple nodes, there are several questions to consider. For example:

  • Which server does Cron run on?
  • Does the Cron service run on a separate server from the web application?
  • If one server is dedicated to running Cron tasks:
      • What happens when it goes down, say because of a hardware failure?
      • What happens when it's taken down for maintenance?
      • What happens when it's not accessible?

You could roll your own solution, or you could use one of several existing ones, such as Dkron, or Apache Mesos with Airbnb's Chronos. Each of these has its pros and cons.

If you roll your own solution, it may be a lot of work in addition to your existing application, and it may expose you to the same multi-server considerations that we're currently discussing. Alternatively, if you use one of the above solutions, you will need to plan out the implementation and maintenance of that service and how best to integrate it with your application.
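If you do roll your own, the core problem is making sure a given task fires on exactly one node. A minimal sketch using a Redis lock (the phpredis extension is assumed; the key name, TTL, and the cleanupAbandonedCarts() helper are hypothetical):

<?php
// Run this from cron on every node; only the node that wins
// the lock actually executes the task.
$redis = new Redis();
$redis->connect('redis.internal.example', 6379);

// SET ... NX EX 300: succeeds on exactly one node, expires after 5 minutes.
if ($redis->set('lock:cleanup-carts', gethostname(), ['nx', 'ex' => 300])) {
    cleanupAbandonedCarts(); // hypothetical task
}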

All of these are viable approaches. It’s just important to consider this in advance.

In Conclusion

Those are four key considerations to keep in mind when transitioning from a single-server to a multi-server setup. There are others, but these are among the most important. I hope they provide a sound foundation for understanding the potential changes and pitfalls involved.

Introduction to Content Switching: Application Virtual Server Load Balancing via Deep Packet Inspection

Content switches (also sometimes called application switches) are a class of network device that is becoming increasingly common in medium to large sized data centres and web-facing infrastructures.

The devices we traditionally call switches work at Layer 2 of the OSI model and simply direct incoming frames to the appropriate exit port based on their destination MAC address. Content switches, however, also inspect the contents of the data packet all the way from Layer 4 right up to Layer 7 and can be configured to do all sorts of clever things depending on what they find.

An increasing number of vendors offer these products. Cisco's CSM (Content Switching Module) and ACE (Application Control Engine) modules slot into its 6500 Series switches and 7600 Series routers, and Cisco also provides standalone appliances such as the ACE 4710. F5 Networks is another major contender with its BIG-IP LTM (Local Traffic Manager) and GTM (Global Traffic Manager) range of appliances.

In functional terms what you get here is a dedicated computer running a real OS (the BIG-IP LTMs run a variant of Linux) with added hardware to handle the packet manipulation and switching aspects. The content switching application, running on top of the OS and interacting with the hardware, provides both in-depth control and powerful traffic processing facilities.

So what can these devices do? We’ll consider that by looking at an example. Suppose you have a number of end-user PCs out on the internet that need to access an application running on a server farm in a data centre:

Obviously you have a few things to do here. Firstly you need to somehow provide a single ‘target’ IP address for those users to aim at, as opposed to publishing all the addresses of all the individual servers. Secondly you need some method of routing all those incoming sessions through to your server farm and sharing them evenly across your servers while still providing isolation between your internal network and the outside world. And, thirdly, you need it to be resilient so it doesn’t fall over the moment one of your servers goes off-line or something changes. A content switch can do all of this for you and much more:

Let’s look at each of these aspects in more detail.

Virtual Servers

This is one of the most basic things you can do with these devices. On your content switch you can define a virtual server which the switch will then 'offer' to the outside world. This is more, though, than just a virtual IP address: you can specify the ports served, the protocols accepted, the source(s) allowed and a whole heap of other parameters. And because it's your content switch the users are accessing now, you can take all this overhead away from your back-end servers and leave them to do what they do best: serve up data.

For example, if this is a secure connection, why make each server labour with the complexities of certificate management and SSL session decryption? Let the content switch handle the SSL termination and manage the client and server certificates for you. This reduces the server load, the application complexity and your administration overhead. Do you need users to authenticate to gain access to the application? Again, do it at the virtual server within the content switch and everything becomes much easier. Timeouts, session limits and all sorts of other things can be defined and controlled at this level too.

Load Balancing

Once our users have successfully accessed the virtual server, what then? Typically you would define a resource pool (of servers) on your content switch and then define the members (individual servers) within it. Here you can address issues such as how you want the pool to share the work across the member servers (round robin or quietest first), and what should happen if things go wrong.

This second point is important – you might need two servers as a minimum sharing the load to provide a good enough service to your users, but what happens if there's only one? And what happens if a server goes down while your service is running? You can take care of all this inside the content switch within the configuration of your pool. For example, you could say that if there are fewer than two servers up then the device should stop offering the virtual service to new clients until the situation improves. And you can set up monitors (Cisco calls them probes) so that the switch will check for application availability (again, not just simple pings) across its pool members and adjust itself accordingly. And all this will happen automatically while you sit back and sip your coffee.

You also need to consider established sessions. If a user opens a new session and their initial request is handled by server 1, you need to make sure that all subsequent communications from that user also go to server 1 as opposed to servers 2 or 3 which have no record of the data the user has already entered. This is called persistence, and the content switch can handle that for you as well.

Deep Packet Inspection

Content switches do all the above because they inspect the contents of incoming packets right up to Layer 7. They know the protocol in use, for example, and can pull the username and password out of the data entered by the user and use those to grant or deny access. This ability unleashes the ultimate power of the device – you can inspect the whole of the packet including the data and basically have your switch do anything you want.

Perhaps you feel malicious and want to deny all users named “Fred”. It’s a silly example, but you could do it. Maybe you’d like the switch to look in the packet’s data and change every instance of “Fred” to “idiot” as the data passes through. Again, you could do it. The value of this becomes clearer when you think of global enterprises (Microsoft Update is a prime example) where they want to know, perhaps, what OS you’re running or which continent you’re on so you can be silently rerouted to the server farm most appropriate for your needs. Your content switch can literally inspect and modify the incoming data on the fly to facilitate intelligent traffic-based routing, seamless redirects or disaster recovery scenarios that would be hard to achieve by conventional means. Want to inspect the HTTP header fields and use, say, the browser version to decide which back-end server farm the user should hit? Want to check for a cookie on the user’s machine and make a routing decision based on that? Want to create a cookie if there isn’t one? You can do all of this and more.

With the higher-end devices this really is limited only by your creativity. For example, the F5 and Cisco devices offer a whole programming language in which you can implement whatever custom processing you need. Once written, you simply apply these scripts (called iRules on the F5) at the appropriate points in the virtual server topology and off they go.
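To give a flavour, here is a minimal iRule sketch that sends requests to different pools based on the User-Agent header (the pool names are placeholders):

when HTTP_REQUEST {
    if { [HTTP::header "User-Agent"] contains "MSIE" } {
        pool legacy-pool
    } else {
        pool modern-pool
    }
}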

Scalability

What happens if you suddenly double your user base overnight and now need six back-end servers to handle the load instead of three? No problem – just add three more servers into the server pool on your content switch, plug them into your back-end network and they’re ready to go. And remember all you need here are simple application servers because all the clever stuff is being handled for them by the content switch. With an architecture like this server power becomes a commodity you can add or remove at will, and it’s also very easy to make application-level changes across all your servers because it’s all defined in the content switch.

Topologies

How might you see a content switch physically deployed? Well, these are switches so you might well see one in the traditional ‘straight through’ arrangement:

Or since they are VLAN aware you might also see a ‘content-switch-on-a-stick’:

You’ll also often find them in resilient pairs or clusters offering various failover options to ensure high availability. And it’s worth pointing out here that failover means just that – the session data and persistence information is constantly passed across to the standby so that if failover occurs even the in-flight sessions can be taken up and the end users won’t even notice.

And, finally, you can now even get virtual content switches that you can integrate with other virtual modules to provide a complete application service-set within a single high-end switch or router chassis. Data centre in a box, anyone?

Summary

Content switches go far beyond the connectivity and packet-routing services offered by traditional Layer 2 and 3 switches. By inspecting the whole packet, right up to Layer 7 and including the end-user data, they can:

  • Intelligently load balance traffic across multiple servers based on the availability of the content and the load on the servers.
  • Monitor the health of each server and provide automatic failover by re-routing user sessions to the remaining devices in the server farm.
  • Provide intelligent traffic management capabilities and differentiated services.
  • Handle SSL termination and certificate management, user access control, quality-of-service and bandwidth management.
  • Provide increased application resilience and improve scalability and flexibility.
  • Allow content to be flexibly located and support virtual hosting.
