A common question facing service providers that use XenApp to provide published applications to multiple clients is whether to use a single, large farm for all clients, or to dedicate a separate farm to each client.
Often the degree to which infrastructure is shared among clients is a matter of philosophy and preference. Some organizations dedicate separate storage, networking, and even virtual machine hosts to each client. The decision to share or not to share infrastructure can be based on a variety of factors, including client preference, past experience, or the proverbial “that’s-just-the-way-it’s-always-been.”
The decision whether to share infrastructure among clients, and how much, must take into account many factors, not just the technical ones. If a client wants a dedicated virtual machine host, and is willing to pay for it, technical reasons probably won’t matter. But in some cases technical considerations leave little room for the client’s or organization’s preference. Sharing XenApp farms among multiple clients is one of those cases.
Prior to XenApp 6.5, the Citrix Independent Management Architecture (IMA) operated in a hub-and-spoke configuration, in which every XenApp server communicated directly with the data store, typically a SQL database. This many-to-one configuration works fine for a relatively small number of servers, but it does not scale well. As the number of servers in a single XenApp farm grows, so does the load on the poor data store, and so does the time it takes the IMA service to start. And until the IMA service starts on a XenApp server, that server cannot service end users.
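As a rough illustration of why the hub-and-spoke model hits a wall, consider this back-of-envelope sketch. The polling interval and the function itself are illustrative assumptions, not Citrix-published figures; the point is simply that every server talks to the same database, so load grows linearly with farm size:

```python
# Toy model of IMA data-store load in a hub-and-spoke farm.
# The polling rate is a made-up example, not a Citrix figure.

def datastore_queries_per_hour(servers: int, polls_per_hour: int = 2) -> int:
    """Each XenApp server polls the shared data store independently,
    so total query load scales linearly with server count."""
    return servers * polls_per_hour

# One shared farm of 200 servers vs. ten per-client farms of 20 servers each:
single_farm = datastore_queries_per_hour(200)
per_client = [datastore_queries_per_hour(20) for _ in range(10)]

print(single_farm)       # 400 queries/hour against one database
print(max(per_client))   # 40 queries/hour against each client's own database
```

The per-client total across all ten farms is the same, but it is spread over ten independent databases, none of which becomes a shared bottleneck.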
Placing multiple clients in a single farm also means that Client A’s XenApp servers could be sharing dynamic farm information with Client B’s XenApp servers. This can be mitigated with dedicated Zone Data Collector servers (ZDCs); however, only one ZDC can be active at a time, so this configuration also fails to scale as more clients are added to the farm. An excess of servers in a farm is also one of the leading causes of slow discovery in the Citrix Delivery Services/Access Management/Presentation Server Console.
One question that sometimes comes up during this thought exercise is, “Why not just use multiple zones in a single farm?” The answer is that zones are the solution to a different, specific problem. Traffic between XenApp servers and the ZDCs can be voluminous. If a XenApp farm spans two or more geographically separate sites connected by a WAN link, that link will be saturated unless it is very robust. Zones were developed as a way to logically separate servers so that XenApp servers talk only to the ZDC in their own zone; the ZDCs then share information with each other across the WAN in a bandwidth-friendly fashion. Citrix recommends minimizing the number of zones in a farm.
XenApp 6.5 includes changes to the IMA architecture that mostly eliminate the above considerations. However, even in XenApp 6.5 there are good business and technical reasons to use a separate farm for each client.
A single farm for all clients means a single point of failure at the data store level. A SQL server failure can be mitigated with database mirroring; a corrupt data store, however, has only two remedies: restore from backup, or start over. Both options require downtime for every client in the farm. This is probably the most significant reason to split clients into separate farms. Multiple data stores mean multiple databases, and this is desirable: a separate database for each client allows load to be distributed across SQL servers without clustering, and enables database-level snapshots, backups, and restores. Imagine that someone accidentally deleted a client’s published applications from the farm and the quickest fix was to restore the database. You can explain to that client why you need downtime to perform the restore, but the other clients affected will not be so understanding. In a best-case scenario, the actual downtime would last only a few seconds and would not affect existing connections. But best-case scenarios are not the rule of the day, and if that risk can reasonably be avoided, it should be.
Single-farm, multi-client deployments will likely have only one zone (as they should, unless sites are geographically separated). Since only one ZDC can be active at a time in a zone, a ZDC failure can cause a service outage for all clients. In theory, another XenApp server should take over immediately when a ZDC fails, but until that happens, all new connections to the farm will be impacted.
In addition to the technical considerations, there are good business reasons to give each client its own farm. Separate farms simplify change control: only the client and the service provider need to coordinate a change. If all clients share a single farm, changes to the farm would require buy-in from every client on the farm, assuming such a change-control process is in place. Separate farms also allow more per-client farm customization. XenApp 6 moved many farm-level settings into Active Directory Group Policy Objects (GPOs), which allow changes to be applied at the Organizational Unit (OU) level; however, some settings are still applied on a per-farm basis.
Placing all clients in a single farm can also jeopardize disaster recovery (DR) recovery time objectives (RTOs). In the event of a disaster, separate farms allow an organization to move one client at a time. A disaster recovery plan could involve two live sites servicing clients simultaneously (active-active) or one live site with failover (active-passive). In an active-active scenario, a single-farm deployment will likely not pose a problem. In an active-passive scenario, however, the entire farm must be completely moved to the disaster recovery site before any client can be back up and running. This could lead to a significant client-satisfaction issue (above and beyond the expected “What do you mean everything is down?”) if a particular client needs its applications immediately while other clients could wait until the next morning.
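To make the RTO point concrete, here is a hedged sketch of an active-passive failover. The two-hour per-farm cutover time and the client names are invented for illustration; the takeaway is that with a shared farm every client waits for the full cutover, while with separate farms a high-priority client can be brought up first:

```python
# Illustrative active-passive RTO comparison; all durations are
# hypothetical examples, not measured figures.

per_farm_cutover_hours = 2
clients = ["A", "B", "C", "D"]

# Single shared farm: nothing is usable until the whole farm has been
# moved, so every client's effective RTO is the full cutover time.
shared_farm_rto = per_farm_cutover_hours * len(clients)
print({c: shared_farm_rto for c in clients})  # every client waits 8 hours

# Separate farms: fail over in priority order; client A is up after the
# first cutover, B after the second, and so on.
separate_rto = {c: per_farm_cutover_hours * (i + 1)
                for i, c in enumerate(clients)}
print(separate_rto)  # {'A': 2, 'B': 4, 'C': 6, 'D': 8}
```

The worst-case client is no better off, but the business gets to choose who comes back first, which is exactly the flexibility a shared farm takes away.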
The question is not “Can all clients reside in a single farm?” but “Should they?” The answer will depend on the needs, preferences, and priorities of both the service provider and the clients, but more often than not, separate farms are the best long-term strategy. XenApp 6.5 offers significant advantages for a single-farm, multi-client deployment, so if you decide to go the single-farm route, XenApp 6.5 should be an absolute requirement. If you are running a single-farm deployment and decide to break it out into separate farms, the good news is that this can be done gradually, client by client, and you will likely see progressively better performance and stability as each client is migrated to its own farm.