Load balancing Storage Made Easy File Fabric
Benefits of load balancing Storage Made Easy File Fabric
Load balancing Storage Made Easy (SME) File Fabric offers the following benefits:
- High Availability (HA): Load balancing is essential for an SME File Fabric deployment to achieve High Availability. If one of the SME Web or SQL servers fails (due to hardware issues, software crashes, or maintenance), the load balancer automatically detects the failure and redirects all user traffic and internal requests to the remaining healthy servers (see the sketch after this list). This removes the single point of failure, ensuring that users maintain continuous access to the unified data plane, which is critical for remote work and enterprise file services.
- Optimized performance: The File Fabric manages connections to multiple cloud and on-premises storage sources. By distributing the load—including client web portal access, SFTP requests, and database lookups (SQL)—across multiple server instances, the load balancer prevents any single server from being overwhelmed. This optimized resource utilization leads to faster file access, quicker web interface response times, and higher throughput for activities like file synchronization and content searching across the federated storage.
- Scalability: As the user base grows or the usage patterns increase (e.g., more users, more files, or increased use of features like deep content search), load balancing allows more SME File Fabric server instances (Web and SQL) to be seamlessly added to the cluster. The load balancer automatically integrates the new resources and distributes the workload, enabling the File Fabric solution to scale its capacity horizontally to meet fluctuating or long-term growth in enterprise file management demand.
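As a purely conceptual illustration of the health-check behaviour described above (not the appliance’s actual implementation), the following Python sketch shows how a pool of SME Web servers could be checked and rotated so that a failed server is skipped automatically. The server addresses are hypothetical examples.

```python
# Conceptual sketch only: combining TCP health checks with round-robin
# distribution so that failed SME Web servers drop out of rotation.
import itertools
import socket

SME_WEB_SERVERS = ["192.168.1.11", "192.168.1.12"]  # hypothetical Real Server IPs
HTTP_PORT = 80

def is_healthy(ip: str, port: int, timeout: float = 2.0) -> bool:
    """Basic TCP connect check; real load balancers offer richer health checks."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

_rotation = itertools.cycle(SME_WEB_SERVERS)

def pick_server() -> str | None:
    """Return the next healthy server in round-robin order, or None if all are down."""
    for _ in range(len(SME_WEB_SERVERS)):
        candidate = next(_rotation)
        if is_healthy(candidate, HTTP_PORT):
            return candidate
    return None
```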
About Storage Made Easy
Storage Made Easy provides an on-premises Enterprise File Fabric solution that is storage agnostic and can be used either with a single storage back-end or with multiple public/private storage systems. In the latter case, the File Fabric unifies the view across all access clients and implements common control and governance policies through the use of its cloud control features.
The product is supplied as a software ‘appliance’ which runs inside a hypervisor and consists of a preconfigured, ‘hardened’ operating system (CentOS) and the File Fabric Application provided by Storage Made Easy.
Why Loadbalancer.org for Storage Made Easy File Fabric?
Loadbalancer’s intuitive Enterprise Application Delivery Controller (ADC) is designed to save time and money with a clever, not complex, WebUI.
Easily configure, deploy, manage, and maintain our Enterprise load balancer, reducing complexity and the risk of human error, for a difference you can see in just minutes.
And with WAF and GSLB included straight out of the box, there are no hidden costs, so the prices you see on our website are fully transparent.
More on what’s possible with Loadbalancer.org.
How to load balance Storage Made Easy
The load balancer can be deployed in 4 fundamental ways: Layer 4 DR mode, Layer 4 NAT mode, Layer 4 SNAT mode, and Layer 7 Reverse Proxy (Layer 7 SNAT mode).
For File Fabric, using a combination of Layer 4 DR mode and Layer 7 Reverse Proxy is recommended.
It is also possible to use only Layer 7 Reverse Proxy; however, this setup does not perform as well and client source IP addresses are not passed through to the back-end SME servers.
Virtual service (VIP) requirements
To provide load balancing and HA for File Fabric, the following VIPs are required:
- Web portal access
- SQL
- Memcache
- SFTP
Ports
The following table shows the ports that are load balanced:
| Port | Protocols | Use |
|---|---|---|
| 80 | TCP/HTTP | Web Portal Access over HTTP |
| 443 | TCP/HTTPS | Web Portal Access over HTTPS |
| 3306 | TCP/SQL | SQL Service |
| 2200 | TCP/SFTP | SFTP Service |
| 12211 | TCP/Memcache | Memcache Service |
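Once the virtual services are in place, a quick sanity check is to confirm that each load-balanced port from the table above is reachable through the VIP. The short Python sketch below does this with simple TCP connections; the VIP address shown is a hypothetical example and should be replaced with the addresses used in your own deployment.

```python
# Minimal sketch: check that each load-balanced port responds through the VIP.
import socket

VIP = "192.168.1.100"  # hypothetical virtual service address
LOAD_BALANCED_PORTS = {
    80: "Web Portal Access over HTTP",
    443: "Web Portal Access over HTTPS",
    3306: "SQL Service",
    2200: "SFTP Service",
    12211: "Memcache Service",
}

for port, service in LOAD_BALANCED_PORTS.items():
    try:
        with socket.create_connection((VIP, port), timeout=3):
            print(f"{service} (TCP {port}): reachable")
    except OSError as exc:
        print(f"{service} (TCP {port}): NOT reachable ({exc})")
```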
Load balancing deployment concept
To deploy File Fabric as an HA deployment, 4 SME File Fabric instances are needed. When configured as per the Storage Made Easy guides, the topology will be as follows:
- 2 SME Web servers
- 2 SME SQL servers

Note
The load balancer can be deployed as a single unit, although Loadbalancer.org recommends a clustered pair for resilience and high availability.
About Layer 4 DR mode
One-arm direct routing (DR) mode is a very high performance solution that requires little change to your existing infrastructure.

DR mode works by changing the destination MAC address of the incoming packet on the fly to match the selected Real Server, which is very fast.
Because the destination IP address of the packet is unchanged, the Real Server must own the Virtual Service’s IP address (VIP). This means that you need to ensure that the Real Server (and the load balanced application) respond to both the Real Server’s own IP address and the VIP.
The Real Servers should not respond to ARP requests for the VIP. Only the load balancer should do this. Configuring the Real Servers in this way is referred to as Solving the ARP problem.
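As an illustrative aside (not the official procedure from the Storage Made Easy or Loadbalancer.org deployment guides), one common way of solving the ARP problem on Linux Real Servers such as the CentOS-based File Fabric appliance is to set the arp_ignore and arp_announce sysctls and then bind the VIP to the loopback interface. The Python sketch below simply checks the two sysctl values:

```python
# Illustrative sketch only: check the sysctl settings commonly used on Linux
# Real Servers so they do not answer ARP requests for the VIP.
from pathlib import Path

EXPECTED = {
    "/proc/sys/net/ipv4/conf/all/arp_ignore": "1",
    "/proc/sys/net/ipv4/conf/all/arp_announce": "2",
}

for path, expected in EXPECTED.items():
    actual = Path(path).read_text().strip()
    status = "OK" if actual == expected else f"expected {expected}"
    print(f"{path} = {actual} ({status})")
```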
On average, DR mode is 8 times quicker than NAT for HTTP, 50 times quicker for Terminal Services and much, much faster for streaming media or FTP.
The load balancer must have an Interface in the same subnet as the Real Servers to ensure Layer 2 connectivity required for DR mode to work.
The VIP can be brought up on the same subnet as the Real Servers, or on a different subnet provided that the load balancer has an interface in that subnet.
Port translation is not possible with DR mode, e.g. VIP:80 → RIP:8080 is not supported. DR mode is transparent, i.e. the Real Server will see the source IP address of the client.
About Layer 7 Reverse Proxy load balancing
Layer 7 Reverse Proxy uses a proxy (HAProxy) at the application layer. Inbound requests are terminated on the load balancer and HAProxy generates a new corresponding request to the chosen Real Server. As a result, Layer 7 is typically not as fast as the Layer 4 methods.
Layer 7 is typically chosen when enhanced options such as SSL termination, cookie based persistence, URL rewriting, header insertion/deletion etc. are required, or when the network topology prohibits the use of the Layer 4 methods.

Because Layer 7 Reverse Proxy is a full proxy, any server in the cluster can be on any accessible subnet, including across the Internet or WAN.
Layer 7 Reverse Proxy is not transparent by default, i.e. the Real Servers will not see the source IP address of the client; instead they will see the load balancer’s own IP address, or another local appliance IP address if preferred (e.g. the VIP address). This can be configured per Layer 7 VIP.
If required, the load balancer can be configured to provide the actual client IP address to the Real Servers in two ways:
- Either by inserting a header that contains the client’s source IP address (see the sketch after this list), or
- By modifying the Source Address field of the IP packets and replacing the IP address of the load balancer with the IP address of the client.
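As a minimal illustration of the header-insertion option, assuming the load balancer is configured to insert the commonly used X-Forwarded-For header (this code is not part of the File Fabric product), a back-end HTTP service could recover the original client address like this:

```python
# Minimal sketch: a back-end service recovering the client IP from a
# proxy-inserted header, falling back to the proxy's address if absent.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ClientIPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        forwarded = self.headers.get("X-Forwarded-For")
        client_ip = forwarded.split(",")[0].strip() if forwarded else self.client_address[0]
        body = f"Client IP: {client_ip}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ClientIPHandler).serve_forever()
```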
Layer 7 Reverse Proxy mode can be deployed using either a one-arm or two-arm configuration. For two-arm deployments, eth0 is normally used for the internal network and eth1 is used for the external network, although this is not mandatory.
No mode-specific configuration changes to the load balanced Real Servers are required.
Port translation is possible with Layer 7 Reverse Proxy e.g. VIP:80 → RIP:8080 is supported. You should not use the same RIP:PORT combination for Layer 7 Reverse Proxy VIPs and Layer 4 SNAT mode VIPs because the required firewall rules conflict.

