For a home server or lab setup, you simply do not need to have a high availability load balancer. However, if you wanted to set one up, you could build one without a great deal of expense or effort.
A load balancer is a pair of computers configured to listen on an IP address and port(s) and route traffic to one of a series of servers based on a metric. On a load-balanced website, each incoming connection consumes server resources; the load balancer spreads those connections across several servers.
I first worked professionally with load balancers about twenty years ago. At that time, you either purchased a very expensive load balancer from Barracuda, or you jumped into the Linux world and compiled one. My first load balancer consisted of two 1U servers running Keepalived and HAProxy. The servers were relatively cheap at the time, about $1,000 each, and we needed two servers so we could reboot one of them without bringing the website down. They were about the cheapest you could get at the time. So, my first load balancer cost about $2K and took about two weeks to set up and configure. This homebrew load balancer worked extremely well and ran for about two years before we replaced it with a pair of Barracuda load balancers, which cost about $20,000 at that time.
These days, with the abundance of Raspberry Pi single-board computers and cheap mini PCs, you can easily set up a load balancer for the home lab. First and foremost, a Raspberry Pi or mini PC uses very little power, and therefore running these devices around the clock does not add significantly to your power bill.
Operations
Our load balancer will consist of two computers, Load1 and Load2, running Keepalived and HAProxy. These two computers are configured to share and “keep alive” a virtual IP address. This IP address will remain reachable as long as at least one of the two load balancer computers is online.
The firewall needs to forward ports 80 and 443 to the virtual IP address maintained by the load balancers. The HAProxy software is configured on both computers to proxy traffic to your web server. There is no requirement to have multiple web servers; however, you can configure a sorry server on your balancer. A sorry server is an inexpensive server which serves web pages only when the primary web servers are offline.
With this configuration, you can restart either of the load balancers at any time. During a restart of one load balancer, your web traffic will not be affected. If you operate multiple web servers, you can also restart them as needed. If all of the web servers are offline, HAProxy routes traffic to the sorry server, which displays a “server maintenance” web page until one of the production web servers comes back online.
Hardware Requirements
You can build an inexpensive pair of load balancers on a pair of Raspberry Pi single-board computers. If you happen to run a virtual machine server, you could simply spin up two very small virtual machines. Regardless, you can build a load balancer on some very lightweight hardware. I built mine on two ASUS Eee PC boxes which are about 10 years old. At some point, I may replace them with a Dell mini PC. However, the point remains the same: you don’t need a lot of CPU to run a home lab load balancer.
Software Requirements
A load balancer consists of two pieces of software running on a Linux system. I typically use the Debian operating system; however, this software should run on almost any version of Linux you choose. Please verify that the version of Linux you choose offers prebuilt packages of Keepalived and HAProxy. Otherwise, you may be compiling the software yourself.
Debian is available for free download at https://www.debian.org.
Keepalived
The main goal of the Keepalived software is to provide simple and robust facilities for load balancing and high availability on Linux-based operating systems. Its load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, which provides Layer 4 load balancing.
For our purposes, Keepalived is used to create a virtual IP address on your computer network. The software is ideally installed on at least two different computers and keeps the IP address active when one of the servers fails or is restarted. The Keepalived instances communicate with each other, and when the primary computer fails, the backup computer promotes itself to keep the IP address alive.
Installation
To install Keepalived on a Debian-based operating system, run the following command from the terminal.
foo@load1:/home/foo$ sudo apt install keepalived
Configuration
Configuring Keepalived is pretty simple. On Debian, the configuration file is stored at:
/etc/keepalived/keepalived.conf
You can edit this file with your favorite text editor and add or modify the following lines:
vrrp_instance VI_1 {
    state MASTER
    interface enp1s0
    virtual_router_id 101
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass XXXX
    }
    virtual_ipaddress {
        AAA.BBB.CCC.100
    }
}
The three items you will need to change are:
- interface enp1s0 – The interface parameter needs to match the Ethernet interface you wish to bind to. Your interface name can be found with the command: ip addr list
- auth_pass XXXX – The auth_pass parameter simply needs to match between the two balancers
- virtual_ipaddress – The virtual_ipaddress parameter needs to be the internal IP address you wish to “keep alive”. This is the address that you port forward from your router and configure in your HAProxy server.
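The configuration shown above is for the primary (MASTER) node. As a sketch, the second balancer's /etc/keepalived/keepalived.conf differs only in its state and priority; everything else, including auth_pass, virtual_router_id, and the virtual IP, must match:

```
vrrp_instance VI_1 {
    state BACKUP
    interface enp1s0
    virtual_router_id 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass XXXX
    }
    virtual_ipaddress {
        AAA.BBB.CCC.100
    }
}
```

With the lower priority, this node only claims the virtual IP when it stops hearing VRRP advertisements from the master.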
Testing
To test the changes to Keepalived, restart the service:
foo@load1:/home/foo$ sudo systemctl restart keepalived
Once the service is restarted, check that the virtual IP address is active with the command:
foo@load1:/home/foo$ ip addr list
In the output of the ip addr list command, you should see your new IP address listed under the configured interface.
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 90:e6:ba:c2:7a:dc brd ff:ff:ff:ff:ff:ff
    inet AAA.BBB.CCC.DDD/24 brd AAA.BBB.CCC.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet AAA.BBB.CCC.100/32 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::92e6:baff:fec2:7adc/64 scope link
       valid_lft forever preferred_lft forever
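If you would like to script this check, for example from a monitoring cron job, a minimal sketch might look like the following; the interface name enp1s0 and the example VIP are assumptions you would replace with your own values:

```shell
#!/bin/sh
# Report whether a virtual IP is currently bound on a given interface.
# Prints a MASTER/BACKUP style message based on what `ip addr` shows.
check_vip() {
    iface="$1"
    vip="$2"
    # Match "inet <vip>/" so AAA.BBB.CCC.10 does not also match AAA.BBB.CCC.100
    if ip -4 addr show dev "$iface" 2>/dev/null | grep -q "inet ${vip}/"; then
        echo "MASTER: ${vip} is bound on ${iface}"
    else
        echo "BACKUP: ${vip} is not bound on ${iface}"
    fi
}

# Example: substitute your own interface and virtual IP address.
check_vip enp1s0 192.168.0.100
```

Running this on each balancer tells you at a glance which node currently holds the virtual IP.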
This process needs to be done on both systems in your load balancer. You do not need identical computers, but you do need to pay attention to the interface parameter on each machine. On the second system, set state to BACKUP and give it a lower priority (for example, 100) so the two instances negotiate the master role correctly.
HAProxy
HAProxy is a fast and reliable load-balancing reverse proxy service for Linux. The HAProxy service allows you to configure multiple websites on multiple ports. You can have many web servers, or just one. For my home lab implementation, I am running one web server along with a backup sorry server.
Installation
To install HAProxy on a Debian based operating system, from the terminal run the following command.
foo@load1:/home/foo$ sudo apt install haproxy
Configuration
Configuring HAProxy is a bit more complicated. On Debian, the configuration file is located at:
/etc/haproxy/haproxy.cfg
You can edit this file with your favorite text editor and add or modify the following lines:
frontend www_http
    bind *:80
    mode tcp
    option tcplog
    default_backend http_backend_servers

backend http_backend_servers
    mode tcp
    balance roundrobin
    server s1 AAA.BBB.CCC.101:80 check
    server s2 AAA.BBB.CCC.110:80 check backup

frontend www_https
    bind *:443
    mode tcp
    option tcplog
    default_backend https_backend_servers

backend https_backend_servers
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server s1 AAA.BBB.CCC.101:443 check
    server s2 AAA.BBB.CCC.110:443 check backup
Parameters
The frontend parameter allows you to create and name your proxy instance. In our example, I created a frontend named www_http. This instance listens on TCP port 80 and forwards traffic to the backend instance http_backend_servers.
frontend www_http
    bind *:80
    mode tcp
    option tcplog
    default_backend http_backend_servers
The backend parameter allows you to define how the traffic arriving at your frontend proxy instance is routed. In our example, we are balancing round robin, meaning new connections are handed to each listed server in turn.
backend http_backend_servers
    mode tcp
    balance roundrobin
    server s1 AAA.BBB.CCC.101:80 check
    server s2 AAA.BBB.CCC.110:80 check backup
The parameter server s1 AAA.BBB.CCC.101:80 check defines the primary server to route traffic to. You can define more than one if your installation handles enough traffic to warrant it. The parameter server s2 AAA.BBB.CCC.110:80 check backup defines the sorry server, which only receives traffic when the main servers are offline.
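Round robin is not the only option; HAProxy ships several balance algorithms, and swapping one in is a one-line change. For example, using the standard leastconn algorithm on the same hypothetical backend:

```
backend http_backend_servers
    mode tcp
    # leastconn sends each new connection to the server with the fewest
    # active connections; "balance source" would instead hash the client IP
    # so a given visitor tends to reach the same server.
    balance leastconn
    server s1 AAA.BBB.CCC.101:80 check
    server s2 AAA.BBB.CCC.110:80 check backup
```

For a single web server plus a sorry server, the algorithm makes little difference; it matters once you add a second production server.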
This process must be done on both systems in your load balancer.
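As an optional addition, HAProxy includes a built-in statistics page that makes it easy to watch backend health while you test. A minimal sketch, where port 8404 and the /stats path are arbitrary choices, would be appended to haproxy.cfg on each balancer:

```
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
```

After restarting HAProxy, browsing to port 8404 at /stats on either balancer shows which servers are up and which have failed their health checks.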
Testing
To test your HAProxy instances, simply restart the HAProxy service.
foo@load1:/home/foo$ sudo systemctl restart haproxy
Then check the status of the HAProxy service, and you should see it handling your traffic.
foo@load1:/home/foo$ sudo systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; preset: enabled)
     Active: active (running) since Sat 2024-12-28 15:33:41 PST; 21h ago
       Docs: man:haproxy(1)
             file:/usr/share/doc/haproxy/configuration.txt.gz
   Main PID: 380 (haproxy)
      Tasks: 3 (limit: 2293)
     Memory: 48.3M
        CPU: 3min 32.448s
     CGroup: /system.slice/haproxy.service
             ├─380 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
             └─427 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
Dec 29 12:01:55 load1 haproxy[427]: 175.27.212.157:51306 [29/Dec/2024:12:01:54.570] www_http http_backend_servers/s1 1/0/769 509 -- 2/1/0/0/0 0/0
Dec 29 12:01:55 load1 haproxy[427]: 66.249.65.168:37240 [29/Dec/2024:12:01:48.622] www_https https_backend_servers/s1 1/0/6782 64203 -- 1/1/0/0/0 0/0
Dec 29 12:02:07 load1 haproxy[427]: 175.27.212.157:51877 [29/Dec/2024:12:02:05.495] www_http http_backend_servers/s1 1/0/2139 509 -- 3/1/0/0/0 0/0
Once the configuration is complete, you need to test it to ensure it functions as intended. Open your web browser and load your site. Take down one load balancer and check it again. Practice bringing your site up and down and verify it performs to your expectations. And have fun tweaking your custom load balancer!
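A quick way to watch behavior during these tests is to poll the site through the virtual IP from another machine. This sketch assumes curl is installed; the URL is a placeholder you would replace with your own virtual IP, and a printed status of 000 means the connection failed entirely:

```shell
#!/bin/sh
# Print the HTTP status code returned through the load balancer,
# or 000 if the connection could not be made at all.
probe() {
    curl -s -o /dev/null -w '%{http_code}' --max-time 2 "$1"
}

# Poll a few times while rebooting one balancer; the site should keep answering.
for i in 1 2 3; do
    probe "http://192.168.0.100/"   # placeholder: use your virtual IP
    echo ""
    sleep 1
done
```

If failover works, the status codes stay at 200 while one balancer reboots; a run of 000s means the virtual IP went unanswered.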