
HAProxy “backup” and “option redispatch”



I’m testing out my new infrastructure-in-progress for redundant web hosting, described in my last post. To quickly outline the components again, there are a total of four VMs:

  • Two small & inexpensive instances in a tier 4 datacenter running Apache Traffic Server + HAProxy.
  • One bigger, rather spendy backend VM in a tier 4 datacenter running Apache, webapp code, databases, etc. Call it hosting1.
  • One VM roughly equivalent in specs to the backend VM above, but running on a host machine in my home office so its marginal cost to me is $0. Call it hosting2.

The thinking is that hosting1 is going to be available 99%+ of the time, so “free” is a nice price for the backup VM relative to the bandwidth / latency hit I’ll very occasionally take by serving requests over my home Internet connection. Apache Traffic Server will return cached copies of lots of the big static media anyway. But this plan requires HAProxy to get with the program – don’t EVER proxy to hosting2 unless hosting1 is down.

HAProxy does have a configuration for that – you simply mark the server(s) you want to use as backups with, sensibly enough, “backup,” and they’ll only see traffic if all other servers are failed. However, when testing this, at least under HAProxy 1.4.24, there’s a little problem: option redispatch doesn’t work. Option redispatch is supposed to smooth things over for requests that happen to come in during the interval after the backend has failed but before the health checks have declared it down, by reissuing the proxied request to another server in the pool. Instead, when you lose the last (and for me, only) backend in the proper load balance pool, requests received during this interval wait until the health checks establish the non-backup backend as down, and then come back as a 503: “No server is available to handle this request.”
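For reference, the configuration that showed this behavior was essentially the textbook “backup” arrangement. A minimal sketch of it (not my verbatim config; the addresses match the working config further down) looks like this:

listen puppet00
  bind 127.0.0.1:8100
  mode http
  balance roundrobin
  option redispatch
  retries 2
  # hosting1 takes all traffic while its health checks pass
  server hosting1 10.0.1.100:80 check
  # hosting2 is only eligible once every non-backup server is marked down
  server hosting2 10.0.2.2:80 check backup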

I did a quick test and reconfigured hosting1 and hosting2 to both be regular load balance backends. With this more typical configuration, option redispatch worked as advertised, but I now ran the risk of traffic being directed at hosting2 when it didn’t need to be.

Through some experimentation, I’ve come up with a configuration that my testing indicates gives the effect of “backup” but puts both servers in the regular load balance pool, so option redispatch works. The secret, I think, is a “stick match” based on destination port — so matching all incoming requests — combined with a very imbalanced weight favoring hosting1.

Here’s a complete config that is working out for me:

global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  log  10.2.3.4 local0
  maxconn  4000
  pidfile  /var/run/haproxy.pid
  stats  socket /var/lib/haproxy/stats
  user  haproxy

defaults
  log  global
  maxconn  8000
  option  redispatch
  retries  3
  stats  enable
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen puppet00
  bind 127.0.0.1:8100
  mode http
  balance static-rr
  option redispatch
  retries 2
  # dst_port is the same for every incoming request, so all traffic maps
  # to a single stick-table entry
  stick match dst_port
  stick-table type integer size 100 expire 96h
  # lopsided weights: hosting1 should get essentially all of the traffic
  server hosting1 10.0.1.100:80 check weight 100
  server hosting2 10.0.2.2:80 check weight 1

I tested this by putting different index.html files on my backend servers and hitting HAProxy with cURL every 0.5 seconds for about an hour, using the `watch` command to highlight any difference in the data returned:

watch -x --differences=permanent -n 0.5 curl -H 'Host: www.mysite.com' http://104.156.201.58 --stderr /dev/null

It didn’t return the index.html from hosting2 a single time until I shut down Apache on hosting1 after about an hour.
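If you want to watch the failover from HAProxy’s side while running a test like this, the stats socket declared in the global section can be queried for server state. Something along these lines should work, assuming socat is installed and the socket path from the config above:

echo "show stat" | socat stdio UNIX-CONNECT:/var/lib/haproxy/stats

The output is CSV; the status column shows each server flipping between UP and DOWN as the health checks catch up.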

There’s a small amount of magic here that I unfortunately don’t understand, but I’ll resign myself to being happy that it works as desired. Once failed over to hosting2, I would kind of expect the stick table to update and cause everything to stay with hosting2 even after hosting1 comes back. In actuality, traffic returns to hosting1. So, cool, I guess.
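One way to poke at that magic would be to dump the stick table over the same stats socket and see whether anything is actually stored in it. As far as I know that command only showed up in HAProxy 1.5+, not the 1.4.24 I’m running here, but on a newer build it would look something like:

echo "show table puppet00" | socat stdio UNIX-CONNECT:/var/lib/haproxy/stats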
