                     -------------------
                           HAProxy
                     Architecture Guide
                     -------------------
                        version 1.1.34
                         willy tarreau
                           2006/01/29

This document provides real-world examples with working configurations.
Please note that except where stated otherwise, global configuration parameters
such as logging, chrooting, limits and time-outs are not described here.
13
===================================================
1. Simple HTTP load-balancing with cookie insertion
===================================================
17
A web application often saturates the front-end server with high CPU load,
due to the scripting language involved. It also relies on a back-end database
which is not heavily loaded. User contexts are stored on the server itself, and
not in the database, so simply adding another server with plain IP/TCP
load-balancing would not work.
23
24 +-------+
25 |clients| clients and/or reverse-proxy
26 +---+---+
27 |
28 -+-----+--------+----
29 | _|_db
30 +--+--+ (___)
31 | web | (___)
32 +-----+ (___)
33 192.168.1.1 192.168.1.2
34
35
Replacing the web server with a bigger SMP system would cost much more than
adding low-cost pizza boxes. The solution is to buy N cheap boxes and install
the application on them. Install haproxy on the old one, which will spread the
load across the new boxes.
40
41 192.168.1.1 192.168.1.11-192.168.1.14 192.168.1.2
42 -------+-----------+-----+-----+-----+--------+----
43 | | | | | _|_db
44 +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
45 | LB1 | | A | | B | | C | | D | (___)
46 +-----+ +---+ +---+ +---+ +---+ (___)
47 haproxy 4 cheap web servers
48
49
Config on haproxy (LB1) :
-------------------------

    listen webfarm 192.168.1.1:80
       mode http
       balance roundrobin
       cookie SERVERID insert indirect
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check
62
63
Description :
-------------
 - LB1 will receive client requests.
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - in return, a cookie "SERVERID" will be inserted in the response, holding the
   server name (eg: "A").
 - when the client comes back with the cookie "SERVERID=A", LB1 will know that
   it must be forwarded to server A. The cookie will be removed so that the
   server does not see it.
 - if server "webA" dies, the requests will be sent to another valid server
   and a cookie will be reassigned.
76
77
78Flows :
79-------
80
81(client) (haproxy) (server A)
82 >-- GET /URI1 HTTP/1.0 ------------> |
83 ( no cookie, haproxy forwards in load-balancing mode. )
84 | >-- GET /URI1 HTTP/1.0 ---------->
85 | <-- HTTP/1.0 200 OK -------------<
86 ( the proxy now adds the server cookie in return )
87 <-- HTTP/1.0 200 OK ---------------< |
88 Set-Cookie: SERVERID=A |
89 >-- GET /URI2 HTTP/1.0 ------------> |
90 Cookie: SERVERID=A |
91 ( the proxy sees the cookie. it forwards to server A and deletes it )
92 | >-- GET /URI2 HTTP/1.0 ---------->
93 | <-- HTTP/1.0 200 OK -------------<
94 ( the proxy does not add the cookie in return because the client knows it )
95 <-- HTTP/1.0 200 OK ---------------< |
96 >-- GET /URI3 HTTP/1.0 ------------> |
97 Cookie: SERVERID=A |
98 ( ... )
99
100
Limits :
--------
 - if clients use keep-alive (HTTP/1.1), only the first response will have
   a cookie inserted, and only the first request of each session will be
   analyzed. This does not cause trouble in insertion mode because the cookie
   is put immediately in the first response, and the session is maintained on
   the same server for all subsequent requests. However, the cookie will not
   be removed from the requests forwarded to the servers, so the server must
   not be sensitive to unknown cookies. If this causes trouble, you can
   disable keep-alive by adding the following option :

       option httpclose

 - if for some reason the clients cannot learn more than one cookie (eg: the
   clients are in fact home-made applications or gateways), and the
   application already produces a cookie, you can use the "prefix" mode (see
   below).

 - LB1 becomes a very sensitive server. If LB1 dies, nothing works anymore.
   => you can back it up using keepalived (see below)

 - if the application needs to log the original client's IP, use the
   "forwardfor" option, which will add an "X-Forwarded-For" header with the
   original client's IP address. You must also use "httpclose" to ensure
   that you rewrite every request and not only the first one of each
   session :

        option httpclose
        option forwardfor
130
 - if the application needs to log the original destination IP, use the
   "originalto" option, which will add an "X-Original-To" header with the
   original destination IP address. You must also use "httpclose" to ensure
   that you rewrite every request and not only the first one of each
   session :

        option httpclose
        option originalto
139
   The web server will have to be configured to use this header instead.
   For example, on apache, you can use LogFormat for this (an equivalent
   example for "X-Original-To" follows below) :

        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b " combined
        CustomLog /var/log/httpd/access_log combined
145
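   Similarly, if "option originalto" is used, the destination address carried
   in the "X-Original-To" header can be logged the same way. The format name
   and log file below are only an illustration, not part of the setup above :

        LogFormat "%{X-Original-To}i %l %u %t \"%r\" %>s %b " combined-dst
        CustomLog /var/log/httpd/access_dst_log combined-dst
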
Hints :
-------
Sometimes on the internet, you will find a few percent of clients who disable
cookies on their browser. Obviously they have trouble everywhere on the web,
but you can still help them access your site by using the "source" balancing
algorithm instead of "roundrobin". It ensures that a given IP address always
reaches the same server as long as the number of servers remains unchanged.
Never use this behind a proxy or in a small network, because the distribution
will be unfair. However, in large internal networks, and on the internet, it
works quite well. Clients which have a dynamic address will not be affected as
long as they accept the cookie, because the cookie always has precedence over
load balancing :
158
159 listen webfarm 192.168.1.1:80
160 mode http
161 balance source
162 cookie SERVERID insert indirect
163 option httpchk HEAD /index.html HTTP/1.0
164 server webA 192.168.1.11:80 cookie A check
165 server webB 192.168.1.12:80 cookie B check
166 server webC 192.168.1.13:80 cookie C check
167 server webD 192.168.1.14:80 cookie D check
168

==================================================================
2. HTTP load-balancing with cookie prefixing and high availability
==================================================================
173
Now you don't want to add more cookies, but rather use existing ones. The
application already generates a "JSESSIONID" cookie which is enough to track
sessions, so we'll prefix this cookie with the server name when we see it.
Since the load-balancer becomes critical, it will be backed up with a second
one in VRRP mode using keepalived under Linux.

Download the latest version of keepalived from this site and install it
on each load-balancer LB1 and LB2 :

    http://www.keepalived.org/

You then have a shared IP between the two load-balancers (we will still use the
original IP). It is active only on one of them at any moment. To allow the
proxy to bind to the shared IP on Linux 2.4, you must enable it in /proc :

# echo 1 >/proc/sys/net/ipv4/ip_nonlocal_bind
190
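To make this setting persistent across reboots, the same parameter can also be
set through sysctl. The file location below is the usual default and may
differ on your distribution :

# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
# sysctl -p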
191
192 shared IP=192.168.1.1
193 192.168.1.3 192.168.1.4 192.168.1.11-192.168.1.14 192.168.1.2
194 -------+------------+-----------+-----+-----+-----+--------+----
195 | | | | | | _|_db
196 +--+--+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
197 | LB1 | | LB2 | | A | | B | | C | | D | (___)
198 +-----+ +-----+ +---+ +---+ +---+ +---+ (___)
199 haproxy haproxy 4 cheap web servers
200 keepalived keepalived
201
202
Config on both proxies (LB1 and LB2) :
--------------------------------------

    listen webfarm 192.168.1.1:80
       mode http
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check
217
218
Notes: the proxy will modify EVERY cookie sent by the client and the server,
so it is important that it can access ALL cookies in ALL requests for
each session. This implies that there is no keep-alive (HTTP/1.1), thus the
"httpclose" option. Only if you know for sure that the client(s) will never
use keep-alive (eg: Apache 1.3 in reverse-proxy mode) can you remove this
option.


Configuration for keepalived on LB1/LB2 :
-----------------------------------------

    vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
        script "killall -0 haproxy"     # cheaper than pidof
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
    }

    vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.1.1
        }
        track_script {
            chk_haproxy
        }
    }
248
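As the comment on the "priority" line indicates, LB2 runs the same
configuration with only the VRRP role and priority changed. A minimal sketch
of what differs on the backup node :

    vrrp_instance VI_1 {
        interface eth0
        state BACKUP                    # LB2 starts as backup
        virtual_router_id 51
        priority 100                    # lower than the master's 101
        virtual_ipaddress {
            192.168.1.1
        }
        track_script {
            chk_haproxy
        }
    }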

Description :
-------------
 - LB1 is VRRP master (keepalived), LB2 is backup. Both monitor the haproxy
   process, and lower their priority if it fails, leading to a failover to
   the other node.
 - LB1 will receive client requests on IP 192.168.1.1.
 - both load-balancers send their checks from their native IP.
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - in return, if a JSESSIONID cookie is seen, the server name will be prefixed
   into it, followed by a delimiter ('~')
 - when the client comes back with the cookie "JSESSIONID=A~xxx", LB1 will
   know that it must be forwarded to server A. The server name will then be
   extracted from the cookie before it is sent to the server.
 - if server "webA" dies, the requests will be sent to another valid server
   and a cookie will be reassigned.
266
267
268Flows :
269-------
270
271(client) (haproxy) (server A)
272 >-- GET /URI1 HTTP/1.0 ------------> |
273 ( no cookie, haproxy forwards in load-balancing mode. )
274 | >-- GET /URI1 HTTP/1.0 ---------->
275 | X-Forwarded-For: 10.1.2.3
276 | <-- HTTP/1.0 200 OK -------------<
277 ( no cookie, nothing changed )
278 <-- HTTP/1.0 200 OK ---------------< |
279 >-- GET /URI2 HTTP/1.0 ------------> |
280 ( no cookie, haproxy forwards in lb mode, possibly to another server. )
281 | >-- GET /URI2 HTTP/1.0 ---------->
282 | X-Forwarded-For: 10.1.2.3
283 | <-- HTTP/1.0 200 OK -------------<
284 | Set-Cookie: JSESSIONID=123
285 ( the cookie is identified, it will be prefixed with the server name )
286 <-- HTTP/1.0 200 OK ---------------< |
287 Set-Cookie: JSESSIONID=A~123 |
288 >-- GET /URI3 HTTP/1.0 ------------> |
289 Cookie: JSESSIONID=A~123 |
290 ( the proxy sees the cookie, removes the server name and forwards
291 to server A which sees the same cookie as it previously sent )
292 | >-- GET /URI3 HTTP/1.0 ---------->
293 | Cookie: JSESSIONID=123
294 | X-Forwarded-For: 10.1.2.3
295 | <-- HTTP/1.0 200 OK -------------<
296 ( no cookie, nothing changed )
297 <-- HTTP/1.0 200 OK ---------------< |
298 ( ... )
299
Hints :
-------
Sometimes, there will be some powerful servers in the farm, and some smaller
ones. In this situation, it may be desirable to tell haproxy to respect the
difference in performance. Let's consider that WebA and WebB are two old
P3-1.2 GHz while WebC and WebD are shiny new Opteron-2.6 GHz. If your
application scales with CPU, you may assume a very rough 2.6/1.2 performance
ratio between the servers. You can inform haproxy about this using the "weight"
keyword, with values between 1 and 256. It will then spread the load as
smoothly as possible while respecting those ratios :

    server webA 192.168.1.11:80 cookie A weight 12 check
    server webB 192.168.1.12:80 cookie B weight 12 check
    server webC 192.168.1.13:80 cookie C weight 26 check
    server webD 192.168.1.14:80 cookie D weight 26 check

316
========================================================
2.1 Variations involving external layer 4 load-balancers
========================================================
320
321Instead of using a VRRP-based active/backup solution for the proxies,
322they can also be load-balanced by a layer4 load-balancer (eg: Alteon)
323which will also check that the services run fine on both proxies :
324
325 | VIP=192.168.1.1
326 +----+----+
327 | Alteon |
328 +----+----+
329 |
330 192.168.1.3 | 192.168.1.4 192.168.1.11-192.168.1.14 192.168.1.2
331 -------+-----+------+-----------+-----+-----+-----+--------+----
332 | | | | | | _|_db
333 +--+--+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
334 | LB1 | | LB2 | | A | | B | | C | | D | (___)
335 +-----+ +-----+ +---+ +---+ +---+ +---+ (___)
336 haproxy haproxy 4 cheap web servers
337
338
Config on both proxies (LB1 and LB2) :
--------------------------------------

    listen webfarm 0.0.0.0:80
       mode http
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httplog
       option dontlognull
       option httpchk HEAD /index.html HTTP/1.0
       server webA 192.168.1.11:80 cookie A check
       server webB 192.168.1.12:80 cookie B check
       server webC 192.168.1.13:80 cookie C check
       server webD 192.168.1.14:80 cookie D check
355
356The "dontlognull" option is used to prevent the proxy from logging the health
357checks from the Alteon. If a session exchanges no data, then it will not be
358logged.
359
Config on the Alteon :
----------------------

    /c/slb/real 11
        ena
        name "LB1"
        rip 192.168.1.3
    /c/slb/real 12
        ena
        name "LB2"
        rip 192.168.1.4
    /c/slb/group 10
        name "LB1-2"
        metric roundrobin
        health tcp
        add 11
        add 12
    /c/slb/virt 10
        ena
        vip 192.168.1.1
    /c/slb/virt 10/service http
        group 10
382
383
384Note: the health-check on the Alteon is set to "tcp" to prevent the proxy from
385forwarding the connections. It can also be set to "http", but for this the
386proxy must specify a "monitor-net" with the Alteons' addresses, so that the
387Alteon can really check that the proxies can talk HTTP but without forwarding
388the connections to the end servers. Check next section for an example on how to
389use monitor-net.
390
391
============================================================
2.2 Generic TCP relaying and external layer 4 load-balancers
============================================================
395
Sometimes it's useful to be able to relay generic TCP protocols (SMTP, TSE,
VNC, etc...), for example to interconnect private networks. The problem comes
when you use external load-balancers which need to send periodic health-checks
to the proxies, because these health-checks get forwarded to the end servers.
The solution is to specify a network which will be dedicated to monitoring
systems and must neither lead to a forwarded connection nor to any log, using
the "monitor-net" keyword. Note: this feature requires haproxy version 1.1.32
or 1.2.6 or above.
404
405
406 | VIP=172.16.1.1 |
407 +----+----+ +----+----+
408 | Alteon1 | | Alteon2 |
409 +----+----+ +----+----+
410 192.168.1.252 | GW=192.168.1.254 | 192.168.1.253
411 | |
412 ------+---+------------+--+-----------------> TSE farm : 192.168.1.10
413 192.168.1.1 | | 192.168.1.2
414 +--+--+ +--+--+
415 | LB1 | | LB2 |
416 +-----+ +-----+
417 haproxy haproxy
418
419
420Config on both proxies (LB1 and LB2) :
421--------------------------------------
422
423 listen tse-proxy
424 bind :3389,:1494,:5900 # TSE, ICA and VNC at once.
425 mode tcp
426 balance roundrobin
427 server tse-farm 192.168.1.10
428 monitor-net 192.168.1.252/31
429
The "monitor-net" option instructs the proxies that any connection coming from
192.168.1.252 or 192.168.1.253 will neither be logged nor forwarded and will be
closed immediately. The Alteon load-balancers will then see the proxies alive
without disturbing the service.
434
Config on the Alteon :
----------------------

    /c/l3/if 1
        ena
        addr 192.168.1.252
        mask 255.255.255.0
    /c/slb/real 11
        ena
        name "LB1"
        rip 192.168.1.1
    /c/slb/real 12
        ena
        name "LB2"
        rip 192.168.1.2
    /c/slb/group 10
        name "LB1-2"
        metric roundrobin
        health tcp
        add 11
        add 12
    /c/slb/virt 10
        ena
        vip 172.16.1.1
    /c/slb/virt 10/service 1494
        group 10
    /c/slb/virt 10/service 3389
        group 10
    /c/slb/virt 10/service 5900
        group 10

466
Special handling of SSL :
-------------------------
Sometimes, you want to send health-checks to remote systems, even in TCP mode,
in order to be able to fail over to a backup server in case the first one is
dead. Of course, you can simply enable TCP health-checks, but it sometimes
happens that intermediate firewalls between the proxies and the remote servers
acknowledge the TCP connection themselves, showing an always-up server. Since
this is generally encountered on long-distance communications, which often
involve SSL, an SSL health-check has been implemented to work around this issue.
It sends SSL Hello messages to the remote server, which in turn replies with
SSL Hello messages. Setting it up is very easy :
478
479 listen tcp-syslog-proxy
480 bind :1514 # listen to TCP syslog traffic on this port (SSL)
481 mode tcp
482 balance roundrobin
483 option ssl-hello-chk
484 server syslog-prod-site 192.168.1.10 check
485 server syslog-back-site 192.168.2.10 check backup
486
487
=========================================================
3. Simple HTTP/HTTPS load-balancing with cookie insertion
=========================================================
491
492This is the same context as in example 1 above, but the web
493server uses HTTPS.
494
495 +-------+
496 |clients| clients
497 +---+---+
498 |
499 -+-----+--------+----
500 | _|_db
501 +--+--+ (___)
502 | SSL | (___)
503 | web | (___)
504 +-----+
505 192.168.1.1 192.168.1.2
506
507
Since haproxy does not handle SSL, this part will have to be extracted from the
servers (freeing even more resources) and installed on the load-balancer
itself. Install haproxy and apache+mod_ssl on the old box which will spread the
load between the new boxes. Apache will work as an SSL reverse-proxy-cache. If
the application is correctly developed, it might even lower its load. However,
since there now is a cache between the clients and haproxy, some security
measures must be taken to ensure that inserted cookies will not be cached.
515
516
517 192.168.1.1 192.168.1.11-192.168.1.14 192.168.1.2
518 -------+-----------+-----+-----+-----+--------+----
519 | | | | | _|_db
520 +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
521 | LB1 | | A | | B | | C | | D | (___)
522 +-----+ +---+ +---+ +---+ +---+ (___)
523 apache 4 cheap web servers
524 mod_ssl
525 haproxy
526
527
528Config on haproxy (LB1) :
529-------------------------
530
531 listen 127.0.0.1:8000
532 mode http
533 balance roundrobin
534 cookie SERVERID insert indirect nocache
535 option httpchk HEAD /index.html HTTP/1.0
536 server webA 192.168.1.11:80 cookie A check
537 server webB 192.168.1.12:80 cookie B check
538 server webC 192.168.1.13:80 cookie C check
539 server webD 192.168.1.14:80 cookie D check
540
541
Description :
-------------
 - apache on LB1 will receive client requests on port 443
 - it forwards them to haproxy bound to 127.0.0.1:8000 (a minimal apache
   sketch is shown after this list)
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - in return, a cookie "SERVERID" will be inserted in the response holding the
   server name (eg: "A"), and a "Cache-control: private" header will be added
   so that apache does not cache any page containing such a cookie.
 - when the client comes back with the cookie "SERVERID=A", LB1 will know that
   it must be forwarded to server A. The cookie will be removed so that the
   server does not see it.
 - if server "webA" dies, the requests will be sent to another valid server
   and a cookie will be reassigned.
556
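As an illustration of the apache side (module setup, certificate paths and
cache directives are assumptions and are not detailed here), the SSL front-end
forwarding to haproxy could look like this sketch :

    Listen 192.168.1.1:443
    <VirtualHost 192.168.1.1:443>
        SSLEngine on
        SSLCertificateFile    /etc/httpd/conf/server.crt
        SSLCertificateKeyFile /etc/httpd/conf/server.key
        ProxyPass        / http://127.0.0.1:8000/
        ProxyPassReverse / http://127.0.0.1:8000/
    </VirtualHost>
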
557Notes :
558-------
559 - if the cookie works in "prefix" mode, there is no need to add the "nocache"
560 option because it is an application cookie which will be modified, and the
561 application flags will be preserved.
562 - if apache 1.3 is used as a front-end before haproxy, it always disables
563 HTTP keep-alive on the back-end, so there is no need for the "httpclose"
564 option on haproxy.
565 - configure apache to set the X-Forwarded-For header itself, and do not do
566 it on haproxy if you need the application to know about the client's IP.
567
568
569Flows :
570-------
571
572(apache) (haproxy) (server A)
573 >-- GET /URI1 HTTP/1.0 ------------> |
574 ( no cookie, haproxy forwards in load-balancing mode. )
575 | >-- GET /URI1 HTTP/1.0 ---------->
576 | <-- HTTP/1.0 200 OK -------------<
577 ( the proxy now adds the server cookie in return )
578 <-- HTTP/1.0 200 OK ---------------< |
579 Set-Cookie: SERVERID=A |
580 Cache-Control: private |
581 >-- GET /URI2 HTTP/1.0 ------------> |
582 Cookie: SERVERID=A |
583 ( the proxy sees the cookie. it forwards to server A and deletes it )
584 | >-- GET /URI2 HTTP/1.0 ---------->
585 | <-- HTTP/1.0 200 OK -------------<
586 ( the proxy does not add the cookie in return because the client knows it )
587 <-- HTTP/1.0 200 OK ---------------< |
588 >-- GET /URI3 HTTP/1.0 ------------> |
589 Cookie: SERVERID=A |
590 ( ... )
591
592
593
========================================
3.1. Alternate solution using Stunnel
========================================
597
598When only SSL is required and cache is not needed, stunnel is a cheaper
599solution than Apache+mod_ssl. By default, stunnel does not process HTTP and
600does not add any X-Forwarded-For header, but there is a patch on the official
601haproxy site to provide this feature to recent stunnel versions.
602
603This time, stunnel will only process HTTPS and not HTTP. This means that
604haproxy will get all HTTP traffic, so haproxy will have to add the
605X-Forwarded-For header for HTTP traffic, but not for HTTPS traffic since
606stunnel will already have done it. We will use the "except" keyword to tell
607haproxy that connections from local host already have a valid header.
608
609
610 192.168.1.1 192.168.1.11-192.168.1.14 192.168.1.2
611 -------+-----------+-----+-----+-----+--------+----
612 | | | | | _|_db
613 +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
614 | LB1 | | A | | B | | C | | D | (___)
615 +-----+ +---+ +---+ +---+ +---+ (___)
616 stunnel 4 cheap web servers
617 haproxy
618
619
620Config on stunnel (LB1) :
621-------------------------
622
623 cert=/etc/stunnel/stunnel.pem
624 setuid=stunnel
625 setgid=proxy
626
627 socket=l:TCP_NODELAY=1
628 socket=r:TCP_NODELAY=1
629
630 [https]
631 accept=192.168.1.1:443
632 connect=192.168.1.1:80
633 xforwardedfor=yes
634
635
636Config on haproxy (LB1) :
637-------------------------
638
639 listen 192.168.1.1:80
640 mode http
641 balance roundrobin
642 option forwardfor except 192.168.1.1
643 cookie SERVERID insert indirect nocache
644 option httpchk HEAD /index.html HTTP/1.0
645 server webA 192.168.1.11:80 cookie A check
646 server webB 192.168.1.12:80 cookie B check
647 server webC 192.168.1.13:80 cookie C check
648 server webD 192.168.1.14:80 cookie D check
649
Description :
-------------
 - stunnel on LB1 will receive client requests on port 443
 - it forwards them to haproxy bound to port 80
 - haproxy will receive HTTP client requests on port 80 and decrypted SSL
   requests from Stunnel on the same port.
 - stunnel will add the X-Forwarded-For header
 - haproxy will add the X-Forwarded-For header for everyone except the local
   address (stunnel).
659
660
========================================
4. Soft-stop for application maintenance
========================================
664
When an application is spread across several servers, the time to update all
instances increases, so the application seems jerky for a longer period.

HAProxy offers several solutions for this. Although it cannot be reconfigured
without being stopped and does not offer any external command interface, there
are other working solutions.
671
672
=========================================
4.1 Soft-stop using a file on the servers
=========================================
676
This trick is quite common and very simple: put a file on the server which will
be checked by the proxy. When you want to stop the server, first remove this
file. The proxy will see the server as failed, and will not send it any new
sessions, only the old ones if the "persist" option is used. Wait a bit, then
stop the server when it does not receive any more connections.
682
683
684 listen 192.168.1.1:80
685 mode http
686 balance roundrobin
687 cookie SERVERID insert indirect
688 option httpchk HEAD /running HTTP/1.0
689 server webA 192.168.1.11:80 cookie A check inter 2000 rise 2 fall 2
690 server webB 192.168.1.12:80 cookie B check inter 2000 rise 2 fall 2
691 server webC 192.168.1.13:80 cookie C check inter 2000 rise 2 fall 2
692 server webD 192.168.1.14:80 cookie D check inter 2000 rise 2 fall 2
693 option persist
694 redispatch
695 contimeout 5000
696
697
Description :
-------------
 - every 2 seconds, haproxy will try to access the file "/running" on the
   servers, and declare the server as down after 2 attempts (4 seconds).
 - only the servers which respond with a 200 or 3XX response will be used.
 - if a request does not contain a cookie, it will be forwarded to a valid
   server
 - if a request contains a cookie for a failed server, haproxy will insist
   on trying to reach the server anyway, to let the user finish what they were
   doing. ("persist" option)
 - if the server is totally stopped, the connection will fail and the proxy
   will rebalance the client to another server ("redispatch")
710
711Usage on the web servers :
712--------------------------
713- to start the server :
714 # /etc/init.d/httpd start
715 # touch /home/httpd/www/running
716
717- to soft-stop the server
718 # rm -f /home/httpd/www/running
719
720- to completely stop the server :
721 # /etc/init.d/httpd stop
722
Limits
------
If the server is totally powered down, the proxy will still try to reach it
for those clients who still have a cookie referencing it. The connection
attempt will expire after 5 seconds ("contimeout"), and only after that will
the client be redispatched to another server. So this mode is only useful
for software updates where the server will suddenly refuse the connection
because the process is stopped. The problem is the same if the server suddenly
crashes: all of its users will be noticeably disturbed.
732
733
==================================
4.2 Soft-stop using backup servers
==================================
737
738A better solution which covers every situation is to use backup servers.
739Version 1.1.30 fixed a bug which prevented a backup server from sharing
740the same cookie as a standard server.
741
742
743 listen 192.168.1.1:80
744 mode http
745 balance roundrobin
746 redispatch
747 cookie SERVERID insert indirect
748 option httpchk HEAD / HTTP/1.0
749 server webA 192.168.1.11:80 cookie A check port 81 inter 2000
750 server webB 192.168.1.12:80 cookie B check port 81 inter 2000
751 server webC 192.168.1.13:80 cookie C check port 81 inter 2000
752 server webD 192.168.1.14:80 cookie D check port 81 inter 2000
753
754 server bkpA 192.168.1.11:80 cookie A check port 80 inter 2000 backup
755 server bkpB 192.168.1.12:80 cookie B check port 80 inter 2000 backup
756 server bkpC 192.168.1.13:80 cookie C check port 80 inter 2000 backup
757 server bkpD 192.168.1.14:80 cookie D check port 80 inter 2000 backup
758
759Description
760-----------
761Four servers webA..D are checked on their port 81 every 2 seconds. The same
762servers named bkpA..D are checked on the port 80, and share the exact same
763cookies. Those servers will only be used when no other server is available
764for the same cookie.
765
766When the web servers are started, only the backup servers are seen as
767available. On the web servers, you need to redirect port 81 to local
768port 80, either with a local proxy (eg: a simple haproxy tcp instance),
769or with iptables (linux) or pf (openbsd). This is because we want the
770real web server to reply on this port, and not a fake one. Eg, with
771iptables :
772
773 # /etc/init.d/httpd start
774 # iptables -t nat -A PREROUTING -p tcp --dport 81 -j REDIRECT --to-port 80
775
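On OpenBSD, a rough pf equivalent of this redirection would be the following
sketch (classic rdr syntax, with $ext_if standing for the server's interface) :

 # in /etc/pf.conf :
 rdr pass on $ext_if proto tcp from any to any port 81 -> 127.0.0.1 port 80
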
776A few seconds later, the standard server is seen up and haproxy starts to send
777it new requests on its real port 80 (only new users with no cookie, of course).
778
779If a server completely crashes (even if it does not respond at the IP level),
780both the standard and backup servers will fail, so clients associated to this
781server will be redispatched to other live servers and will lose their sessions.
782
783Now if you want to enter a server into maintenance, simply stop it from
784responding on port 81 so that its standard instance will be seen as failed,
785but the backup will still work. Users will not notice anything since the
786service is still operational :
787
788 # iptables -t nat -D PREROUTING -p tcp --dport 81 -j REDIRECT --to-port 80
789
790The health checks on port 81 for this server will quickly fail, and the
791standard server will be seen as failed. No new session will be sent to this
792server, and existing clients with a valid cookie will still reach it because
793the backup server will still be up.
794
795Now wait as long as you want for the old users to stop using the service, and
796once you see that the server does not receive any traffic, simply stop it :
797
798 # /etc/init.d/httpd stop
799
The associated backup server will in turn fail, and if any client still tries
to access this particular server, they will be redispatched to any other valid
server because of the "redispatch" option.
803
804This method has an advantage : you never touch the proxy when doing server
805maintenance. The people managing the servers can make them disappear smoothly.
806
807
4.2.1 Variations for operating systems without any firewall software
---------------------------------------------------------------------
810
811The downside is that you need a redirection solution on the server just for
812the health-checks. If the server OS does not support any firewall software,
813this redirection can also be handled by a simple haproxy in tcp mode :
814
815 global
816 daemon
817 quiet
818 pidfile /var/run/haproxy-checks.pid
819 listen 0.0.0.0:81
820 mode tcp
821 dispatch 127.0.0.1:80
822 contimeout 1000
823 clitimeout 10000
824 srvtimeout 10000
825
826To start the web service :
827
828 # /etc/init.d/httpd start
829 # haproxy -f /etc/haproxy/haproxy-checks.cfg
830
831To soft-stop the service :
832
833 # kill $(</var/run/haproxy-checks.pid)
834
The port 81 will stop responding and the load-balancer will notice the failure.

837
4.2.2 Centralizing the server management
----------------------------------------
840
If one finds it preferable to manage the servers from the load-balancer,
the port redirector can be installed on the load-balancer itself. See the
example with iptables below.
844
845Make the servers appear as operational :
846 # iptables -t nat -A OUTPUT -d 192.168.1.11 -p tcp --dport 81 -j DNAT --to-dest :80
847 # iptables -t nat -A OUTPUT -d 192.168.1.12 -p tcp --dport 81 -j DNAT --to-dest :80
848 # iptables -t nat -A OUTPUT -d 192.168.1.13 -p tcp --dport 81 -j DNAT --to-dest :80
849 # iptables -t nat -A OUTPUT -d 192.168.1.14 -p tcp --dport 81 -j DNAT --to-dest :80
850
851Soft stop one server :
852 # iptables -t nat -D OUTPUT -d 192.168.1.12 -p tcp --dport 81 -j DNAT --to-dest :80
853
854Another solution is to use the "COMAFILE" patch provided by Alexander Lazic,
855which is available for download here :
856
857 http://w.ods.org/tools/haproxy/contrib/
858
859
4.2.3 Notes :
-------------
 - Never, ever, start a fake service on port 81 for the health-checks, because
   a real web service failure will not be detected as long as the fake service
   runs. You must really forward the check port to the real application.

 - health-checks will be sent twice as often, once for each standard server,
   and once for each backup server. All this will be multiplied by the
   number of processes if you use multi-process mode. You will have to ensure
   that all the checks sent to the server do not overload it.

=======================
4.3 Hot reconfiguration
=======================
874
There are two types of haproxy users :
 - those who can never do anything in production outside of maintenance
   periods ;
 - those who can do anything at any time provided that the consequences are
   limited.

The first ones have no problem stopping the server to change the configuration
because they have maintenance periods during which they can break anything.
So they will even prefer doing a clean stop/start sequence to ensure everything
will work fine upon the next reload. Since those have represented the majority
of haproxy users, there has been little effort to improve this.

However, the second category is a bit different. They like to be able to fix an
error in a configuration file without anyone noticing. This can sometimes also
be the case for the first category because humans are not failsafe.
889
890For this reason, a new hot reconfiguration mechanism has been introduced in
891version 1.1.34. Its usage is very simple and works even in chrooted
892environments with lowered privileges. The principle is very simple : upon
893reception of a SIGTTOU signal, the proxy will stop listening to all the ports.
894This will release the ports so that a new instance can be started. Existing
895connections will not be broken at all. If the new instance fails to start,
896then sending a SIGTTIN signal back to the original processes will restore
897the listening ports. This is possible without any special privileges because
898the sockets will not have been closed, so the bind() is still valid. Otherwise,
899if the new process starts successfully, then sending a SIGUSR1 signal to the
900old one ensures that it will exit as soon as its last session ends.
901
A hot reconfiguration script would look like this :

  # save previous state
  mv /etc/haproxy/config /etc/haproxy/config.old
  mv /var/run/haproxy.pid /var/run/haproxy.pid.old

  mv /etc/haproxy/config.new /etc/haproxy/config
  kill -TTOU $(cat /var/run/haproxy.pid.old)
  if haproxy -p /var/run/haproxy.pid -f /etc/haproxy/config; then
    echo "New instance successfully loaded, stopping previous one."
    kill -USR1 $(cat /var/run/haproxy.pid.old)
    rm -f /var/run/haproxy.pid.old
    exit 0
  else
    echo "New instance failed to start, resuming previous one."
    kill -TTIN $(cat /var/run/haproxy.pid.old)
    rm -f /var/run/haproxy.pid
    mv /var/run/haproxy.pid.old /var/run/haproxy.pid
    mv /etc/haproxy/config /etc/haproxy/config.new
    mv /etc/haproxy/config.old /etc/haproxy/config
    exit 1
  fi
924
925After this, you can still force old connections to end by sending
926a SIGTERM to the old process if it still exists :
927
928 kill $(cat /var/run/haproxy.pid.old)
929 rm -f /var/run/haproxy.pid.old
930
931Be careful with this as in multi-process mode, some pids might already
932have been reallocated to completely different processes.
933

==================================================
5. Multi-site load-balancing with local preference
==================================================

5.1 Description of the problem
==============================
941
Consider a world-wide company with sites on several continents. There are two
production sites SITE1 and SITE2 which host identical applications. There are
many offices around the world. For speed and communication cost reasons, each
office uses the nearest site by default, but can switch to the backup site in
the event of a site or application failure. There are also users on the
production sites, who use their local site by default, but can switch to the
other site in case of a local application failure.
949
950The main constraints are :
951
952 - application persistence : although the application is the same on both
953 sites, there is no session synchronisation between the sites. A failure
954 of one server or one site can cause a user to switch to another server
955 or site, but when the server or site comes back, the user must not switch
956 again.
957
958 - communication costs : inter-site communication should be reduced to the
959 minimum. Specifically, in case of a local application failure, every
960 office should be able to switch to the other site without continuing to
961 use the default site.
962
5.2 Solution
============
965 - Each production site will have two haproxy load-balancers in front of its
966 application servers to balance the load across them and provide local HA.
967 We will call them "S1L1" and "S1L2" on site 1, and "S2L1" and "S2L2" on
968 site 2. These proxies will extend the application's JSESSIONID cookie to
969 put the server name as a prefix.
970
971 - Each production site will have one front-end haproxy director to provide
972 the service to local users and to remote offices. It will load-balance
973 across the two local load-balancers, and will use the other site's
974 load-balancers as backup servers. It will insert the local site identifier
975 in a SITE cookie for the local load-balancers, and the remote site
976 identifier for the remote load-balancers. These front-end directors will
977 be called "SD1" and "SD2" for "Site Director".
978
 - Each office will have one haproxy near the border gateway which will direct
   local users to their preferred site by default, or to the backup site in
   the event of a previous failure. It will also analyze the SITE cookie, and
   direct the users to the site referenced in the cookie. Thus, the preferred
   site will be declared as a normal server, and the backup site will be
   declared as a backup server only, which will only be used when the primary
   site is unreachable, or when the primary site's director has forwarded
   traffic to the second site. These proxies will be called "OP1".."OPXX"
   for "Office Proxy #XX".
988
989
5.3 Network diagram
===================

Note : offices 1 and 2 are on the same continent as site 1, while
       office 3 is on the same continent as site 2. Each production
       site can reach the second one either through the WAN or through
       a dedicated link.
997
998
999 Office1 Office2 Office3
1000 users users users
1001192.168 # # # 192.168 # # # # # #
1002.1.0/24 | | | .2.0/24 | | | 192.168.3.0/24 | | |
1003 --+----+-+-+- --+----+-+-+- ---+----+-+-+-
1004 | | .1 | | .1 | | .1
1005 | +-+-+ | +-+-+ | +-+-+
1006 | |OP1| | |OP2| | |OP3| ...
1007 ,-:-. +---+ ,-:-. +---+ ,-:-. +---+
1008 ( X ) ( X ) ( X )
1009 `-:-' `-:-' ,---. `-:-'
1010 --+---------------+------+----~~~( X )~~~~-------+---------+-
1011 | `---' |
1012 | |
1013 +---+ ,-:-. +---+ ,-:-.
1014 |SD1| ( X ) |SD2| ( X )
1015 ( SITE 1 ) +-+-+ `-:-' ( SITE 2 ) +-+-+ `-:-'
1016 |.1 | |.1 |
1017 10.1.1.0/24 | | ,---. 10.2.1.0/24 | |
1018 -+-+-+-+-+-+-+-----+-+--( X )------+-+-+-+-+-+-+-----+-+--
1019 | | | | | | | `---' | | | | | | |
1020 ...# # # # # |.11 |.12 ...# # # # # |.11 |.12
1021 Site 1 +-+--+ +-+--+ Site 2 +-+--+ +-+--+
1022 Local |S1L1| |S1L2| Local |S2L1| |S2L2|
1023 users +-+--+ +--+-+ users +-+--+ +--+-+
1024 | | | |
1025 10.1.2.0/24 -+-+-+--+--++-- 10.2.2.0/24 -+-+-+--+--++--
1026 |.1 |.4 |.1 |.4
1027 +-+-+ +-+-+ +-+-+ +-+-+
1028 |W11| ~~~ |W14| |W21| ~~~ |W24|
1029 +---+ +---+ +---+ +---+
1030 4 application servers 4 application servers
1031 on site 1 on site 2
1032
1033
1034
5.4 Description
===============

5.4.1 Local users
-----------------
1040 - Office 1 users connect to OP1 = 192.168.1.1
1041 - Office 2 users connect to OP2 = 192.168.2.1
1042 - Office 3 users connect to OP3 = 192.168.3.1
1043 - Site 1 users connect to SD1 = 10.1.1.1
1044 - Site 2 users connect to SD2 = 10.2.1.1
1045
5.4.2 Office proxies
--------------------
1048 - Office 1 connects to site 1 by default and uses site 2 as a backup.
1049 - Office 2 connects to site 1 by default and uses site 2 as a backup.
1050 - Office 3 connects to site 2 by default and uses site 1 as a backup.
1051
1052The offices check the local site's SD proxy every 30 seconds, and the
1053remote one every 60 seconds.
1054
1055
1056Configuration for Office Proxy OP1
1057----------------------------------
1058
1059 listen 192.168.1.1:80
1060 mode http
1061 balance roundrobin
1062 redispatch
1063 cookie SITE
1064 option httpchk HEAD / HTTP/1.0
1065 server SD1 10.1.1.1:80 cookie SITE1 check inter 30000
1066 server SD2 10.2.1.1:80 cookie SITE2 check inter 60000 backup
1067
1068
1069Configuration for Office Proxy OP2
1070----------------------------------
1071
1072 listen 192.168.2.1:80
1073 mode http
1074 balance roundrobin
1075 redispatch
1076 cookie SITE
1077 option httpchk HEAD / HTTP/1.0
1078 server SD1 10.1.1.1:80 cookie SITE1 check inter 30000
1079 server SD2 10.2.1.1:80 cookie SITE2 check inter 60000 backup
1080
1081
1082Configuration for Office Proxy OP3
1083----------------------------------
1084
1085 listen 192.168.3.1:80
1086 mode http
1087 balance roundrobin
1088 redispatch
1089 cookie SITE
1090 option httpchk HEAD / HTTP/1.0
1091 server SD2 10.2.1.1:80 cookie SITE2 check inter 30000
1092 server SD1 10.1.1.1:80 cookie SITE1 check inter 60000 backup
1093
1094
5.4.3 Site directors ( SD1 and SD2 )
------------------------------------
The site directors forward traffic to the local load-balancers, and set a
cookie to identify the site. If no local load-balancer is available, or if
the local application servers are all down, they will redirect traffic to the
remote site, and report this in the SITE cookie. In order not to uselessly
load each site's WAN link, each SD will check the other site at a lower
rate. The site directors will also insert their client's address so that
the application server knows which local user or remote site accesses it.

The SITE cookie which is set by these directors will also be understood
by the office proxies. This is important because if SD1 decides to forward
traffic to site 2, it will write "SITE2" in the "SITE" cookie, and on the next
request, the office proxy will automatically and directly talk to SITE2 if
it can reach it. If it cannot, it will still send the traffic to SITE1
where SD1 will in turn try to reach SITE2.

The load-balancer checks are performed on port 81. As we'll see further down,
the load-balancers provide a health monitoring port 81 which reroutes to
port 80 but which allows them to tell the SD that they are going down soon
and that the SD must not use them anymore.
1116
1117
1118Configuration for SD1
1119---------------------
1120
1121 listen 10.1.1.1:80
1122 mode http
1123 balance roundrobin
1124 redispatch
1125 cookie SITE insert indirect
1126 option httpchk HEAD / HTTP/1.0
1127 option forwardfor
1128 server S1L1 10.1.1.11:80 cookie SITE1 check port 81 inter 4000
1129 server S1L2 10.1.1.12:80 cookie SITE1 check port 81 inter 4000
1130 server S2L1 10.2.1.11:80 cookie SITE2 check port 81 inter 8000 backup
1131 server S2L2 10.2.1.12:80 cookie SITE2 check port 81 inter 8000 backup
1132
1133Configuration for SD2
1134---------------------
1135
1136 listen 10.2.1.1:80
1137 mode http
1138 balance roundrobin
1139 redispatch
1140 cookie SITE insert indirect
1141 option httpchk HEAD / HTTP/1.0
1142 option forwardfor
1143 server S2L1 10.2.1.11:80 cookie SITE2 check port 81 inter 4000
1144 server S2L2 10.2.1.12:80 cookie SITE2 check port 81 inter 4000
1145 server S1L1 10.1.1.11:80 cookie SITE1 check port 81 inter 8000 backup
1146 server S1L2 10.1.1.12:80 cookie SITE1 check port 81 inter 8000 backup
1147
1148
5.4.4 Local load-balancers S1L1, S1L2, S2L1, S2L2
-------------------------------------------------
Please first note that because SD1 and SD2 use the same cookie for both
servers on the same site, the second load-balancer of each site will only
receive load-balanced requests, but as soon as the SITE cookie is set,
only the first LB will receive the requests because it will be the
first one to match the cookie.

The load-balancers will spread the load across 4 local web servers, and
use the JSESSIONID provided by the application to provide server persistence
using the new 'prefix' method. Soft-stop will also be implemented as described
in section 4 above. Moreover, these proxies will provide their own maintenance
soft-stop. Port 80 will be used for application traffic, while port 81 will
only be used for health-checks and locally rerouted to port 80. A grace time
will be specified for the service on port 80, but not on port 81. This way, a
soft kill (kill -USR1) on the proxy will only kill the health-check forwarder
so that the site director knows it must not use this load-balancer anymore.
But the service will still work for 20 seconds and as long as there are
established sessions.

These proxies will also be the only ones to disable HTTP keep-alive in the
chain, because it is enough to do it in one place, and it's necessary to do
it with 'prefix' cookies.
1172
1173Configuration for S1L1/S1L2
1174---------------------------
1175
1176 listen 10.1.1.11:80 # 10.1.1.12:80 for S1L2
1177 grace 20000 # don't kill us until 20 seconds have elapsed
1178 mode http
1179 balance roundrobin
1180 cookie JSESSIONID prefix
1181 option httpclose
1182 option forwardfor
1183 option httpchk HEAD / HTTP/1.0
1184 server W11 10.1.2.1:80 cookie W11 check port 81 inter 2000
1185 server W12 10.1.2.2:80 cookie W12 check port 81 inter 2000
1186 server W13 10.1.2.3:80 cookie W13 check port 81 inter 2000
1187 server W14 10.1.2.4:80 cookie W14 check port 81 inter 2000
1188
1189 server B11 10.1.2.1:80 cookie W11 check port 80 inter 4000 backup
1190 server B12 10.1.2.2:80 cookie W12 check port 80 inter 4000 backup
1191 server B13 10.1.2.3:80 cookie W13 check port 80 inter 4000 backup
1192 server B14 10.1.2.4:80 cookie W14 check port 80 inter 4000 backup
1193
1194 listen 10.1.1.11:81 # 10.1.1.12:81 for S1L2
1195 mode tcp
1196 dispatch 10.1.1.11:80 # 10.1.1.12:80 for S1L2
1197
1198
1199Configuration for S2L1/S2L2
1200---------------------------
1201
1202 listen 10.2.1.11:80 # 10.2.1.12:80 for S2L2
1203 grace 20000 # don't kill us until 20 seconds have elapsed
1204 mode http
1205 balance roundrobin
1206 cookie JSESSIONID prefix
1207 option httpclose
1208 option forwardfor
1209 option httpchk HEAD / HTTP/1.0
1210 server W21 10.2.2.1:80 cookie W21 check port 81 inter 2000
1211 server W22 10.2.2.2:80 cookie W22 check port 81 inter 2000
1212 server W23 10.2.2.3:80 cookie W23 check port 81 inter 2000
1213 server W24 10.2.2.4:80 cookie W24 check port 81 inter 2000
1214
1215 server B21 10.2.2.1:80 cookie W21 check port 80 inter 4000 backup
1216 server B22 10.2.2.2:80 cookie W22 check port 80 inter 4000 backup
1217 server B23 10.2.2.3:80 cookie W23 check port 80 inter 4000 backup
1218 server B24 10.2.2.4:80 cookie W24 check port 80 inter 4000 backup
1219
1220 listen 10.2.1.11:81 # 10.2.1.12:81 for S2L2
1221 mode tcp
1222 dispatch 10.2.1.11:80 # 10.2.1.12:80 for S2L2
1223
1224
5.5 Comments
------------
1227Since each site director sets a cookie identifying the site, remote office
1228users will have their office proxies direct them to the right site and stick
1229to this site as long as the user still uses the application and the site is
1230available. Users on production sites will be directed to the right site by the
1231site directors depending on the SITE cookie.
1232
1233If the WAN link dies on a production site, the remote office users will not
1234see their site anymore, so they will redirect the traffic to the second site.
1235If there are dedicated inter-site links as on the diagram above, the second
1236SD will see the cookie and still be able to reach the original site. For
1237example :
1238
1239Office 1 user sends the following to OP1 :
1240 GET / HTTP/1.0
1241 Cookie: SITE=SITE1; JSESSIONID=W14~123;
1242
1243OP1 cannot reach site 1 because its external router is dead. So the SD1 server
1244is seen as dead, and OP1 will then forward the request to SD2 on site 2,
1245regardless of the SITE cookie.
1246
SD2 on site 2 receives a SITE cookie containing "SITE1". Fortunately, it
can reach Site 1's load balancers S1L1 and S1L2. So it forwards the request
to S1L1 (the first one with the same cookie).
1250
1251S1L1 (on site 1) finds "W14" in the JSESSIONID cookie, so it can forward the
1252request to the right server, and the user session will continue to work. Once
1253the Site 1's WAN link comes back, OP1 will see SD1 again, and will not route
1254through SITE 2 anymore.
1255
However, when a new user in Office 1 connects to the application during a
site 1 failure, the request does not contain any cookie. Since OP1 does not
see SD1 because of the network failure, it will direct the request to SD2 on
site 2, which will by default direct the traffic to the local load-balancers,
S2L1 and S2L2. So only initial users will load the inter-site link, not the
new ones.
1261
1262
===================
6. Source balancing
===================
1266
Sometimes it may prove useful to access servers from a pool of IP addresses
instead of only one or two. Some equipment (NAT firewalls, load-balancers)
is sensitive to the source address, and often needs many sources to distribute
the load evenly amongst its internal hash buckets.
1271
To do this, you simply have to use the same server several times with a
different source. Example :
1274
1275 listen 0.0.0.0:80
1276 mode tcp
1277 balance roundrobin
1278 server from1to1 10.1.1.1:80 source 10.1.2.1
1279 server from2to1 10.1.1.1:80 source 10.1.2.2
1280 server from3to1 10.1.1.1:80 source 10.1.2.3
1281 server from4to1 10.1.1.1:80 source 10.1.2.4
1282 server from5to1 10.1.1.1:80 source 10.1.2.5
1283 server from6to1 10.1.1.1:80 source 10.1.2.6
1284 server from7to1 10.1.1.1:80 source 10.1.2.7
1285 server from8to1 10.1.1.1:80 source 10.1.2.8
1286

=============================================
7. Managing high loads on application servers
=============================================
1291
One of the roles often expected from a load balancer is to mitigate the load on
the servers during traffic peaks. More and more often, we see heavy frameworks
used to deliver flexible and evolving web designs, at the cost of high loads
on the servers, or very low concurrency. Sometimes, response times are also
rather high. People developing web sites relying on such frameworks very often
look for a load balancer which is able to distribute the load as evenly as
possible and which will be gentle with the servers.

There is a powerful feature in haproxy which achieves exactly this : request
queueing associated with a concurrent connection limit.
1302
1303Let's say you have an application server which supports at most 20 concurrent
1304requests. You have 3 servers, so you can accept up to 60 concurrent HTTP
1305connections, which often means 30 concurrent users in case of keep-alive (2
1306persistent connections per user).
1307
Even if you disable keep-alive, if the server takes a long time to respond,
you still have a high risk of multiple users clicking at the same time and
having their requests unserved because of server saturation. To work around
the problem, you increase the concurrent connection limit on the servers,
but their performance stalls under higher loads.
1313
1314The solution is to limit the number of connections between the clients and the
1315servers. You set haproxy to limit the number of connections on a per-server
1316basis, and you let all the users you want connect to it. It will then fill all
1317the servers up to the configured connection limit, and will put the remaining
1318connections in a queue, waiting for a connection to be released on a server.
1319
This ensures five essential principles :

 - all clients can be served whatever their number without crashing the
   servers, the only impact is that the response time can be delayed.
1324
1325 - the servers can be used at full throttle without the risk of stalling,
1326 and fine tuning can lead to optimal performance.
1327
1328 - response times can be reduced by making the servers work below the
1329 congestion point, effectively leading to shorter response times even
1330 under moderate loads.
1331
1332 - no domino effect when a server goes down or starts up. Requests will be
1333 queued more or less, always respecting servers limits.
1334
1335 - it's easy to achieve high performance even on memory-limited hardware.
1336 Indeed, heavy frameworks often consume huge amounts of RAM and not always
1337 all the CPU available. In case of wrong sizing, reducing the number of
1338 concurrent connections will protect against memory shortages while still
1339 ensuring optimal CPU usage.
1340
1341
Example :
---------

HAProxy is installed in front of an application server farm. It will limit
the concurrent connections to 4 per server (one thread per CPU), thus ensuring
very fast response times.
1348
1349
1350 192.168.1.1 192.168.1.11-192.168.1.13 192.168.1.2
1351 -------+-------------+-----+-----+------------+----
1352 | | | | _|_db
1353 +--+--+ +-+-+ +-+-+ +-+-+ (___)
1354 | LB1 | | A | | B | | C | (___)
1355 +-----+ +---+ +---+ +---+ (___)
1356 haproxy 3 application servers
1357 with heavy frameworks
1358
1359
1360Config on haproxy (LB1) :
1361-------------------------
1362
1363 listen appfarm 192.168.1.1:80
1364 mode http
1365 maxconn 10000
1366 option httpclose
1367 option forwardfor
1368 balance roundrobin
1369 cookie SERVERID insert indirect
1370 option httpchk HEAD /index.html HTTP/1.0
1371 server railsA 192.168.1.11:80 cookie A maxconn 4 check
1372 server railsB 192.168.1.12:80 cookie B maxconn 4 check
1373 server railsC 192.168.1.13:80 cookie C maxconn 4 check
1374 contimeout 60000
1375
1376
1377Description :
1378-------------
1379The proxy listens on IP 192.168.1.1, port 80, and expects HTTP requests. It
1380can accept up to 10000 concurrent connections on this socket. It follows the
1381roundrobin algorithm to assign servers to connections as long as servers are
1382not saturated.
1383
1384It allows up to 4 concurrent connections per server, and will queue the
1385requests above this value. The "contimeout" parameter is used to set the
1386maximum time a connection may take to establish on a server, but here it
1387is also used to set the maximum time a connection may stay unserved in the
1388queue (1 minute here).
1389
1390If the servers can each process 4 requests in 10 ms on average, then at 3000
1391connections, response times will be delayed by at most :
1392
1393 3000 / 3 servers / 4 conns * 10 ms = 2.5 seconds
1394
1395Which is not that dramatic considering the huge number of users for such a low
1396number of servers.
1397
1398When connection queues fill up and application servers are starving, response
1399times will grow and users might abort by clicking on the "Stop" button. It is
1400very undesirable to send aborted requests to servers, because they will eat
1401CPU cycles for nothing.
1402
An option has been added to handle this specific case : "option abortonclose".
By specifying it, you tell haproxy that if an input channel is closed on the
client side AND the request is still waiting in the queue, then it is highly
likely that the user has stopped, so we remove the request from the queue
before it gets served.
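
For the "appfarm" setup above, which does not use it yet, this is a single
extra line in the listen section (it is also present in the final example
below) :

    listen appfarm 192.168.1.1:80
       mode http
       option abortonclose
       # ... rest of the section unchanged ...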
1408
1409
Managing unfair response times
------------------------------
1412
1413Sometimes, the application server will be very slow for some requests (eg:
1414login page) and faster for other requests. This may cause excessive queueing
1415of expectedly fast requests when all threads on the server are blocked on a
1416request to the database. Then the only solution is to increase the number of
1417concurrent connections, so that the server can handle a large average number
1418of slow connections with threads left to handle faster connections.
1419
But as we have seen, increasing the number of connections on the servers can
be detrimental to performance (eg: Apache processes fighting for the accept()
lock). To improve this situation, the "minconn" parameter has been introduced.
When it is set, the maximum connection concurrency on the server will be bound
by this value, and the limit will increase with the number of clients waiting
in the queue, until the number of clients connected to haproxy reaches the
proxy's maxconn, in which case the connections per server will reach the
server's maxconn. It means that during low-to-medium loads, the minconn will
be applied, and during surges the maxconn will be applied. This ensures both
optimal response times under normal loads, and availability under very high
loads.
1430
Example :
---------
1433
1434 listen appfarm 192.168.1.1:80
1435 mode http
1436 maxconn 10000
1437 option httpclose
1438 option abortonclose
1439 option forwardfor
1440 balance roundrobin
1441 # The servers will get 4 concurrent connections under low
1442 # loads, and 12 when there will be 10000 clients.
1443 server railsA 192.168.1.11:80 minconn 4 maxconn 12 check
1444 server railsB 192.168.1.12:80 minconn 4 maxconn 12 check
1445 server railsC 192.168.1.13:80 minconn 4 maxconn 12 check
1446 contimeout 60000
1447
1448