| ---------------------- |
| HAProxy |
| Configuration Manual |
| ---------------------- |
| version 2.8 |
| 2024/06/14 |
| |
| |
| This document covers the configuration language as implemented in the version |
| specified above. It does not provide any hints, examples, or advice. For such |
| documentation, please refer to the Reference Manual or the Architecture Manual. |
| The summary below is meant to help you find sections by name and navigate |
| through the document. |
| |
| Note to documentation contributors : |
| This document is formatted with 80 columns per line, with an even number of |
| spaces for indentation and without tabs. Please follow these rules strictly |
| so that it remains easily printable everywhere. If a line needs to be |
| printed verbatim and does not fit, please end each line with a backslash |
| ('\') and continue on next line, indented by two characters. It is also |
| sometimes useful to prefix all output lines (logs, console outputs) with 3 |
| closing angle brackets ('>>>') in order to emphasize the difference between |
| inputs and outputs when they may be ambiguous. If you add sections, |
| please update the summary below for easier searching. |
| |
| |
| Summary |
| ------- |
| |
| 1. Quick reminder about HTTP |
| 1.1. The HTTP transaction model |
| 1.2. HTTP request |
| 1.2.1. The request line |
| 1.2.2. The request headers |
| 1.3. HTTP response |
| 1.3.1. The response line |
| 1.3.2. The response headers |
| |
| 2. Configuring HAProxy |
| 2.1. Configuration file format |
| 2.2. Quoting and escaping |
| 2.3. Environment variables |
| 2.4. Conditional blocks |
| 2.5. Time format |
| 2.6. Size format |
| 2.7. Examples |
| |
| 3. Global parameters |
| 3.1. Process management and security |
| 3.2. Performance tuning |
| 3.3. Debugging |
| 3.4. Userlists |
| 3.5. Peers |
| 3.6. Mailers |
| 3.7. Programs |
| 3.8. HTTP-errors |
| 3.9. Rings |
| 3.10. Log forwarding |
| 3.11. HTTPClient tuning |
| |
| 4. Proxies |
| 4.1. Proxy keywords matrix |
| 4.2. Alphabetically sorted keywords reference |
| |
| 5. Bind and server options |
| 5.1. Bind options |
| 5.2. Server and default-server options |
| 5.3. Server DNS resolution |
| 5.3.1. Global overview |
| 5.3.2. The resolvers section |
| |
| 6. Cache |
| 6.1. Limitation |
| 6.2. Setup |
| 6.2.1. Cache section |
| 6.2.2. Proxy section |
| |
| 7. Using ACLs and fetching samples |
| 7.1. ACL basics |
| 7.1.1. Matching booleans |
| 7.1.2. Matching integers |
| 7.1.3. Matching strings |
| 7.1.4. Matching regular expressions (regexes) |
| 7.1.5. Matching arbitrary data blocks |
| 7.1.6. Matching IPv4 and IPv6 addresses |
| 7.2. Using ACLs to form conditions |
| 7.3. Fetching samples |
| 7.3.1. Converters |
| 7.3.2. Fetching samples from internal states |
| 7.3.3. Fetching samples at Layer 4 |
| 7.3.4. Fetching samples at Layer 5 |
| 7.3.5. Fetching samples from buffer contents (Layer 6) |
| 7.3.6. Fetching HTTP samples (Layer 7) |
| 7.3.7. Fetching samples for developers |
| 7.4. Pre-defined ACLs |
| |
| 8. Logging |
| 8.1. Log levels |
| 8.2. Log formats |
| 8.2.1. Default log format |
| 8.2.2. TCP log format |
| 8.2.3. HTTP log format |
| 8.2.4. HTTPS log format |
| 8.2.5. Error log format |
| 8.2.6. Custom log format |
| 8.3. Advanced logging options |
| 8.3.1. Disabling logging of external tests |
| 8.3.2. Logging before waiting for the session to terminate |
| 8.3.3. Raising log level upon errors |
| 8.3.4. Disabling logging of successful connections |
| 8.4. Timing events |
| 8.5. Session state at disconnection |
| 8.6. Non-printable characters |
| 8.7. Capturing HTTP cookies |
| 8.8. Capturing HTTP headers |
| 8.9. Examples of logs |
| |
| 9. Supported filters |
| 9.1. Trace |
| 9.2. HTTP compression |
| 9.3. Stream Processing Offload Engine (SPOE) |
| 9.4. Cache |
| 9.5. fcgi-app |
| 9.6. OpenTracing |
| 9.7. Bandwidth limitation |
| |
| 10. FastCGI applications |
| 10.1. Setup |
| 10.1.1. Fcgi-app section |
| 10.1.2. Proxy section |
| 10.1.3. Example |
| 10.2. Default parameters |
| 10.3. Limitations |
| |
| 11. Address formats |
| 11.1. Address family prefixes |
| 11.2. Socket type prefixes |
| 11.3. Protocol prefixes |
| |
| 1. Quick reminder about HTTP |
| ---------------------------- |
| |
| When HAProxy is running in HTTP mode, both the request and the response are |
| fully analyzed and indexed, thus it becomes possible to build matching criteria |
| on almost anything found in the contents. |
| |
| However, it is important to understand how HTTP requests and responses are |
| formed, and how HAProxy decomposes them. It will then become easier to write |
| correct rules and to debug existing configurations. |
| |
| |
| 1.1. The HTTP transaction model |
| ------------------------------- |
| |
| The HTTP protocol is transaction-driven. This means that each request will lead |
| to one and only one response. Traditionally, a TCP connection is established |
| from the client to the server, a request is sent by the client through the |
| connection, the server responds, and the connection is closed. A new request |
| will involve a new connection : |
| |
| [CON1] [REQ1] ... [RESP1] [CLO1] [CON2] [REQ2] ... [RESP2] [CLO2] ... |
| |
| In this mode, called the "HTTP close" mode, there are as many connection |
| establishments as there are HTTP transactions. Since the connection is closed |
| by the server after the response, the client does not need to know the content |
| length. |
| |
| Due to the transactional nature of the protocol, it was possible to improve it |
| to avoid closing a connection between two subsequent transactions. In this mode |
| however, it is mandatory that the server indicates the content length for each |
| response so that the client does not wait indefinitely. For this, a special |
| header is used: "Content-length". This mode is called the "keep-alive" mode : |
| |
| [CON] [REQ1] ... [RESP1] [REQ2] ... [RESP2] [CLO] ... |
| |
| Its advantages are a reduced latency between transactions, and less processing |
| power required on the server side. It is generally better than the close mode, |
| but not always because the clients often limit their concurrent connections to |
| a smaller value. |
| |
| Another improvement in the communications is the pipelining mode. It still uses |
| keep-alive, but the client does not wait for the first response to send the |
| second request. This is useful for fetching a large number of images composing |
| a page : |
| |
| [CON] [REQ1] [REQ2] ... [RESP1] [RESP2] [CLO] ... |
| |
| This can obviously have a tremendous benefit on performance because the network |
| latency is eliminated between subsequent requests. Many HTTP agents do not |
| correctly support pipelining since there is no way to associate a response with |
| the corresponding request in HTTP. For this reason, it is mandatory for the |
| server to reply in the exact same order as the requests were received. |
| |
| The next improvement is the multiplexed mode, as implemented in HTTP/2 and HTTP/3. |
| This time, each transaction is assigned a single stream identifier, and all |
| streams are multiplexed over an existing connection. Many requests can be sent in |
| parallel by the client, and responses can arrive in any order since they also |
| carry the stream identifier. |
| |
| |
| HTTP/3 is implemented over QUIC, itself implemented over UDP. QUIC solves the |
| head of line blocking at transport level by means of independently treated |
| streams. Indeed, when experiencing loss, an impacted stream does not affect the |
| other streams. QUIC also provides connection migration support but currently |
| haproxy does not support it. |
| |
| By default HAProxy operates in keep-alive mode with regards to persistent |
| connections: for each connection it processes each request and response, and |
| leaves the connection idle on both sides between the end of a response and the |
| start of a new request. When it receives HTTP/2 connections from a client, it |
| processes all the requests in parallel and leaves the connection idling, |
| waiting for new requests, just as if it was a keep-alive HTTP connection. |
| |
| HAProxy supports 4 connection modes : |
| - keep alive : all requests and responses are processed (default) |
| - tunnel : only the first request and response are processed, |
| everything else is forwarded with no analysis (deprecated). |
| - server close : the server-facing connection is closed after the response. |
| - close : the connection is actively closed after end of response. |
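| |
| As an illustration, the first three modes can usually be selected per proxy |
| using the "option http-keep-alive", "option http-server-close" and |
| "option httpclose" directives. Below is a minimal sketch (only one of these |
| options would normally be kept in a real configuration) : |
| |
| defaults |
| mode http |
| option http-keep-alive # default: keep both sides idle between requests |
| # option http-server-close # close only the server-facing connection |
| # option httpclose # actively close both sides after each response |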
| |
| |
| |
| 1.2. HTTP request |
| ----------------- |
| |
| First, let's consider this HTTP request : |
| |
| Line Contents |
| number |
| 1 GET /serv/login.php?lang=en&profile=2 HTTP/1.1 |
| 2 Host: www.mydomain.com |
| 3 User-agent: my small browser |
| 4 Accept: image/jpeg, image/gif |
| 5 Accept: image/png |
| |
| |
| 1.2.1. The request line |
| ----------------------- |
| |
| Line 1 is the "request line". It is always composed of 3 fields : |
| |
| - a METHOD : GET |
| - a URI : /serv/login.php?lang=en&profile=2 |
| - a version tag : HTTP/1.1 |
| |
| All of them are delimited by what the standard calls LWS (linear white spaces), |
| which are commonly spaces, but can also be tabs or line feeds/carriage returns |
| followed by spaces/tabs. The method itself cannot contain any colon (':') and |
| is limited to alphabetic letters. All those various combinations make it |
| desirable that HAProxy performs the splitting itself rather than leaving it to |
| the user to write a complex or inaccurate regular expression. |
| |
| The URI itself can have several forms : |
| |
| - A "relative URI" : |
| |
| /serv/login.php?lang=en&profile=2 |
| |
| It is a complete URL without the host part. This is generally what is |
| received by servers, reverse proxies and transparent proxies. |
| |
| - An "absolute URI", also called a "URL" : |
| |
| http://192.168.0.12:8080/serv/login.php?lang=en&profile=2 |
| |
| It is composed of a "scheme" (the protocol name followed by '://'), a host |
| name or address, optionally a colon (':') followed by a port number, then |
| a relative URI beginning at the first slash ('/') after the address part. |
| This is generally what proxies receive, but a server supporting HTTP/1.1 |
| must accept this form too. |
| |
| - a star ('*') : this form is only accepted in association with the OPTIONS |
| method and is not relayable. It is used to inquire about a next hop's |
| capabilities. |
| |
| - an address:port combination : 192.168.0.12:80 |
| This is used with the CONNECT method, which is used to establish TCP |
| tunnels through HTTP proxies, generally for HTTPS, but sometimes for |
| other protocols too. |
| |
| In a relative URI, two sub-parts are identified. The part before the question |
| mark is called the "path". It is typically the relative path to static objects |
| on the server. The part after the question mark is called the "query string". |
| It is mostly used with GET requests sent to dynamic scripts and is very |
| specific to the language, framework or application in use. |
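| |
| Both parts can be matched independently in ACLs. Below is a minimal, |
| hypothetical sketch using the "path_beg" criterion and the "query" sample |
| fetch (the backend names are made up) : |
| |
| frontend www |
| bind :80 |
| acl is_login path_beg /serv/login.php |
| acl lang_en query -m sub lang=en |
| use_backend login-farm if is_login lang_en |
| default_backend static-farm |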
| |
| HTTP/2 doesn't convey version information with the request, so the version is |
| assumed to be the same as the one of the underlying protocol (i.e. "HTTP/2"). |
| |
| |
| 1.2.2. The request headers |
| -------------------------- |
| |
| The headers start at the second line. They are composed of a name at the |
| beginning of the line, immediately followed by a colon (':'). Traditionally, |
| an LWS is added after the colon but that's not required. Then come the values. |
| Multiple identical headers may be folded into one single line, delimiting the |
| values with commas, provided that their order is respected. This is commonly |
| encountered in the "Cookie:" field. A header may span over multiple lines if |
| the subsequent lines begin with an LWS. In the example in 1.2, lines 4 and 5 |
| define a total of 3 values for the "Accept:" header. |
| |
| Contrary to a common misconception, header names are not case-sensitive, and |
| neither are their values when they refer to other header names (such as the |
| "Connection:" header). In HTTP/2, header names are always sent in lower case, |
| as can be seen when running in debug mode. Internally, all header names are |
| normalized to lower case so that HTTP/1.x and HTTP/2 use the exact same |
| representation, and they are sent as-is on the other side. This explains why an |
| HTTP/1.x request typed with camel case is delivered in lower case. |
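| |
| As an illustration, the two ACLs below are equivalent since the header name |
| is matched without regard to case (a minimal sketch; "example.com" is only a |
| placeholder) : |
| |
| acl host_example hdr(Host) -i example.com |
| acl host_example hdr(host) -i example.com |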
| |
| The end of the headers is indicated by the first empty line. People often say |
| that it's a double line feed, which is not exact, even if a double line feed |
| is one valid form of empty line. |
| |
| Fortunately, HAProxy takes care of all these complex combinations when indexing |
| headers, checking values and counting them, so there is no reason to worry |
| about the way they could be written, but it is important not to accuse an |
| application of being buggy if it does unusual, valid things. |
| |
| Important note: |
| As suggested by RFC7231, HAProxy normalizes headers by replacing line breaks |
| in the middle of headers by LWS in order to join multi-line headers. This |
| is necessary for proper analysis and helps less capable HTTP parsers to work |
| correctly and not to be fooled by such complex constructs. |
| |
| |
| 1.3. HTTP response |
| ------------------ |
| |
| An HTTP response looks very much like an HTTP request. Both are called HTTP |
| messages. Let's consider this HTTP response : |
| |
| Line Contents |
| number |
| 1 HTTP/1.1 200 OK |
| 2 Content-length: 350 |
| 3 Content-Type: text/html |
| |
| As a special case, HTTP supports so called "Informational responses" as status |
| codes 1xx. These messages are special in that they don't convey any part of the |
| response, they're just used as sort of a signaling message to ask a client to |
| continue to post its request for instance. In the case of a status 100 response |
| the requested information will be carried by the next non-100 response message |
| following the informational one. This implies that multiple responses may be |
| sent to a single request, and that this only works when keep-alive is enabled |
| (1xx messages are HTTP/1.1 only). HAProxy handles these messages and is able to |
| correctly forward and skip them, and only process the next non-100 response. As |
| such, these messages are neither logged nor transformed, unless explicitly |
| stated otherwise. Status 101 messages indicate that the protocol is changing |
| over the same connection and that HAProxy must switch to tunnel mode, just as |
| if a CONNECT had occurred. Then the Upgrade header would contain additional |
| information about the type of protocol the connection is switching to. |
| |
| |
| 1.3.1. The response line |
| ------------------------ |
| |
| Line 1 is the "response line". It is always composed of 3 fields : |
| |
| - a version tag : HTTP/1.1 |
| - a status code : 200 |
| - a reason : OK |
| |
| The status code is always 3-digit. The first digit indicates a general status : |
| - 1xx = informational message to be skipped (e.g. 100, 101) |
| - 2xx = OK, content is following (e.g. 200, 206) |
| - 3xx = OK, no content following (e.g. 302, 304) |
| - 4xx = error caused by the client (e.g. 401, 403, 404) |
| - 5xx = error caused by the server (e.g. 500, 502, 503) |
| |
| Please refer to RFC7231 for the detailed meaning of all such codes. The |
| "reason" field is just a hint, but is not parsed by clients. Anything can be |
| found there, but it's a common practice to respect the well-established |
| messages. It can be composed of one or multiple words, such as "OK", "Found", |
| or "Authentication Required". |
| |
| HAProxy may emit the following status codes by itself : |
| |
| Code When / reason |
| 200 access to stats page, and when replying to monitoring requests |
| 301 when performing a redirection, depending on the configured code |
| 302 when performing a redirection, depending on the configured code |
| 303 when performing a redirection, depending on the configured code |
| 307 when performing a redirection, depending on the configured code |
| 308 when performing a redirection, depending on the configured code |
| 400 for an invalid or too large request |
| 401 when an authentication is required to perform the action (when |
| accessing the stats page) |
| 403 when a request is forbidden by a "http-request deny" rule |
| 404 when the requested resource could not be found |
| 408 when the request timeout strikes before the request is complete |
| 410 when the requested resource is no longer available and will not |
| be available again |
| 500 when HAProxy encounters an unrecoverable internal error, such as a |
| memory allocation failure, which should never happen |
| 501 when HAProxy is unable to satisfy a client request because of an |
| unsupported feature |
| 502 when the server returns an empty, invalid or incomplete response, or |
| when an "http-response deny" rule blocks the response. |
| 503 when no server was available to handle the request, or in response to |
| monitoring requests which match the "monitor fail" condition |
| 504 when the response timeout strikes before the server responds |
| |
| The error 4xx and 5xx codes above may be customized (see "errorloc" in section |
| 4.2). |
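| |
| As an illustration, locally generated error pages may be replaced either by |
| a local file or by a redirect (a minimal sketch; the file path and URL are |
| hypothetical) : |
| |
| defaults |
| mode http |
| errorfile 503 /etc/haproxy/errors/503.http |
| errorloc 403 http://www.example.com/forbidden.html |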
| |
| |
| 1.3.2. The response headers |
| --------------------------- |
| |
| Response headers work exactly like request headers, and as such, HAProxy uses |
| the same parsing function for both. Please refer to paragraph 1.2.2 for more |
| details. |
| |
| |
| 2. Configuring HAProxy |
| ---------------------- |
| |
| 2.1. Configuration file format |
| ------------------------------ |
| |
| HAProxy's configuration process involves 3 major sources of parameters : |
| |
| - the arguments from the command-line, which always take precedence |
| - the configuration file(s), whose format is described here |
| - the running process's environment, in case some environment variables are |
| explicitly referenced |
| |
| The configuration file follows a fairly simple hierarchical format which obeys |
| a few basic rules: |
| |
| 1. a configuration file is an ordered sequence of statements |
| |
| 2. a statement is a single non-empty line before any unprotected "#" (hash) |
| |
| 3. a line is a series of tokens or "words" delimited by unprotected spaces or |
| tab characters |
| |
| 4. the first word or sequence of words of a line is one of the keywords or |
| keyword sequences listed in this document |
| |
| 5. all other words are arguments of the first one, some being well-known |
| keywords listed in this document, others being values, references to other |
| parts of the configuration, or expressions |
| |
| 6. certain keywords delimit a section inside which only a subset of keywords |
| are supported |
| |
| 7. a section ends at the end of a file or on a special keyword starting a new |
| section |
| |
| This is all that is needed to know to write a simple but reliable configuration |
| generator, but this is not enough to reliably parse any configuration nor to |
| figure out how to deal with certain corner cases. |
| |
| First, there are a few consequences of the rules above. Rules 6 and 7 imply that |
| the keywords used to define a new section are valid everywhere and cannot have |
| a different meaning in a specific section. These keywords are always a single |
| word (as opposed to a sequence of words), and traditionally the section that |
| follows them is designated using the same name. For example when speaking about |
| the "global section", it designates the section of configuration that follows |
| the "global" keyword. This convention is used a lot in error messages to help |
| locate the parts that need to be addressed. |
| |
| A number of sections create an internal object or configuration space, which |
| needs to be distinguished from other ones. In this case they will take an |
| extra word which will set the name of this particular section. For some of them |
| the section name is mandatory. For example "frontend foo" will create a new |
| section of type "frontend" named "foo". Usually a name is specific to its |
| section and two sections of different types may use the same name, but this is |
| not recommended as it tends to complicate configuration management. |
| |
| A direct consequence of rule 7 is that when multiple files are read at once, |
| each of them must start with a new section, and the end of each file will end |
| a section. A file cannot contain sub-sections nor end an existing section and |
| start a new one. |
| |
| Rule 1 mentioned that ordering matters. Indeed, some keywords create directives |
| that can be repeated multiple times to create ordered sequences of rules to be |
| applied in a certain order. For example "tcp-request" can be used to alternate |
| "accept" and "reject" rules on varying criteria. As such, a configuration file |
| processor must always preserve a section's ordering when editing a file. The |
| ordering of sections usually does not matter except for the global section |
| which must be placed before other sections, but it may be repeated if needed. |
| In addition, some identifiers may automatically be assigned to some |
| of the created objects (e.g. proxies), and by reordering sections, their |
| identifiers will change. These ones appear in the statistics for example. As |
| such, the configuration below will assign "foo" ID number 1 and "bar" ID number |
| 2, which will be swapped if the two sections are reversed: |
| |
| listen foo |
| bind :80 |
| |
| listen bar |
| bind :81 |
| |
| Another important point is that according to rules 2 and 3 above, empty lines, |
| spaces, tabs, and comments following an unprotected "#" character are not part |
| of the configuration as they are just used as delimiters. This implies that the |
| following configurations are strictly equivalent: |
| |
| global#this is the global section |
| daemon#daemonize |
| frontend foo |
| mode http # or tcp |
| |
| and: |
| |
| global |
| daemon |
| |
| # this is the public web frontend |
| frontend foo |
| mode http |
| |
| The common practice is to align to the left only the keyword that initiates a |
| new section, and indent (i.e. prepend a tab character or a few spaces) all |
| other keywords so that it's instantly visible that they belong to the same |
| section (as done in the second example above). Placing comments before a new |
| section helps the reader decide if it's the desired one. Leaving a blank line |
| at the end of a section also visually helps spotting the end when editing it. |
| |
| Tabs are very convenient for indent but they do not copy-paste well. If spaces |
| are used instead, it is recommended to avoid placing too many (2 to 4) so that |
| editing in the field doesn't become a burden with limited editors that do not |
| support automatic indent. |
| |
| In the early days it used to be common to see arguments split at fixed tab |
| positions because most keywords would not take more than two arguments. With |
| modern versions featuring complex expressions this practice does not stand |
| anymore, and is not recommended. |
| |
| |
| 2.2. Quoting and escaping |
| ------------------------- |
| |
| In modern configurations, some arguments require the use of some characters |
| that were previously considered as pure delimiters. In order to make this |
| possible, HAProxy supports character escaping by prepending a backslash ('\') |
| in front of the character to be escaped, weak quoting within double quotes |
| ('"') and strong quoting within single quotes ("'"). |
| |
| This is pretty similar to what is done in a number of programming languages and |
| very close to what is commonly encountered in Bourne shell. The principle is |
| the following: while the configuration parser cuts the lines into words, it |
| also takes care of quotes and backslashes to decide whether a character is a |
| delimiter or is the raw representation of this character within the current |
| word. The escape character is then removed, the quotes are removed, and the |
| remaining word is used as-is as a keyword or argument for example. |
| |
| If a backslash is needed in a word, it must either be escaped using itself |
| (i.e. double backslash) or be strongly quoted. |
| |
| Escaping outside quotes is achieved by preceding a special character by a |
| backslash ('\'): |
| |
| \ to mark a space and differentiate it from a delimiter |
| \# to mark a hash and differentiate it from a comment |
| \\ to use a backslash |
| \' to use a single quote and differentiate it from strong quoting |
| \" to use a double quote and differentiate it from weak quoting |
| |
| In addition, a few non-printable characters may be emitted using their usual |
| C-language representation: |
| |
| \n to insert a line feed (LF, character \x0a or ASCII 10 decimal) |
| \r to insert a carriage return (CR, character \x0d or ASCII 13 decimal) |
| \t to insert a tab (character \x09 or ASCII 9 decimal) |
| \xNN to insert character having ASCII code hex NN (e.g \x0a for LF). |
| |
| Weak quoting is achieved by placing double quotes ("") around the character |
| or sequence of characters to protect. Weak quoting prevents the interpretation |
| of: |
| |
| space or tab as a word separator |
| ' single quote as a strong quoting delimiter |
| # hash as a comment start |
| |
| Weak quoting permits the interpretation of environment variables (which are not |
| evaluated outside of quotes) by preceding them with a dollar sign ('$'). If a |
| dollar character is needed inside double quotes, it must be escaped using a |
| backslash. |
| |
| Strong quoting is achieved by placing single quotes ('') around the |
| character or sequence of characters to protect. Inside single quotes, nothing |
| is interpreted; this is the most efficient way to quote regular expressions. |
| |
| As a result, here is the matrix indicating how special characters can be |
| entered in different contexts (unprintable characters are replaced with their |
| name within angle brackets). Note that some characters that may only be |
| represented escaped have no possible representation inside single quotes, |
| hence their absence there: |
| |
| Character | Unquoted | Weakly quoted | Strongly quoted |
| -----------+---------------+-----------------------------+----------------- |
| <TAB> | \<TAB>, \x09 | "<TAB>", "\<TAB>", "\x09" | '<TAB>' |
| -----------+---------------+-----------------------------+----------------- |
| <LF> | \n, \x0a | "\n", "\x0a" | |
| -----------+---------------+-----------------------------+----------------- |
| <CR> | \r, \x0d | "\r", "\x0d" | |
| -----------+---------------+-----------------------------+----------------- |
| <SPC> | \<SPC>, \x20 | "<SPC>", "\<SPC>", "\x20" | '<SPC>' |
| -----------+---------------+-----------------------------+----------------- |
| " | \", \x22 | "\"", "\x22" | '"' |
| -----------+---------------+-----------------------------+----------------- |
| # | \#, \x23 | "#", "\#", "\x23" | '#' |
| -----------+---------------+-----------------------------+----------------- |
| $ | $, \$, \x24 | "\$", "\x24" | '$' |
| -----------+---------------+-----------------------------+----------------- |
| ' | \', \x27 | "'", "\'", "\x27" | |
| -----------+---------------+-----------------------------+----------------- |
| \ | \\, \x5c | "\\", "\x5c" | '\' |
| -----------+---------------+-----------------------------+----------------- |
| |
| Example: |
| # those are all strictly equivalent: |
| log-format %{+Q}o\ %t\ %s\ %{-Q}r |
| log-format "%{+Q}o %t %s %{-Q}r" |
| log-format '%{+Q}o %t %s %{-Q}r' |
| log-format "%{+Q}o %t"' %s %{-Q}r' |
| log-format "%{+Q}o %t"' %s'\ %{-Q}r |
| |
| There is one particular case where a second level of quoting or escaping may be |
| necessary. Some keywords take arguments within parenthesis, sometimes delimited |
| by commas. These arguments are commonly integers or predefined words, but when |
| they are arbitrary strings, it may be required to perform a separate level of |
| escaping to disambiguate the characters that belong to the argument from the |
| characters that are used to delimit the arguments themselves. A pretty common |
| case is the "regsub" converter. It takes a regular expression in argument, and |
| if a closing parenthesis is needed inside, it will need to have its own |
| quotes. |
| |
| The keyword argument parser is exactly the same as the top-level one regarding |
| quotes, except that the \#, \$, and \xNN escapes are not processed. But what is |
| not always obvious is that the delimiters used inside must first be escaped or |
| quoted so that they are not resolved at the top level. |
| |
| Let's take this example making use of the "regsub" converter which takes 3 |
| arguments, one regular expression, one replacement string and one set of flags: |
| |
| # replace all occurrences of "foo" with "blah" in the path: |
| http-request set-path %[path,regsub(foo,blah,g)] |
| |
| Here no special quoting was necessary. But if now we want to replace either |
| "foo" or "bar" with "blah", we'll need the regular expression "(foo|bar)". We |
| cannot write: |
| |
| http-request set-path %[path,regsub((foo|bar),blah,g)] |
| |
| because we would like the string to be cut like this: |
| |
| http-request set-path %[path,regsub((foo|bar),blah,g)] |
| |---------|----|-| |
| arg1 _/ / / |
| arg2 __________/ / |
| arg3 ______________/ |
| |
| but actually what is passed is a string between the opening and closing |
| parenthesis then garbage: |
| |
| http-request set-path %[path,regsub((foo|bar),blah,g)] |
| |--------|--------| |
| arg1=(foo|bar _/ / |
| trailing garbage _________/ |
| |
| The obvious solution here seems to be that the closing parenthesis needs to be |
| quoted, but alone this will not work, because as mentioned above, quotes are |
| processed by the top-level parser which will resolve them before processing |
| this word: |
| |
| http-request set-path %[path,regsub("(foo|bar)",blah,g)] |
| ------------ -------- ---------------------------------- |
| word1 word2 word3=%[path,regsub((foo|bar),blah,g)] |
| |
| So we didn't change anything for the argument parser at the second level which |
| still sees a truncated regular expression as the only argument, and garbage at |
| the end of the string. By escaping the quotes they will be passed unmodified to |
| the second level: |
| |
| http-request set-path %[path,regsub(\"(foo|bar)\",blah,g)] |
| ------------ -------- ------------------------------------ |
| word1 word2 word3=%[path,regsub("(foo|bar)",blah,g)] |
| |---------||----|-| |
| arg1=(foo|bar) _/ / / |
| arg2=blah ___________/ / |
| arg3=g _______________/ |
| |
| Another approach consists in using single quotes outside the whole string and |
| double quotes inside (so that the double quotes are not stripped again): |
| |
| http-request set-path '%[path,regsub("(foo|bar)",blah,g)]' |
| ------------ -------- ---------------------------------- |
| word1 word2 word3=%[path,regsub("(foo|bar)",blah,g)] |
| |---------||----|-| |
| arg1=(foo|bar) _/ / / |
| arg2 ___________/ / |
| arg3 _______________/ |
| |
| When using regular expressions, it can happen that the dollar ('$') character |
| appears in the expression or that a backslash ('\') is used in the replacement |
| string. In this case these ones will also be processed inside the double quotes |
| thus single quotes are preferred (or double escaping). Example: |
| |
| http-request set-path '%[path,regsub("^/(here)(/|$)","my/\1",g)]' |
| ------------ -------- ----------------------------------------- |
| word1 word2 word3=%[path,regsub("^/(here)(/|$)","my/\1",g)] |
| |-------------| |-----||-| |
| arg1=(here)(/|$) _/ / / |
| arg2=my/\1 ________________/ / |
| arg3 ______________________/ |
| |
| Remember that backslashes are not escape characters within single quotes and |
| that the whole word above is already protected against them using the single |
| quotes. Conversely, if double quotes had been used around the whole expression, |
| both the dollar character and the backslashes would have been resolved at top |
| level, breaking the argument contents at the second level. |
| |
| Unfortunately, since single quotes can't be escaped inside of strong quoting, |
| if you need to include single quotes in your argument, you will need to escape |
| or quote them twice. There are a few ways to do this: |
| |
| http-request set-var(txn.foo) str("\\'foo\\'") |
| http-request set-var(txn.foo) str(\"\'foo\'\") |
| http-request set-var(txn.foo) str(\\\'foo\\\') |
| |
| When in doubt, simply do not use quotes anywhere, and start to place single or |
| double quotes around arguments that require a comma or a closing parenthesis, |
| and think about escaping these quotes using a backslash if the string contains |
| a dollar or a backslash. Again, this is pretty similar to what is used under |
| a Bourne shell when double-escaping a command passed to "eval". For API writers |
| the best is probably to place escaped quotes around each and every argument, |
| regardless of their contents. Users will probably find that using single quotes |
| around the whole expression and double quotes around each argument provides |
| more readable configurations. |
| |
| |
| 2.3. Environment variables |
| -------------------------- |
| |
| HAProxy's configuration supports environment variables. Those variables are |
| interpreted only within double quotes. Variables are expanded during the |
| configuration parsing. Variable names must be preceded by a dollar ("$") and |
| optionally enclosed with braces ("{}") similarly to what is done in Bourne |
| shell. Variable names can contain alphanumerical characters or the character |
| underscore ("_") but should not start with a digit. If the variable contains a |
| list of several values separated by spaces, it can be expanded as individual |
| arguments by enclosing the variable with braces and appending the suffix '[*]' |
| before the closing brace. It is also possible to specify a default value to |
| use when the variable is not set, by appending that value after a dash '-' |
| next to the variable name. Note that the default value only replaces non |
| existing variables, not empty ones. |
| |
| Example: |
| |
| bind "fd@${FD_APP1}" |
| |
| log "${LOCAL_SYSLOG-127.0.0.1}:514" local0 notice # send to local server |
| |
| user "$HAPROXY_USER" |
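| |
| The '[*]' expansion can be illustrated with the hypothetical sketch below, |
| assuming OFFICE_NETS="192.168.1.0/24 10.0.0.0/8" is set in the environment |
| (the variable name is made up); the variable expands into two separate |
| arguments of the "src" match : |
| |
| http-request deny unless { src "${OFFICE_NETS[*]}" } |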
| |
| Some variables are defined by HAProxy, they can be used in the configuration |
| file, or could be inherited by a program (See 3.7. Programs): |
| |
| * HAPROXY_LOCALPEER: defined at the startup of the process which contains the |
| name of the local peer. (See "-L" in the management guide.) |
| |
| * HAPROXY_CFGFILES: list of the configuration files loaded by HAProxy, |
| separated by semicolons. Can be useful in the case you specified a |
| directory. |
| |
| * HAPROXY_HTTP_LOG_FMT: contains the value of the default HTTP log format as |
| defined in section 8.2.3 "HTTP log format". It can be used to override the |
| default log format without having to copy the whole original definition. |
| |
| Example: |
| # Add the rule that gave the final verdict to the log |
| log-format "${HAPROXY_TCP_LOG_FMT} lr=last_rule_file:last_rule_line" |
| |
| * HAPROXY_HTTPS_LOG_FMT: similar to HAPROXY_HTTP_LOG_FMT but for HTTPS log |
| format as defined in section 8.2.4 "HTTPS log format". |
| |
| * HAPROXY_TCP_LOG_FMT: similar to HAPROXY_HTTP_LOG_FMT but for TCP log format |
| as defined in section 8.2.2 "TCP log format". |
| |
| * HAPROXY_MWORKER: In master-worker mode, this variable is set to 1. |
| |
| * HAPROXY_CLI: configured listener addresses of the stats socket for every |
| process, separated by semicolons. |
| |
| * HAPROXY_MASTER_CLI: In master-worker mode, listener addresses of the master |
| CLI, separated by semicolons. |
| |
| * HAPROXY_STARTUP_VERSION: contains the version used to start. In master-worker |
| mode this is the version which was used to start the master, even after |
| updating the binary and reloading. |
| |
| * HAPROXY_BRANCH: contains the HAProxy branch version (such as "2.8"). It does |
| not contain the full version number. It can be useful in case of migration |
| if resources (such as maps or certificates) are in a path containing the |
| branch number. |
| |
| In addition, some pseudo-variables are internally resolved and may be used as |
| regular variables. Pseudo-variables always start with a dot ('.'), and are the |
| only ones where the dot is permitted. The current list of pseudo-variables is: |
| |
| * .FILE: the name of the configuration file currently being parsed. |
| |
| * .LINE: the line number of the configuration file currently being parsed, |
| starting at one. |
| |
| * .SECTION: the name of the section currently being parsed, or its type if the |
| section doesn't have a name (e.g. "global"), or an empty string before the |
| first section. |
| |
| These variables are resolved at the location where they are parsed. For example |
| if a ".LINE" variable is used in a "log-format" directive located in a defaults |
| section, its line number will be resolved before parsing and compiling the |
| "log-format" directive, so this same line number will be reused by subsequent |
| proxies. |
| |
| This way it is possible to emit information to help locate a rule in variables, |
| logs, error statuses, health checks, header values, or even to use line numbers |
| to name some config objects like servers for example. |
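| |
| As an illustration, the hypothetical sketch below embeds the parsing |
| location into a response header to ease debugging of generated |
| configurations (the header name is made up) : |
| |
| http-response set-header X-Debug-Rule "${.FILE}:${.LINE}" |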
| |
| See also "external-check command" for other variables. |
| |
| |
| 2.4. Conditional blocks |
| ----------------------- |
| |
| It may sometimes be convenient to be able to conditionally enable or disable |
| some arbitrary parts of the configuration, for example to enable/disable SSL or |
| ciphers, enable or disable some pre-production listeners without modifying the |
| configuration, or adjust the configuration's syntax to support two distinct |
| versions of HAProxy during a migration. HAProxy brings a set of nestable |
| preprocessor-like directives which allow some blocks of text to be included |
| or ignored. These directives must be placed on their own line and act on the |
| lines that follow them. Two of them support an expression, the other ones only |
| switch to an alternate block or end a current level. The 4 following directives |
| are defined to form conditional blocks: |
| |
| - .if <condition> |
| - .elif <condition> |
| - .else |
| - .endif |
| |
| The ".if" directive nests a new level, ".elif" stays at the same level, ".else" |
| as well, and ".endif" closes a level. Each ".if" must be terminated by a |
| matching ".endif". The ".elif" may only be placed after ".if" or ".elif", and |
| there is no limit to the number of ".elif" that may be chained. There may be |
| only one ".else" per ".if" and it must always be after the ".if" or the last |
| ".elif" of a block. |
| |
| Comments may be placed on the same line if needed after a '#', they will be |
| ignored. The directives are tokenized like other configuration directives, and |
| as such it is possible to use environment variables in conditions. |
| |
| Conditions can also be evaluated on startup with the -cc parameter. |
| See "3. Starting HAProxy" in the management doc. |
| |
| The conditions are either an empty string (which then returns false), or an |
| expression made of any combination of: |
| |
| - the integer zero ('0'), always returns "false" |
| - a non-zero integer (e.g. '1'), always returns "true". |
| - a predicate optionally followed by argument(s) in parenthesis. |
| - a condition placed between a pair of parenthesis '(' and ')' |
| - an exclamation mark ('!') preceding any of the non-empty elements above, |
| and which will negate its status. |
| - expressions combined with a logical AND ('&&'), which will be evaluated |
| from left to right until one returns false |
| - expressions combined with a logical OR ('||'), which will be evaluated |
| from right to left until one returns true |
| |
| Note that like in other languages, the AND operator has precedence over the OR |
| operator, so that "A && B || C && D" evaluates as "(A && B) || (C && D)". |
| |
| The list of currently supported predicates is the following: |
| |
| - defined(<name>) : returns true if an environment variable <name> |
| exists, regardless of its contents |
| |
| - feature(<name>) : returns true if feature <name> is listed as present |
| in the features list reported by "haproxy -vv" |
| (which means a <name> appears after a '+') |
| |
| - streq(<str1>,<str2>) : returns true only if the two strings are equal |
| - strneq(<str1>,<str2>) : returns true only if the two strings differ |
| - strstr(<str1>,<str2>) : returns true only if the second string is found in the first one |
| |
| - version_atleast(<ver>): returns true if the current haproxy version is |
| at least as recent as <ver> otherwise false. The |
| version syntax is the same as shown by "haproxy -v" |
| and missing components are assumed as being zero. |
| |
| - version_before(<ver>) : returns true if the current haproxy version is |
| strictly older than <ver> otherwise false. The |
| version syntax is the same as shown by "haproxy -v" |
| and missing components are assumed as being zero. |
| |
| - enabled(<opt>) : returns true if the option <opt> is enabled at |
| run-time. Only a subset of options are supported: |
| POLL, EPOLL, KQUEUE, EVPORTS, SPLICE, |
| GETADDRINFO, REUSEPORT, FAST-FORWARD, |
| SERVER-SSL-VERIFY-NONE |
| |
| Example: |
| |
| .if defined(HAPROXY_MWORKER) |
| listen mwcli_px |
| bind :1111 |
| ... |
| .endif |
| |
| .if strneq("$SSL_ONLY",yes) |
| bind :80 |
| .endif |
| |
| .if streq("$WITH_SSL",yes) |
| .if feature(OPENSSL) |
| bind :443 ssl crt ... |
| .endif |
| .endif |
| |
| .if feature(OPENSSL) && (streq("$WITH_SSL",yes) || streq("$SSL_ONLY",yes)) |
| bind :443 ssl crt ... |
| .endif |
| |
| .if version_atleast(2.4-dev19) |
| profiling.memory on |
| .endif |
| |
| .if !feature(OPENSSL) |
| .alert "SSL support is mandatory" |
| .endif |
| |
| Four other directives are provided to report some status: |
| |
| - .diag "message" : emit this message only when in diagnostic mode (-dD) |
| - .notice "message" : emit this message at level NOTICE |
| - .warning "message" : emit this message at level WARNING |
| - .alert "message" : emit this message at level ALERT |
| |
| Messages emitted at level WARNING may cause the process to fail to start if |
| the "zero-warning" option is set. Messages emitted at level ALERT will always |
| cause a fatal error. These can be used to detect some inappropriate conditions and |
| provide advice to the user. |
| |
| Example: |
| |
| .if "${A}" |
| .if "${B}" |
| .notice "A=1, B=1" |
| .elif "${C}" |
| .notice "A=1, B=0, C=1" |
| .elif "${D}" |
| .warning "A=1, B=0, C=0, D=1" |
| .else |
| .alert "A=1, B=0, C=0, D=0" |
| .endif |
| .else |
| .notice "A=0" |
| .endif |
| |
| .diag "WTA/2021-05-07: replace 'redirect' with 'return' after switch to 2.4" |
| http-request redirect location /goaway if ABUSE |
| |
| |
| 2.5. Time format |
| ---------------- |
| |
| Some parameters involve values representing time, such as timeouts. These |
| values are generally expressed in milliseconds (unless explicitly stated |
| otherwise) but may be expressed in any other unit by suffixing the unit to the |
| numeric value. It is important to consider this because it will not be repeated |
| for every keyword. Supported units are : |
| |
| - us : microseconds. 1 microsecond = 1/1000000 second |
| - ms : milliseconds. 1 millisecond = 1/1000 second. This is the default. |
| - s : seconds. 1s = 1000ms |
| - m : minutes. 1m = 60s = 60000ms |
| - h : hours. 1h = 60m = 3600s = 3600000ms |
| - d : days. 1d = 24h = 1440m = 86400s = 86400000ms |
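| |
| For example, the following three lines are strictly equivalent (a minimal |
| sketch shown with the "timeout client" keyword) : |
| |
| timeout client 50000 |
| timeout client 50000ms |
| timeout client 50s |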
| |
| |
| 2.6. Size format |
| ---------------- |
| |
| Some parameters involve values representing size, such as bandwidth limits. |
| These values are generally expressed in bytes (unless explicitly stated |
| otherwise) but may be expressed in any other unit by suffixing the unit to the |
| numeric value. It is important to consider this because it will not be repeated |
| for every keyword. Supported units are case insensitive : |
| |
| - k : kilobytes. 1 kilobyte = 1024 bytes |
| - m : megabytes. 1 megabyte = 1048576 bytes |
| - g : gigabytes. 1 gigabyte = 1073741824 bytes |
| |
| Both time and size formats require integers; decimal notation is not allowed. |
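| |
| For example, the following two lines set the same value (a minimal sketch |
| using the global "tune.bufsize" keyword) : |
| |
| tune.bufsize 16384 |
| tune.bufsize 16k |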
| |
| |
| 2.7. Examples |
| ------------- |
| |
| # Simple configuration for an HTTP proxy listening on port 80 on all |
| # interfaces and forwarding requests to a single backend "servers" with a |
| # single server "server1" listening on 127.0.0.1:8000 |
| global |
| daemon |
| maxconn 256 |
| |
| defaults |
| mode http |
| timeout connect 5000ms |
| timeout client 50000ms |
| timeout server 50000ms |
| |
| frontend http-in |
| bind *:80 |
| default_backend servers |
| |
| backend servers |
| server server1 127.0.0.1:8000 maxconn 32 |
| |
| |
| # The same configuration defined with a single listen block. Shorter but |
| # less expressive, especially in HTTP mode. |
| global |
| daemon |
| maxconn 256 |
| |
| defaults |
| mode http |
| timeout connect 5000ms |
| timeout client 50000ms |
| timeout server 50000ms |
| |
| listen http-in |
| bind *:80 |
| server server1 127.0.0.1:8000 maxconn 32 |
| |
| |
| Assuming haproxy is in $PATH, test these configurations in a shell with: |
| |
| $ sudo haproxy -f configuration.conf -c |
| |
| |
| 3. Global parameters |
| -------------------- |
| |
| Parameters in the "global" section are process-wide and often OS-specific. They |
| are generally set once and for all and do not need to be changed once correct. |
| Some of them have command-line equivalents. |
| |
| The following keywords are supported in the "global" section : |
| |
| * Process management and security |
| - 51degrees-allow-unmatched |
| - 51degrees-cache-size |
| - 51degrees-data-file |
| - 51degrees-difference |
| - 51degrees-drift |
| - 51degrees-property-name-list |
| - 51degrees-property-separator |
| - 51degrees-use-performance-graph |
| - 51degrees-use-predictive-graph |
| - ca-base |
| - chroot |
| - cluster-secret |
| - cpu-map |
| - crt-base |
| - daemon |
| - default-path |
| - description |
| - deviceatlas-json-file |
| - deviceatlas-log-level |
| - deviceatlas-properties-cookie |
| - deviceatlas-separator |
| - expose-experimental-directives |
| - external-check |
| - fd-hard-limit |
| - gid |
| - grace |
| - group |
| - h1-accept-payload-with-any-method |
| - h1-case-adjust |
| - h1-case-adjust-file |
| - h2-workaround-bogus-websocket-clients |
| - hard-stop-after |
| - harden.reject-privileged-ports.tcp |
| - harden.reject-privileged-ports.quic |
| - insecure-fork-wanted |
| - insecure-setuid-wanted |
| - issuers-chain-path |
| - localpeer |
| - log |
| - log-send-hostname |
| - log-tag |
| - lua-load |
| - lua-load-per-thread |
| - lua-prepend-path |
| - mworker-max-reloads |
| - nbthread |
| - node |
| - numa-cpu-mapping |
| - pidfile |
| - pp2-never-send-local |
| - presetenv |
| - prealloc-fd |
| - resetenv |
| - set-dumpable |
| - set-var |
| - setenv |
| - ssl-default-bind-ciphers |
| - ssl-default-bind-ciphersuites |
| - ssl-default-bind-client-sigalgs |
| - ssl-default-bind-curves |
| - ssl-default-bind-options |
| - ssl-default-bind-sigalgs |
| - ssl-default-server-ciphers |
| - ssl-default-server-ciphersuites |
| - ssl-default-server-options |
| - ssl-dh-param-file |
| - ssl-propquery |
| - ssl-provider |
| - ssl-provider-path |
| - ssl-server-verify |
| - ssl-skip-self-issued-ca |
| - stats |
| - strict-limits |
| - uid |
| - ulimit-n |
| - unix-bind |
| - unsetenv |
| - user |
| - wurfl-cache-size |
| - wurfl-data-file |
| - wurfl-information-list |
| - wurfl-information-list-separator |
| |
| * Performance tuning |
| - busy-polling |
| - max-spread-checks |
| - maxcompcpuusage |
| - maxcomprate |
| - maxconn |
| - maxconnrate |
| - maxpipes |
| - maxsessrate |
| - maxsslconn |
| - maxsslrate |
| - maxzlibmem |
| - no-memory-trimming |
| - noepoll |
| - noevports |
| - nogetaddrinfo |
| - nokqueue |
| - nopoll |
| - noreuseport |
| - nosplice |
| - profiling.tasks |
| - server-state-base |
| - server-state-file |
| - spread-checks |
| - ssl-engine |
| - ssl-mode-async |
| - tune.buffers.limit |
| - tune.buffers.reserve |
| - tune.bufsize |
| - tune.comp.maxlevel |
| - tune.disable-fast-forward |
| - tune.fail-alloc |
| - tune.fd.edge-triggered |
| - tune.h2.be.glitches-threshold |
| - tune.h2.be.initial-window-size |
| - tune.h2.be.max-concurrent-streams |
| - tune.h2.fe.glitches-threshold |
| - tune.h2.fe.initial-window-size |
| - tune.h2.fe.max-concurrent-streams |
| - tune.h2.fe.max-total-streams |
| - tune.h2.header-table-size |
| - tune.h2.initial-window-size |
| - tune.h2.max-concurrent-streams |
| - tune.h2.max-frame-size |
| - tune.http.cookielen |
| - tune.http.logurilen |
| - tune.http.maxhdr |
| - tune.idle-pool.shared |
| - tune.idletimer |
| - tune.lua.forced-yield |
| - tune.lua.maxmem |
| - tune.lua.service-timeout |
| - tune.lua.session-timeout |
| - tune.lua.task-timeout |
| - tune.lua.log.loggers |
| - tune.lua.log.stderr |
| - tune.maxaccept |
| - tune.maxpollevents |
| - tune.maxrewrite |
| - tune.memory.hot-size |
| - tune.pattern.cache-size |
| - tune.peers.max-updates-at-once |
| - tune.pipesize |
| - tune.pool-high-fd-ratio |
| - tune.pool-low-fd-ratio |
| - tune.quic.frontend.conn-tx-buffers.limit |
| - tune.quic.frontend.max-idle-timeout |
| - tune.quic.frontend.max-streams-bidi |
| - tune.quic.max-frame-loss |
| - tune.quic.reorder-ratio |
| - tune.quic.retry-threshold |
| - tune.quic.socket-owner |
| - tune.rcvbuf.client |
| - tune.rcvbuf.server |
| - tune.recv_enough |
| - tune.runqueue-depth |
| - tune.sched.low-latency |
| - tune.sndbuf.client |
| - tune.sndbuf.server |
| - tune.stick-counters |
| - tune.ssl.cachesize |
| - tune.ssl.capture-buffer-size |
| - tune.ssl.capture-cipherlist-size (deprecated) |
| - tune.ssl.default-dh-param |
| - tune.ssl.force-private-cache |
| - tune.ssl.hard-maxrecord |
| - tune.ssl.keylog |
| - tune.ssl.lifetime |
| - tune.ssl.maxrecord |
| - tune.ssl.ssl-ctx-cache-size |
| - tune.ssl.ocsp-update.maxdelay |
| - tune.ssl.ocsp-update.mindelay |
| - tune.vars.global-max-size |
| - tune.vars.proc-max-size |
| - tune.vars.reqres-max-size |
| - tune.vars.sess-max-size |
| - tune.vars.txn-max-size |
| - tune.zlib.memlevel |
| - tune.zlib.windowsize |
| |
| * Debugging |
| - anonkey |
| - quiet |
| - zero-warning |
| |
| * HTTPClient |
| - httpclient.resolvers.disabled |
| - httpclient.resolvers.id |
| - httpclient.resolvers.prefer |
| - httpclient.retries |
| - httpclient.ssl.ca-file |
| - httpclient.ssl.verify |
| - httpclient.timeout.connect |
| |
| 3.1. Process management and security |
| ------------------------------------ |
| |
| 51degrees-data-file <file path> |
| The path of the 51Degrees data file to provide device detection services. The |
| file should be unzipped and accessible by HAProxy with relevant permissions. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES. |
| |
| 51degrees-property-name-list [<string> ...] |
| A list of 51Degrees property names to be loaded from the dataset. A full list |
| of names is available on the 51Degrees website: |
| https://51degrees.com/resources/property-dictionary |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES. |
| |
| 51degrees-property-separator <char> |
| A char that will be appended to every property value in a response header |
| containing 51Degrees results. If not set, it defaults to ','. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES. |
| |
| 51degrees-cache-size <number> |
| Sets the size of the 51Degrees converter cache to <number> entries. This |
| is an LRU cache which remembers previous device detections and their results. |
| By default, this cache is disabled. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES. |
| |
| 51degrees-use-performance-graph { on | off } |
| Enables ('on') or disables ('off') the use of the performance graph in |
| the detection process. The default value depends on the 51Degrees library. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES and 51DEGREES_VER=4. |
| |
| 51degrees-use-predictive-graph { on | off } |
| Enables ('on') or disables ('off') the use of the predictive graph in |
| the detection process. The default value depends on the 51Degrees library. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES and 51DEGREES_VER=4. |
| |
| 51degrees-drift <number> |
| Sets the drift value that a detection can allow. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES and 51DEGREES_VER=4. |
| |
| 51degrees-difference <number> |
| Sets the difference value that a detection can allow. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES and 51DEGREES_VER=4. |
| |
| 51degrees-allow-unmatched { on | off } |
| Enables ('on') or disables ('off') the use of unmatched nodes in the |
| detection process. The default value depends on the 51Degrees library. |
| |
| Please note that this option is only available when HAProxy has been |
| compiled with USE_51DEGREES and 51DEGREES_VER=4. |
| |
| ca-base <dir> |
| Assigns a default directory to fetch SSL CA certificates and CRLs from when a |
| relative path is used with "ca-file", "ca-verify-file" or "crl-file" |
| directives. Absolute locations specified in "ca-file", "ca-verify-file" and |
| "crl-file" prevail and ignore "ca-base". |
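| |
| Example (a minimal sketch; the directory and certificate names are |
| hypothetical) : |
| global |
| ca-base /etc/haproxy/certs |
| # a later "ca-file company-ca.pem" then resolves to |
| # /etc/haproxy/certs/company-ca.pem |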
| |
| chroot <jail dir> |
| Changes current directory to <jail dir> and performs a chroot() there before |
| dropping privileges. This increases the security level in case an unknown |
| vulnerability would be exploited, since it would make it very hard for the |
| attacker to exploit the system. This only works when the process is started |
| with superuser privileges. It is important to ensure that <jail dir> is both |
| empty and non-writable to anyone. |
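| |
| Example (a minimal sketch; the jail directory is hypothetical and must be |
| empty and non-writable) : |
| global |
| chroot /var/empty |
| user haproxy |
| group haproxy |
| daemon |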
| |
| close-spread-time <time> |
| Define a time window during which idle connections and active connections |
| closing is spread in case of soft-stop. After a SIGUSR1 is received and the |
| grace period is over (if any), the idle connections will all be closed at |
| once if this option is not set, and active HTTP or HTTP2 connections will be |
| ended after the next request is received, either by appending a "Connection: |
| close" line to the HTTP response, or by sending a GOAWAY frame in case of |
| HTTP2. When this option is set, connection closing will be spread over this |
| set <time>. |
| If the close-spread-time is set to "infinite", active connection closing |
| during a soft-stop will be disabled. The "Connection: close" header will not |
| be added to HTTP responses (or GOAWAY for HTTP2) anymore and idle connections |
| will only be closed once their timeout is reached (based on the various |
| timeouts set in the configuration). |
| |
| Arguments : |
| <time> is a time window (by default in milliseconds) during which |
| connection closing will be spread during a soft-stop operation, or |
| "infinite" if active connection closing should be disabled. |
| |
| It is recommended to set this setting to a value lower than the one used in |
| the "hard-stop-after" option if this one is used, so that all connections |
| have a chance to gracefully close before the process stops. |
| |
| See also: grace, hard-stop-after, idle-close-on-response |
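| |
| Example (a minimal sketch; the values are arbitrary) : |
| global |
| close-spread-time 30s # spread closings over 30 seconds on soft-stop |
| hard-stop-after 60s # then force the process to quit after 60 seconds |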
| |
| cluster-secret <secret> |
| Define an ASCII string secret shared between several nodes belonging to the |
| same cluster. It can be used for several purposes. It is at least used to |
| derive stateless reset tokens for all the QUIC connections instantiated by |
| this process. This is also the case to derive secrets used to encrypt Retry |
| tokens. |
| |
| If this parameter is not set, a random value will be selected on process |
| startup. This allows features which rely on it to be used, albeit with some |
| limitations. |
| |
| cpu-map [auto:]<thread-group>[/<thread-set>] <cpu-set>[,...] [...] |
| On some operating systems, it is possible to bind a thread group or a thread |
| to a specific CPU set. This means that the designated threads will never run |
| on other CPUs. The "cpu-map" directive specifies CPU sets for individual |
| threads or thread groups. The first argument is a thread group range, |
| optionally followed by a thread set. These ranges have the following format: |
| |
| all | odd | even | number[-[number]] |
| |
| <number> must be a number between 1 and 32 or 64, depending on the machine's |
| word size. Any group IDs above 'thread-groups' and any thread IDs above the |
| machine's word size are ignored. All thread numbers are relative to the group |
| they belong to. It is possible to specify a range with two such numbers |
| delimited by a dash ('-'). It also is possible to specify all threads at once |
| using "all", only odd numbers using "odd" or even numbers using "even", just |
| like with the "thread" bind directive. The second and forthcoming arguments |
| are CPU sets. Each CPU set is either a unique number starting at 0 for the |
| first CPU or a range with two such numbers delimited by a dash ('-'). These |
| CPU numbers and ranges may be repeated by delimiting them with commas or by |
| passing more ranges as new arguments on the same line. Outside of Linux and |
| BSD operating systems, there may be a limitation on the maximum CPU index to |
| either 31 or 63. Multiple "cpu-map" directives may be specified, but each |
| "cpu-map" directive will replace the previous ones when they overlap. |
| |
| Ranges can be partially defined. The higher bound can be omitted. In such |
| case, it is replaced by the corresponding maximum value, 32 or 64 depending |
| on the machine's word size. |
| |
| The prefix "auto:" can be added before the thread set to let HAProxy |
| automatically bind a set of threads to a CPU by incrementing threads and |
| CPU sets. To be valid, both sets must have the same size. No matter the |
| declaration order of the CPU sets, it will be bound from the lowest to the |
| highest bound. Having both a group and a thread range with the "auto:" |
| prefix is not supported. Only one range is supported, the other one must be |
| a fixed number. |
| |
| Note that group ranges are supported for historical reasons. Nowadays, a lone |
| number designates a thread group and must be 1 if thread-groups are not used, |
| and specifying a thread range or number requires to prepend "1/" in front of |
| it if thread groups are not used. Finally, "1" is strictly equivalent to |
| "1/all" and designates all threads in the group. |
| |
| Examples: |
| cpu-map 1/all 0-3 # bind all threads of the first group on the |
| # first 4 CPUs |
| |
| cpu-map 1/1- 0- # will be replaced by "cpu-map 1/1-64 0-63" |
| # or "cpu-map 1/1-32 0-31" depending on the machine's |
| # word size. |
| |
| # all these lines bind thread 1 to the cpu 0, the thread 2 to cpu 1 |
| # and so on. |
| cpu-map auto:1/1-4 0-3 |
| cpu-map auto:1/1-4 0-1 2-3 |
| cpu-map auto:1/1-4 3 2 1 0 |
| cpu-map auto:1/1-4 3,2,1,0 |
| |
| # bind each thread to exactly one CPU using all/odd/even keyword |
| cpu-map auto:1/all 0-63 |
| cpu-map auto:1/even 0-31 |
| cpu-map auto:1/odd 32-63 |
| |
| # invalid cpu-map because thread and CPU sets have different sizes. |
| cpu-map auto:1/1-4 0 # invalid |
| cpu-map auto:1/1 0-3 # invalid |
| |
| # map 40 threads of those 4 groups to individual CPUs |
| cpu-map auto:1/1-10 0-9 |
| cpu-map auto:2/1-10 10-19 |
| cpu-map auto:3/1-10 20-29 |
| cpu-map auto:4/1-10 30-39 |
| |
| # Map 80 threads to one physical socket and 80 others to another socket |
| # without forcing assignment. These are split into 4 groups since no |
| # group may have more than 64 threads. |
| cpu-map 1/1-40 0-39,80-119 # node0, siblings 0 & 1 |
| cpu-map 2/1-40 0-39,80-119 |
| cpu-map 3/1-40 40-79,120-159 # node1, siblings 0 & 1 |
| cpu-map 4/1-40 40-79,120-159 |
| |
| |
| crt-base <dir> |
| Assigns a default directory to fetch SSL certificates from when a relative |
| path is used with "crtfile" or "crt" directives. Absolute locations specified |
| prevail and ignore "crt-base". |
| |
| daemon |
| Makes the process fork into background. This is the recommended mode of |
| operation. It is equivalent to the command line "-D" argument. It can be |
| disabled by the command line "-db" argument. This option is ignored in |
| systemd mode. |
| |
| default-path { current | config | parent | origin <path> } |
| By default HAProxy loads all files designated by a relative path from the |
| location the process is started in. In some circumstances it might be |
| desirable to force all relative paths to start from a different location |
| just as if the process was started from such locations. This is what this |
| directive is made for. Technically it will perform a temporary chdir() to |
| the designated location while processing each configuration file, and will |
| return to the original directory after processing each file. It takes an |
| argument indicating the policy to use when loading files whose path does |
| not start with a slash ('/'): |
| - "current" indicates that all relative files are to be loaded from the |
| directory the process is started in ; this is the default. |
| |
| - "config" indicates that all relative files should be loaded from the |
| directory containing the configuration file. More specifically, if the |
| configuration file contains a slash ('/'), the longest part up to the |
| last slash is used as the directory to change to, otherwise the current |
| directory is used. This mode is convenient to bundle maps, errorfiles, |
| certificates and Lua scripts together as relocatable packages. When |
| multiple configuration files are loaded, the directory is updated for |
| each of them. |
| |
| - "parent" indicates that all relative files should be loaded from the |
| parent of the directory containing the configuration file. More |
| specifically, if the configuration file contains a slash ('/'), ".." is |
| appended to the longest part up to the last slash and the result is used as |
| the directory to change to, otherwise the directory is "..". This mode is |
| convenient to bundle maps, errorfiles, certificates and Lua scripts |
| together as relocatable packages, but where each part is located in a |
| different subdirectory (e.g. "config/", "certs/", "maps/", ...). |
| |
| - "origin" indicates that all relative files should be loaded from the |
| designated (mandatory) path. This may be used to ease management of |
| different HAProxy instances running in parallel on a system, where each |
| instance uses a different prefix but where the rest of the sections are |
| made easily relocatable. |
| |
| Each "default-path" directive instantly replaces any previous one and will |
| possibly result in switching to a different directory. While this should |
| always result in the desired behavior, it is really not a good practice to |
| use multiple default-path directives, and if used, the policy ought to remain |
| consistent across all configuration files. |
| |
| Warning: some configuration elements such as maps or certificates are |
| uniquely identified by their configured path. By using a relocatable layout, |
| it becomes possible for several of them to end up with the same unique name, |
| making it difficult to update them at run time, especially when multiple |
| configuration files are loaded from different directories. It is essential to |
| observe a strict collision-free file naming scheme before adopting relative |
| paths. A robust approach could consist in prefixing all files names with |
| their respective site name, or in doing so at the directory level. |
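| |
| For illustration only (the paths and names below are hypothetical), a |
| relocatable bundle whose configuration file is /opt/site1/etc/haproxy.cfg |
| could reference its maps and certificates relative to that directory: |
| |
| global |
| default-path config |
| |
| frontend www |
| bind :443 ssl crt certs/site1.pem |
| http-request deny if { src -f maps/blocklist.lst } |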
| |
| description <text> |
| Add a text that describes the instance. |
| |
| Please note that it is required to escape certain characters (# for example) |
| and this text is inserted into an HTML page so you should avoid using |
| "<" and ">" characters. |
| |
| deviceatlas-json-file <path> |
| Sets the path of the DeviceAtlas JSON data file to be loaded by the API. |
| The path must point to a valid JSON file accessible by the HAProxy process. |
| |
| deviceatlas-log-level <value> |
| Sets the level of information returned by the API. This directive is |
| optional and set to 0 by default if not set. |
| |
| deviceatlas-properties-cookie <name> |
| Sets the client cookie's name used for the detection if the DeviceAtlas |
| Client-side component was used during the request. This directive is optional |
| and set to DAPROPS by default if not set. |
| |
| deviceatlas-separator <char> |
| Sets the character separator for the API properties results. This directive |
| is optional and set to | by default if not set. |
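| |
| Example (for illustration only; the JSON file path is a placeholder and the |
| other values simply restate the defaults mentioned above): |
| |
| global |
| deviceatlas-json-file /usr/share/deviceatlas/deviceatlas.json |
| deviceatlas-log-level 0 |
| deviceatlas-properties-cookie DAPROPS |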
| |
| expose-experimental-directives |
| This statement must appear before using directives tagged as experimental or |
| the config file will be rejected. |
| |
| external-check [preserve-env] |
| Allows the use of an external agent to perform health checks. This is |
| disabled by default as a security precaution, and even when enabled, checks |
| may still fail unless "insecure-fork-wanted" is enabled as well. If the |
| program launched makes use of a setuid executable (it should really not), |
| you may also need to set "insecure-setuid-wanted" in the global section. |
| By default, the checks start with a clean environment which only contains |
| variables defined in the "external-check" command in the backend section. It |
| may sometimes be desirable to preserve the environment though, for example |
| when complex scripts retrieve their extra paths or information there. This |
| can be done by appending the "preserve-env" keyword. In this case however it |
| is strongly advised not to run it setuid nor as a privileged user, as this |
| exposes the check program to potential attacks. See "option external-check", |
| "insecure-fork-wanted" and "insecure-setuid-wanted" for extra details. |
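| |
| Example (a minimal sketch; the script path, backend and server are purely |
| illustrative): |
| |
| global |
| external-check |
| insecure-fork-wanted |
| |
| backend app |
| option external-check |
| external-check command /usr/local/bin/check_app.sh |
| server srv1 192.168.0.10:80 check |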
| |
| fd-hard-limit <number> |
| Sets an upper bound to the maximum number of file descriptors that the |
| process will use, regardless of system limits. While "ulimit-n" and "maxconn" |
| may be used to enforce a value, when they are not set, the process will be |
| limited to the hard limit of the RLIMIT_NOFILE setting as reported by |
| "ulimit -n -H". But some modern operating systems are now allowing extremely |
| large values here (in the order of 1 billion), which will consume way too |
| much RAM for regular usage. The fd-hard-limit setting is provided to enforce |
| a possibly lower bound to this limit. This means that it will always respect |
| the system-imposed limits when they are below <number> but the specified |
| value will be used if system-imposed limits are higher. By default |
| fd-hard-limit is set to 1048576. This default can be changed with the |
| DEFAULT_MAXFD compile-time variable, which acts as a cap on the (kernel) |
| system limit when the RLIMIT_NOFILE hard limit is extremely large. Setting |
| fd-hard-limit in the global section overrides the value provided via |
| DEFAULT_MAXFD at build time. In the example below, no other setting is |
| specified and the maxconn value will automatically adapt to the lower of |
| "fd-hard-limit" and the RLIMIT_NOFILE limit: |
| |
| global |
| # use as many FDs as possible but no more than 50000 |
| fd-hard-limit 50000 |
| |
| See also: ulimit-n, maxconn |
| |
| gid <number> |
| Changes the process's group ID to <number>. It is recommended that the group |
| ID is dedicated to HAProxy or to a small set of similar daemons. HAProxy must |
| be started with a user belonging to this group, or with superuser privileges. |
| Note that if HAProxy is started from a user having supplementary groups, it |
| will only be able to drop these groups if started with superuser privileges. |
| See also "group" and "uid". |
| |
| grace <time> |
| Defines a delay between SIGUSR1 and real soft-stop. |
| |
| Arguments : |
| <time> is an extra delay (by default in milliseconds) after receipt of the |
| SIGUSR1 signal that will be waited for before proceeding with the |
| soft-stop operation. |
| |
| This is used for compatibility with legacy environments where the haproxy |
| process needs to be stopped but some external components need to detect the |
| status before listeners are unbound. The principle is that the internal |
| "stopping" variable (which is reported by the "stopping" sample fetch |
| function) will be turned to true, but listeners will continue to accept |
| connections undisturbed, until the delay expires, after which the regular |
| soft-stop will proceed. This must not be used with processes that are |
| reloaded, or this will prevent the old process from unbinding, and may |
| prevent the new one from starting, or simply cause trouble. |
| |
| Example: |
| |
| global |
| grace 10s |
| |
| # Returns 200 OK until stopping is set via SIGUSR1 |
| frontend ext-check |
| bind :9999 |
| monitor-uri /ext-check |
| monitor fail if { stopping } |
| |
| Please note that a more flexible and durable approach for an orchestration |
| system would instead consist in setting a global variable from the CLI, |
| using that variable to respond to external checks, then sending the SIGUSR1 |
| signal after a delay. |
| |
| Example: |
| |
| # Returns 200 OK until proc.stopping is set to non-zero. May be done |
| # from HTTP using set-var(proc.stopping) or from the CLI using: |
| # > set var proc.stopping int(1) |
| frontend ext-check |
| bind :9999 |
| monitor-uri /ext-check |
| monitor fail if { var(proc.stopping) -m int gt 0 } |
| |
| See also: hard-stop-after, monitor |
| |
| group <group name> |
| Similar to "gid" but uses the GID of group name <group name> from /etc/group. |
| See also "gid" and "user". |
| |
| h1-accept-payload-with-any-method |
| Does not reject HTTP/1.0 GET/HEAD/DELETE requests with a payload. |
| |
| While it is explicitly allowed in HTTP/1.1, HTTP/1.0 is not clear on this |
| point and some old servers don't expect any payload and never look for body |
| length (via Content-Length or Transfer-Encoding headers). It means that some |
| intermediaries may properly handle the payload for HTTP/1.0 GET/HEAD/DELETE |
| requests, while some others may totally ignore it. That may lead to security |
| issues because a request smuggling attack is possible. Thus, by default, |
| HAProxy rejects HTTP/1.0 GET/HEAD/DELETE requests with a payload. |
| |
| However, it may be an issue with some old clients. In this case, this global |
| option may be set. |
| |
| h1-case-adjust <from> <to> |
| Defines the case adjustment to apply, when enabled, to the header name |
| <from>, to change it to <to> before sending it to HTTP/1 clients or |
| servers. <from> must be in lower case, and <from> and <to> must not differ |
| except for their case. It may be repeated if several header names need to be |
| adjusted. Duplicate entries are not allowed. If a lot of header names have to |
| be adjusted, it might be more convenient to use "h1-case-adjust-file". |
| Please note that no transformation will be applied unless "option |
| h1-case-adjust-bogus-client" or "option h1-case-adjust-bogus-server" is |
| specified in a proxy. |
| |
| There is no standard case for header names because, as stated in RFC7230, |
| they are case-insensitive. So applications must handle them in a case- |
| insensitive manner. But some bogus applications violate the standards and |
| erroneously rely on the cases most commonly used by browsers. This problem |
| becomes critical with HTTP/2 because all header names must be exchanged in |
| lower case, and HAProxy follows the same convention. All header names are |
| sent in lower case to clients and servers, regardless of the HTTP version. |
| |
| Applications which fail to properly process requests or responses may require |
| to temporarily use such workarounds to adjust header names sent to them for |
| the time it takes the application to be fixed. Please note that an |
| application which requires such workarounds might be vulnerable to content |
| smuggling attacks and must absolutely be fixed. |
| |
| Example: |
| global |
| h1-case-adjust content-length Content-Length |
| |
| See "h1-case-adjust-file", "option h1-case-adjust-bogus-client" and |
| "option h1-case-adjust-bogus-server". |
| |
| h1-case-adjust-file <hdrs-file> |
| Defines a file containing a list of key/value pairs used to adjust the case |
| of some header names before sending them to HTTP/1 clients or servers. The |
| file <hdrs-file> must contain 2 header names per line. The first one must be |
| in lower case and both must not differ except for their case. Lines which |
| start with '#' are ignored, just like empty lines. Leading and trailing tabs |
| and spaces are stripped. Duplicate entries are not allowed. Please note that |
| no transformation will be applied unless "option h1-case-adjust-bogus-client" |
| or "option h1-case-adjust-bogus-server" is specified in a proxy. |
| |
| If this directive is repeated, only the last one will be processed. It is an |
| alternative to the directive "h1-case-adjust" if a lot of header names need |
| to be adjusted. Please read the risks associated with using this. |
| |
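| Example (illustrative only; the file path is arbitrary): |
| |
| global |
| h1-case-adjust-file /etc/haproxy/hdr-case.map |
| |
| where /etc/haproxy/hdr-case.map could contain, one pair per line: |
| |
| content-length Content-Length |
| x-requested-with X-Requested-With |
| |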
| See "h1-case-adjust", "option h1-case-adjust-bogus-client" and |
| "option h1-case-adjust-bogus-server". |
| |
| h2-workaround-bogus-websocket-clients |
| This disables the announcement of the support for h2 websockets to clients. |
| This can be used to overcome clients which have issues when implementing the |
| relatively fresh RFC8441, such as Firefox 88. To allow clients to |
| automatically downgrade to http/1.1 for the websocket tunnel, specify h2 |
| support on the bind line using "alpn" without an explicit "proto" keyword. If |
| this statement was previously activated, this can be disabled by prefixing |
| the keyword with "no". |
| |
| hard-stop-after <time> |
| Defines the maximum time allowed to perform a clean soft-stop. |
| |
| Arguments : |
| <time> is the maximum time (by default in milliseconds) for which the |
| instance will remain alive when a soft-stop is received via the |
| SIGUSR1 signal. |
| |
| This may be used to ensure that the instance will quit even if connections |
| remain opened during a soft-stop (for example with long timeouts for a proxy |
| in tcp mode). It applies both in TCP and HTTP mode. |
| |
| Example: |
| global |
| hard-stop-after 30s |
| |
| See also: grace |
| |
| harden.reject-privileged-ports.tcp { on | off } |
| harden.reject-privileged-ports.quic { on | off } |
| Toggle per-protocol protection which forbids communication with clients which |
| use privileged ports as their source port. This range of ports is defined |
| according to RFC 6335. Protection is inactive by default on both protocols. |
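| |
| Example (illustrative; enables the protection for both protocols): |
| |
| global |
| harden.reject-privileged-ports.tcp on |
| harden.reject-privileged-ports.quic on |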
| |
| insecure-fork-wanted |
| By default HAProxy tries hard to prevent any thread and process creation |
| after it starts. Doing so is particularly important when using Lua files of |
| uncertain origin, and when experimenting with development versions which may |
| still contain bugs whose exploitability is uncertain. And generally speaking |
| it's good hygiene to make sure that no unexpected background activity can be |
| triggered by traffic. But this prevents external checks from working, and may |
| break some very specific Lua scripts which actively rely on the ability to |
| fork. This option is there to disable this protection. Note that it is a bad |
| idea to disable it, as a vulnerability in a library or within HAProxy itself |
| will be easier to exploit once disabled. In addition, forking from Lua or |
| anywhere else is not reliable as the forked process may randomly embed a lock |
| set by another thread and never manage to finish an operation. As such it is |
| highly recommended that this option is never used and that any workload |
| requiring such a fork be reconsidered and moved to a safer solution (such as |
| agents instead of external checks). This option supports the "no" prefix to |
| disable it. |
| |
| insecure-setuid-wanted |
| HAProxy doesn't need to call executables at run time (except when using |
| external checks which are strongly recommended against), and is even expected |
| to isolate itself into an empty chroot. As such, there basically is no valid |
| reason to allow a setuid executable to be called without the user being fully |
| aware of the risks. In a situation where HAProxy would need to call external |
| checks and/or disable chroot, exploiting a vulnerability in a library or in |
| HAProxy itself could lead to the execution of an external program. On Linux |
| it is possible to lock the process so that any setuid bit present on such an |
| executable is ignored. This significantly reduces the risk of privilege |
| escalation in such a situation. This is what HAProxy does by default. In case |
| this causes a problem to an external check (for example one which would need |
| the "ping" command), then it is possible to disable this protection by |
| explicitly adding this directive in the global section. If enabled, it is |
| possible to turn it back off by prefixing it with the "no" keyword. |
| |
| issuers-chain-path <dir> |
| Assigns a directory to load certificate chain for issuer completion. All |
| files must be in PEM format. For certificates loaded with "crt" or "crt-list", |
| if certificate chain is not included in PEM (also commonly known as |
| intermediate certificate), HAProxy will complete the chain if the issuer of |
| certificate corresponds to the first certificate of the chain loaded with |
| "issuers-chain-path". |
| A "crt" file with PrivateKey+Certificate+IntermediateCA2+IntermediateCA1 |
| could be replaced with PrivateKey+Certificate. HAProxy will complete the |
| chain if a file with IntermediateCA2+IntermediateCA1 is present in |
| "issuers-chain-path" directory. All other certificates with the same issuer |
| will share the chain in memory. |
| |
| The OCSP features are not able to use the completed chain from |
| 'issuers-chain-path', please use an additional .issuer file if you want to |
| achieve OCSP stapling. |
| |
| limited-quic |
| This setting must be used to explicitly enable the QUIC listener bindings when |
| haproxy is compiled against a TLS/SSL stack without QUIC support, typically |
| OpenSSL. It has no effect when haproxy is compiled against a TLS/SSL stack |
| with QUIC support, quictls for instance. Note that QUIC 0-RTT is not supported |
| when this setting is set. |
| |
| localpeer <name> |
| Sets the local instance's peer name. It will be ignored if the "-L" |
| command line argument is specified or if used after "peers" section |
| definitions. In such cases, a warning message will be emitted during |
| the configuration parsing. |
| |
| This option will also set the HAPROXY_LOCALPEER environment variable. |
| See also "-L" in the management guide and "peers" section below. |
| |
| log <address> [len <length>] [format <format>] [sample <ranges>:<sample_size>] |
| <facility> [max level [min level]] |
| Adds a global syslog server. Several global servers can be defined. They |
| will receive logs for starts and exits, as well as all logs from proxies |
| configured with "log global". |
| |
| <address> can be one of: |
| |
| - An IPv4 address optionally followed by a colon and a UDP port. If |
| no port is specified, 514 is used by default (the standard syslog |
| port). |
| |
| - An IPv6 address followed by a colon and optionally a UDP port. If |
| no port is specified, 514 is used by default (the standard syslog |
| port). |
| |
| - A filesystem path to a datagram UNIX domain socket, keeping in mind |
| considerations for chroot (be sure the path is accessible inside |
| the chroot) and uid/gid (be sure the path is appropriately |
| writable). |
| |
| - A file descriptor number in the form "fd@<number>", which may point |
| to a pipe, terminal, or socket. In this case unbuffered logs are used |
| and one writev() call per log is performed. This is a bit expensive |
| but acceptable for most workloads. Messages sent this way will not be |
| truncated but may be dropped, in which case the DroppedLogs counter |
| will be incremented. The writev() call is atomic even on pipes for |
| messages up to PIPE_BUF size, which POSIX recommends to be at least |
| 512 and which is 4096 bytes on most modern operating systems. Any |
| larger message may be interleaved with messages from other processes. |
| Exceptionally for debugging purposes the file descriptor may also be |
| directed to a file, but doing so will significantly slow HAProxy down |
| as non-blocking calls will be ignored. Also there will be no way to |
| purge nor rotate this file without restarting the process. Note that |
| the configured syslog format is preserved, so the output is suitable |
| for use with a TCP syslog server. See also the "short" and "raw" |
| format below. |
| |
| - "stdout" / "stderr", which are respectively aliases for "fd@1" and |
| "fd@2", see above. |
| |
| - A ring buffer in the form "ring@<name>", which will correspond to an |
| in-memory ring buffer accessible over the CLI using the "show events" |
| command, which will also list existing rings and their sizes. Such |
| buffers are lost on reload or restart but when used as a complement |
| this can help troubleshooting by having the logs instantly available. |
| |
| You may want to reference some environment variables in the address |
| parameter, see section 2.3 about environment variables. |
| |
| <length> is an optional maximum line length. Log lines larger than this value |
| will be truncated before being sent. The reason is that syslog |
| servers act differently on log line length. All servers support the |
| default value of 1024, but some servers simply drop larger lines |
| while others do log them. If a server supports long lines, it may |
| make sense to set this value here in order to avoid truncating long |
| lines. Similarly, if a server drops long lines, it is preferable to |
| truncate them before sending them. Accepted values are 80 to 65535 |
| inclusive. The default value of 1024 is generally fine for all |
| standard usages. Some specific cases of long captures or |
| JSON-formatted logs may require larger values. You may also need to |
| increase "tune.http.logurilen" if your request URIs are truncated. |
| |
| <format> is the log format used when generating syslog messages. It may be |
| one of the following : |
| |
| local Analogous to the rfc3164 syslog message format except that the |
| hostname field is stripped. This is the default. |
| Note: option "log-send-hostname" switches the default to |
| rfc3164. |
| |
| rfc3164 The RFC3164 syslog message format. |
| (https://tools.ietf.org/html/rfc3164) |
| |
| rfc5424 The RFC5424 syslog message format. |
| (https://tools.ietf.org/html/rfc5424) |
| |
| priority A message containing only a level plus syslog facility between |
| angle brackets such as '<63>', followed by the text. The PID, |
| date, time, process name and system name are omitted. This is |
| designed to be used with a local log server. |
| |
| short A message containing only a level between angle brackets such as |
| '<3>', followed by the text. The PID, date, time, process name |
| and system name are omitted. This is designed to be used with a |
| local log server. This format is compatible with what the systemd |
| logger consumes. |
| |
| timed A message containing only a level between angle brackets such as |
| '<3>', followed by ISO date and by the text. The PID, process |
| name and system name are omitted. This is designed to be |
| used with a local log server. |
| |
| iso A message containing only the ISO date, followed by the text. |
| The PID, process name and system name are omitted. This is |
| designed to be used with a local log server. |
| |
| raw A message containing only the text. The level, PID, date, time, |
| process name and system name are omitted. This is designed to be |
| used in containers or during development, where the severity only |
| depends on the file descriptor used (stdout/stderr). |
| |
| <ranges> A list of comma-separated ranges to identify the logs to sample. |
| This is used to balance the load of the logs to send to the log |
| server. The limits of the ranges cannot be null. They are numbered |
| from 1. The size or period (in number of logs) of the sample must be |
| set with <sample_size> parameter. |
| |
| <sample_size> |
| The size of the sample in number of logs to consider when balancing |
| their logging loads. It is used to balance the load of the logs to |
| send to the syslog server. This size must be greater or equal to the |
| maximum of the high limits of the ranges. |
| (see also <ranges> parameter). |
| |
| <facility> must be one of the 24 standard syslog facilities : |
| |
| kern user mail daemon auth syslog lpr news |
| uucp cron auth2 ftp ntp audit alert cron2 |
| local0 local1 local2 local3 local4 local5 local6 local7 |
| |
| Note that the facility is ignored for the "short" and "raw" |
| formats, but still required as a positional field. It is |
| recommended to use "daemon" in this case to make it clear that |
| it's only supposed to be used locally. |
| |
| An optional level can be specified to filter outgoing messages. By default, |
| all messages are sent. If a maximum level is specified, only messages with a |
| severity at least as important as this level will be sent. An optional minimum |
| level can be specified. If it is set, logs emitted with a more severe level |
| than this one will be capped to this level. This is used to avoid sending |
| "emerg" messages on all terminals on some default syslog configurations. |
| Eight levels are known : |
| |
| emerg alert crit err warning notice info debug |
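| |
| Example (for illustration only; the addresses, ring name, facilities and |
| levels below are arbitrary placeholders, and "ring@myring" assumes a ring |
| section named "myring" exists): |
| |
| global |
| log stdout format raw daemon # local debugging on the terminal |
| log 127.0.0.1:514 local0 notice # only send important events |
| log ring@myring local1 # also feed an in-memory ring buffer |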
| |
| log-send-hostname [<string>] |
| Sets the hostname field in the syslog header. If optional "string" parameter |
| is set the header is set to the string contents, otherwise uses the hostname |
| of the system. Generally used if one is not relaying logs through an |
| intermediate syslog server or for simply customizing the hostname printed in |
| the logs. |
| |
| log-tag <string> |
| Sets the tag field in the syslog header to this string. It defaults to the |
| program name as launched from the command line, which usually is "haproxy". |
| Sometimes it can be useful to differentiate between multiple processes |
| running on the same host. See also the per-proxy "log-tag" directive. |
| |
| lua-load <file> [ <arg1> [ <arg2> [ ... ] ] ] |
| This global directive loads and executes a Lua file in the shared context |
| that is visible to all threads. Any variable set in such a context is visible |
| from any thread. This is the easiest and recommended way to load Lua programs |
| but it will not scale well if a lot of Lua calls are performed, as only one |
| thread may be running on the global state at a time. A program loaded this |
| way will always see 0 in the "core.thread" variable. This directive can be |
| used multiple times. |
| |
| args are available in the lua file using the code below in the body of the |
| file. Do not forget that Lua arrays start at index 1. A "local" variable |
| declared in a file is available in the entire file and not available in |
| other files. |
| |
| local args = table.pack(...) |
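| |
| For instance (illustrative; the file path and argument values are arbitrary |
| placeholders), arguments passed on the directive are then retrieved with |
| the line above: |
| |
| global |
| lua-load /etc/haproxy/hello.lua 10 "foo" |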
| |
| lua-load-per-thread <file> [ <arg1> [ <arg2> [ ... ] ] ] |
| This global directive loads and executes a Lua file into each started thread. |
| Any global variable has a thread-local visibility so that each thread could |
| see a different value. As such it is strongly recommended not to use global |
| variables in programs loaded this way. An independent copy is loaded and |
| initialized for each thread, everything is done sequentially and in the |
| thread's numeric order from 1 to nbthread. If some operations need to be |
| performed only once, the program should check the "core.thread" variable to |
| figure what thread is being initialized. Programs loaded this way will run |
| concurrently on all threads and will be highly scalable. This is the |
| recommended way to load simple functions that register sample-fetches, |
| converters, actions or services once it is certain the program doesn't depend |
| on global variables. For the sake of simplicity, the directive is available |
| even if only one thread is used and even if threads are disabled (in which |
| case it will be equivalent to lua-load). This directive can be used multiple |
| times. |
| |
| See lua-load for usage of args. |
| |
| lua-prepend-path <string> [<type>] |
| Prepends the given string followed by a semicolon to Lua's package.<type> |
| variable. |
| <type> must either be "path" or "cpath". If <type> is not given it defaults |
| to "path". |
| |
| Lua's paths are semicolon delimited lists of patterns that specify how the |
| `require` function attempts to find the source file of a library. Question |
| marks (?) within a pattern will be replaced by module name. The path is |
| evaluated left to right. This implies that paths that are prepended later |
| will be checked earlier. |
| |
| As an example, by specifying the following paths: |
| |
| lua-prepend-path /usr/share/haproxy-lua/?/init.lua |
| lua-prepend-path /usr/share/haproxy-lua/?.lua |
| |
| When `require "example"` is called, Lua will first attempt to load the |
| /usr/share/haproxy-lua/example.lua script; if that does not exist, |
| /usr/share/haproxy-lua/example/init.lua will be attempted, followed by the |
| default paths if that does not exist either. |
| |
| See https://www.lua.org/pil/8.1.html for the details within the Lua |
| documentation. |
| |
| master-worker [no-exit-on-failure] |
| Master-worker mode. It is equivalent to the command line "-W" argument. |
| |
| This mode will launch a "master" which will fork a "worker" after reading the |
| configuration to process the traffic. The master is used as a process manager |
| which will monitor the "workers". |
| |
| Using this mode, you can reload HAProxy directly by sending a SIGUSR2 signal |
| to the master. Reloading will ask the master to read the configuration again |
| and fork a new worker. The previous worker will be kept until the end of its |
| jobs. |
| |
| The master-worker mode is compatible either with the foreground or daemon |
| mode. |
| |
| By default, if a worker exits with a bad return code, in the case of a |
| segfault for example, all workers will be killed, and the master will leave. |
| It is convenient to combine this behavior with Restart=on-failure in a |
| systemd unit file in order to relaunch the whole process. If you don't want |
| this behavior, you must use the keyword "no-exit-on-failure". |
| |
| See also "-W" in the management guide. |
| |
| mworker-max-reloads <number> |
| In master-worker mode, this option limits the number of times a worker can |
| survive a reload. If the worker did not leave after a reload, once its |
| number of reloads is greater than this number, the worker will receive a |
| SIGTERM. This option helps keep the number of workers under control. |
| See also "show proc" in the Management Guide. |
| |
| nbthread <number> |
| This setting is only available when support for threads was built in. It |
| makes HAProxy run on <number> threads. "nbthread" also works when HAProxy is |
| started in foreground. On some platforms supporting CPU affinity, the default |
| "nbthread" value is automatically set to the number of CPUs the process is |
| bound to upon startup. This means that the thread count can easily be |
| adjusted from the calling process using commands like "taskset" or "cpuset". |
| Otherwise, this value defaults to 1. The default value is reported in the |
| output of "haproxy -vv". |
| |
| no-quic |
| Disable the QUIC transport protocol. All the QUIC listeners will still be |
| created, but they will not bind their addresses. Hence, no QUIC traffic will |
| be processed by haproxy. See also "quic_enabled" sample fetch. |
| |
| numa-cpu-mapping |
| If running on a NUMA-aware platform, HAProxy inspects on startup the CPU |
| topology of the machine. If a multi-socket machine is detected, the affinity |
| is automatically calculated to run on the CPUs of a single node. This is done |
| in order to not suffer from the performance penalties caused by the |
| inter-socket bus latency. However, if the applied binding is non-optimal on a |
| particular architecture, it can be disabled with the statement 'no |
| numa-cpu-mapping'. This automatic binding is also not applied if a nbthread |
| statement is present in the configuration, or the affinity of the process is |
| already specified, for example via the 'cpu-map' directive or the taskset |
| utility. |
| |
| pidfile <pidfile> |
| Writes the PIDs of all daemons into file <pidfile> in daemon mode, or the |
| PID of the master process into file <pidfile> in master-worker mode. This |
| option is equivalent to the "-p" command line argument. The file must be |
| accessible to the user starting the process. See also "daemon" and |
| "master-worker". |
| |
| pp2-never-send-local |
| A bug in the PROXY protocol v2 implementation was present in HAProxy up to |
| version 2.1, causing it to emit a PROXY command instead of a LOCAL command |
| for health checks. This is particularly minor but confuses some servers' |
| logs. Sadly, the bug was discovered very late and revealed that some servers |
| which possibly only tested their PROXY protocol implementation against |
| HAProxy fail to properly handle the LOCAL command, and permanently remain in |
| the "down" state when HAProxy checks them. When this happens, it is possible |
| to enable this global option to revert to the older (bogus) behavior for the |
| time it takes to contact the affected components' vendors and get them fixed. |
| This option is disabled by default and acts on all servers having the |
| "send-proxy-v2" statement. |
| |
| presetenv <name> <value> |
| Sets environment variable <name> to value <value>. If the variable exists, it |
| is NOT overwritten. The changes immediately take effect so that the next line |
| in the configuration file sees the new value. See also "setenv", "resetenv", |
| and "unsetenv". |
| |
| prealloc-fd |
| Performs a one-time open of the maximum file descriptor which results in a |
| pre-allocation of the kernel's data structures. This prevents short pauses |
| when nbthread>1 and HAProxy opens a file descriptor which requires the kernel |
| to expand its data structures. |
| |
| resetenv [<name> ...] |
| Removes all environment variables except the ones specified in argument. It |
| allows to use a clean controlled environment before setting new values with |
| setenv or unsetenv. Please note that some internal functions may make use of |
| some environment variables, such as time manipulation functions, but also |
| OpenSSL or even external checks. This must be used with extreme care and only |
| after complete validation. The changes immediately take effect so that the |
| next line in the configuration file sees the new environment. See also |
| "setenv", "presetenv", and "unsetenv". |
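| |
| Example (illustrative; variable names and values are placeholders). The |
| environment is first reduced to PATH only, then a new variable is defined |
| and referenced later in the configuration: |
| |
| global |
| resetenv PATH |
| setenv BACKEND_ADDR 192.168.0.10 |
| |
| backend app |
| server srv1 "${BACKEND_ADDR}":80 |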
| |
| server-state-base <directory> |
| Specifies the directory prefix to be prepended in front of all servers state |
| file names which do not start with a '/'. See also "server-state-file", |
| "load-server-state-from-file" and "server-state-file-name". |
| |
| server-state-file <file> |
| Specifies the path to the file containing state of servers. If the path starts |
| with a slash ('/'), it is considered absolute, otherwise it is considered |
| relative to the directory specified using "server-state-base" (if set) or to |
| the current directory. Before reloading HAProxy, it is possible to save the |
| servers' current state using the stats command "show servers state". The |
| output of this command must be written in the file pointed by <file>. When |
| starting up, before handling traffic, HAProxy will read, load and apply state |
| for each server found in the file and available in its current running |
| configuration. See also "server-state-base" and "show servers state", |
| "load-server-state-from-file" and "server-state-file-name" |
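| |
| Example (illustrative; paths are placeholders). Before a reload, the output |
| of the CLI command "show servers state" is saved into the designated file, |
| then loaded back at startup: |
| |
| global |
| server-state-base /var/lib/haproxy |
| server-state-file server-state |
| |
| defaults |
| load-server-state-from-file global |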
| |
| set-dumpable |
| This option is better left disabled by default and enabled only upon a |
| developer's request. If it has been enabled, it may still be forcibly |
| disabled by prefixing it with the "no" keyword. It has no impact on |
| performance nor stability but will try hard to re-enable core dumps that were |
| possibly disabled by file size limitations (ulimit -f), core size limitations |
| (ulimit -c), or "dumpability" of a process after changing its UID/GID (such |
| as /proc/sys/fs/suid_dumpable on Linux). Core dumps might still be limited by |
| the current directory's permissions (check what directory the process is started |
| from), the chroot directory's permission (it may be needed to temporarily |
| disable the chroot directive or to move it to a dedicated writable location), |
| or any other system-specific constraint. For example, some Linux flavours are |
| notorious for replacing the default core file with a path to an executable |
| not even installed on the system (check /proc/sys/kernel/core_pattern). Often, |
| simply writing "core", "core.%p" or "/var/log/core/core.%p" addresses the |
| issue. When trying to enable this option waiting for a rare issue to |
| re-appear, it's often a good idea to first try to obtain such a dump by |
| issuing, for example, "kill -11" to the "haproxy" process and verify that it |
| leaves a core where expected when dying. |
| |
| set-var <var-name> <expr> |
| Sets the process-wide variable '<var-name>' to the result of the evaluation |
| of the sample expression <expr>. The variable '<var-name>' may only be a |
| process-wide variable (using the 'proc.' prefix). It works exactly like the |
| 'set-var' action in TCP or HTTP rules except that the expression is evaluated |
| at configuration parsing time and that the variable is instantly set. The |
| sample fetch functions and converters permitted in the expression are only |
| those using internal data, typically 'int(value)' or 'str(value)'. It is |
| possible to reference previously allocated variables as well. These variables |
| will then be readable (and modifiable) from the regular rule sets. |
| |
| Example: |
| global |
| set-var proc.current_state str(primary) |
| set-var proc.prio int(100) |
| set-var proc.threshold int(200),sub(proc.prio) |
| |
| set-var-fmt <var-name> <fmt> |
| Sets the process-wide variable '<var-name>' to the string resulting from the |
| evaluation of the log-format <fmt>. The variable '<var-name>' may only be a |
| process-wide variable (using the 'proc.' prefix). It works exactly like the |
| 'set-var-fmt' action in TCP or HTTP rules except that the expression is |
| evaluated at configuration parsing time and that the variable is instantly |
| set. The sample fetch functions and converters permitted in the expression |
| are only those using internal data, typically 'int(value)' or 'str(value)'. |
| It is possible to reference previously allocated variables as well. These |
| variables will then be readable (and modifiable) from the regular rule sets. |
| Please see section 8.2.6 for details on the custom log-format syntax. |
| |
| Example: |
| global |
| set-var-fmt proc.current_state "primary" |
| set-var-fmt proc.bootid "%pid|%t" |
| |
| setcap <name>[,<name>...] |
| Sets a list of capabilities that must be preserved when starting with uid 0 |
| and switching to a non-zero uid. By default all permissions are lost by the |
| uid switch, but some are often needed when trying to connect to a server from |
| a foreign address during transparent proxying, or when binding to a port |
| below 1024, e.g. when using "tune.quic.socket-owner connection", resulting in |
| setups running entirely under uid 0. Setting capabilities generally is a |
| safer alternative, as only the required capabilities will be preserved. The |
| feature is OS-specific and only enabled on Linux when USE_LINUX_CAP=1 is set |
| at build time. The list of supported capabilities also depends on the OS and |
| is enumerated by the error message displayed when an invalid capability name |
| or an empty one is passed. Multiple capabilities may be passed, delimited by |
| commas. Among those commonly used, "cap_net_raw" allows to transparently bind |
| to a foreign address, and "cap_net_bind_service" allows to bind to a |
| privileged port and may be used by QUIC. |
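| |
| Example (illustrative; keeps only the capabilities needed for transparent |
| proxying and for binding to a privileged port): |
| |
| global |
| setcap cap_net_raw,cap_net_bind_service |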
| |
| setenv <name> <value> |
| Sets environment variable <name> to value <value>. If the variable exists, it |
| is overwritten. The changes immediately take effect so that the next line in |
| the configuration file sees the new value. See also "presetenv", "resetenv", |
| and "unsetenv". |
| |
| ssl-default-bind-ciphers <ciphers> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the default string describing the list of cipher algorithms ("cipher suite") |
| that are negotiated during the SSL/TLS handshake up to TLSv1.2 for all |
| "bind" lines which do not explicitly define theirs. The format of the string |
| is defined in "man 1 ciphers" from OpenSSL man pages. For background |
| information and recommendations see e.g. |
| (https://wiki.mozilla.org/Security/Server_Side_TLS) and |
| (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 |
| cipher configuration, please check the "ssl-default-bind-ciphersuites" keyword. |
| Please check the "bind" keyword for more information. |
| |
| ssl-default-bind-ciphersuites <ciphersuites> |
| This setting is only available when support for OpenSSL was built in and |
| OpenSSL 1.1.1 or later was used to build HAProxy. It sets the default string |
| describing the list of cipher algorithms ("cipher suite") that are negotiated |
| during the TLSv1.3 handshake for all "bind" lines which do not explicitly define |
| theirs. The format of the string is defined in |
| "man 1 ciphers" from OpenSSL man pages under the section "ciphersuites". For |
| cipher configuration for TLSv1.2 and earlier, please check the |
| "ssl-default-bind-ciphers" keyword. This setting might accept TLSv1.2 |
| ciphersuites however this is an undocumented behavior and not recommended as |
| it could be inconsistent or buggy. |
| The default TLSv1.3 ciphersuites of OpenSSL are: |
| "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256" |
| |
| TLSv1.3 only supports 5 ciphersuites: |
| |
| - TLS_AES_128_GCM_SHA256 |
| - TLS_AES_256_GCM_SHA384 |
| - TLS_CHACHA20_POLY1305_SHA256 |
| - TLS_AES_128_CCM_SHA256 |
| - TLS_AES_128_CCM_8_SHA256 |
| |
| Please check the "bind" keyword for more information. |
| |
| Example: |
| global |
| ssl-default-bind-ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256 |
| ssl-default-bind-ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 |
| |
| ssl-default-bind-client-sigalgs <sigalgs> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the default string describing the list of signature algorithms related to |
| client authentication for all "bind" lines which do not explicitly define |
| theirs. The format of the string is a colon-delimited list of signature |
| algorithms. Each signature algorithm can use one of two forms: TLS1.3 signature |
| scheme names ("rsa_pss_rsae_sha256") or the public key algorithm + digest form |
| ("ECDSA+SHA256"). A list can contain both forms. For more information on the |
| format, see SSL_CTX_set1_client_sigalgs(3). A list of signature algorithms is |
| also available in RFC8446 section 4.2.3 and in OpenSSL in the ssl/t1_lib.c |
| file. This setting is not applicable to TLSv1.1 and earlier versions of the |
| protocol as the signature algorithms aren't separately negotiated in these |
| versions. It is not recommended to change this setting unless compatibility |
| with a middlebox is required. |
| |
| ssl-default-bind-curves <curves> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the default string describing the list of elliptic curves algorithms ("curve |
| suite") that are negotiated during the SSL/TLS handshake with ECDHE. The format |
| of the string is a colon-delimited list of curve names. |
| Please check the "bind" keyword for more information. |
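| |
| Example (illustrative; the curve list below is only one common possibility |
| and must match what the TLS library supports): |
| |
| global |
| ssl-default-bind-curves X25519:P-256 |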
| |
| ssl-default-bind-options [<option>]... |
| This setting is only available when support for OpenSSL was built in. It sets |
| default ssl-options to force on all "bind" lines. Please check the "bind" |
| keyword to see available options. |
| |
| Example: |
| global |
| ssl-default-bind-options ssl-min-ver TLSv1.0 no-tls-tickets |
| |
| ssl-default-bind-sigalgs <sigalgs> |
| This setting is only available when support for OpenSSL was built in. It |
| sets the default string describing the list of signature algorithms that |
| are negotiated during the TLSv1.2 and TLSv1.3 handshake for all "bind" lines |
| which do not explicitly define theirs. The format of the string is a |
| colon-delimited list of signature algorithms. Each signature algorithm can |
| use one of two forms: TLS1.3 signature scheme names ("rsa_pss_rsae_sha256") |
| or the public key algorithm + digest form ("ECDSA+SHA256"). A list |
| can contain both forms. For more information on the format, |
| see SSL_CTX_set1_sigalgs(3). A list of signature algorithms is also |
| available in RFC8446 section 4.2.3 and in OpenSSL in the ssl/t1_lib.c file. |
| This setting is not applicable to TLSv1.1 and earlier versions of the |
| protocol as the signature algorithms aren't separately negotiated in these |
| versions. It is not recommended to change this setting unless compatibility |
| with a middlebox is required. |
| |
| ssl-default-server-ciphers <ciphers> |
| This setting is only available when support for OpenSSL was built in. It |
| sets the default string describing the list of cipher algorithms that are |
| negotiated during the SSL/TLS handshake up to TLSv1.2 with the server, |
| for all "server" lines which do not explicitly define theirs. The format of |
| the string is defined in "man 1 ciphers" from OpenSSL man pages. For background |
| information and recommendations see e.g. |
| (https://wiki.mozilla.org/Security/Server_Side_TLS) and |
| (https://mozilla.github.io/server-side-tls/ssl-config-generator/). |
| For TLSv1.3 cipher configuration, please check the |
| "ssl-default-server-ciphersuites" keyword. Please check the "server" keyword |
| for more information. |
| |
| ssl-default-server-ciphersuites <ciphersuites> |
| This setting is only available when support for OpenSSL was built in and |
| OpenSSL 1.1.1 or later was used to build HAProxy. It sets the default |
| string describing the list of cipher algorithms that are negotiated during |
| the TLSv1.3 handshake with the server, for all "server" lines which do not |
| explicitly define theirs. The format of the string is defined in |
| "man 1 ciphers" from OpenSSL man pages under the section "ciphersuites". For |
| cipher configuration for TLSv1.2 and earlier, please check the |
| "ssl-default-server-ciphers" keyword. Please check the "server" keyword for |
| more information. |
| |
| ssl-default-server-options [<option>]... |
| This setting is only available when support for OpenSSL was built in. It sets |
| default ssl-options to force on all "server" lines. Please check the "server" |
| keyword to see available options. |
| |
| ssl-dh-param-file <file> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the default DH parameters that are used during the SSL/TLS handshake when |
| ephemeral Diffie-Hellman (DHE) key exchange is used, for all "bind" lines |
| which do not explicitly define theirs. It will be overridden by custom DH |
| parameters found in a bind certificate file if any. If custom DH parameters |
| are not specified either by using ssl-dh-param-file or by setting them |
| directly in the certificate file, DHE ciphers will not be used, unless |
| tune.ssl.default-dh-param is set. In this latter case, pre-defined DH |
| parameters of the specified size will be used. Custom parameters are known to |
| be more secure and therefore their use is recommended. |
| Custom DH parameters may be generated by using the OpenSSL command |
| "openssl dhparam <size>", where size should be at least 2048, as 1024-bit DH |
| parameters should not be considered secure anymore. |
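| |
| Example (illustrative; the file path is a placeholder). The parameters may |
| first be generated with a command such as "openssl dhparam -out |
| /etc/haproxy/dhparams.pem 2048", then referenced as follows: |
| |
| global |
| ssl-dh-param-file /etc/haproxy/dhparams.pem |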
| |
| ssl-propquery <query> |
| This setting is only available when support for OpenSSL was built in and when |
| OpenSSL's version is at least 3.0. It allows to define a default property |
| string used when fetching algorithms in providers. It behaves the same way as |
| the openssl propquery option and it follows the same syntax (described in |
| https://www.openssl.org/docs/man3.0/man7/property.html). For instance, if you |
| have two providers loaded, the foo one and the default one, the propquery |
| "?provider=foo" allows to pick the algorithm implementations provided by the |
| foo provider by default, and to fallback on the default provider's one if it |
| was not found. |
| |
| ssl-provider <name> |
| This setting is only available when support for OpenSSL was built in and when |
| OpenSSL's version is at least 3.0. It allows to load a provider during init. |
| If loading is successful, any capabilities provided by the loaded provider |
| might be used by HAProxy. Multiple 'ssl-provider' options can be specified in |
| a configuration file. The providers will be loaded in their order of |
| appearance. |
| |
| Please note that loading a provider explicitly prevents OpenSSL from loading |
| the 'default' provider automatically. OpenSSL also allows to define the |
| providers that should be loaded directly in its configuration file |
| (openssl.cnf for instance) so it is not necessary to use this 'ssl-provider' |
| option to load providers. The "show ssl providers" CLI command can be used to |
| show all the providers that were successfully loaded. |
| |
| The default search path of OpenSSL provider can be found in the output of the |
| "openssl version -a" command. If the provider is in another directory, you |
| can set the OPENSSL_MODULES environment variable, which takes the directory |
| where your provider can be found. |
| |
| See also "ssl-propquery" and "ssl-provider-path". |
| |
| ssl-provider-path <path> |
| This setting is only available when support for OpenSSL was built in and when |
| OpenSSL's version is at least 3.0. It allows to specify the search path that |
| is to be used by OpenSSL for looking for providers. It behaves the same way |
| as the OPENSSL_MODULES environment variable. It will be used for any |
| following 'ssl-provider' option or until a new 'ssl-provider-path' is |
| defined. |
| See also "ssl-provider". |
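| |
| Example (illustrative; the module directory is a placeholder, and "legacy" |
| and "default" are the standard OpenSSL 3.x providers; the default provider |
| is listed explicitly because loading a provider disables its automatic |
| loading): |
| |
| global |
| ssl-provider-path /usr/lib/ossl-modules |
| ssl-provider legacy |
| ssl-provider default |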
| |
| ssl-load-extra-del-ext |
| This setting allows to configure the way HAProxy does the lookup for the |
| extra SSL files. By default HAProxy adds a new extension to the filename |
| (ex: with "foobar.crt" it loads "foobar.crt.key"). With this option enabled, |
| HAProxy removes the extension before adding the new one (ex: with |
| "foobar.crt" it loads "foobar.key"). |
| |
| Your crt file must have a ".crt" extension for this option to work. |
| |
| This option is not compatible with bundle extensions (.ecdsa, .rsa, .dsa) |
| and won't try to remove them. |
| |
| This option is disabled by default. See also "ssl-load-extra-files". |
| |
| ssl-load-extra-files <none|all|bundle|sctl|ocsp|issuer|key>* |
| This setting alters the way HAProxy will look for unspecified files during |
| the loading of the SSL certificates. This option applies to certificates |
| associated to "bind" lines as well as "server" lines but some of the extra |
| files will not have any functional impact for "server" line certificates. |
| |
| By default, HAProxy discovers automatically a lot of files not specified in |
| the configuration, and you may want to disable this behavior if you want to |
| optimize the startup time. |
| |
| "none": Only load the files specified in the configuration. Don't try to load |
| a certificate bundle if the file does not exist. In the case of a directory, |
| it won't try to bundle the certificates if they have the same basename. |
| |
| "all": This is the default behavior, it will try to load everything, |
| bundles, sctl, ocsp, issuer, key. |
| |
| "bundle": When a file specified in the configuration does not exist, HAProxy |
| will try to load a "cert bundle". Certificate bundles are only managed on the |
| frontend side and will not work for backend certificates. |
| |
| Starting from HAProxy 2.3, the bundles are not loaded in the same OpenSSL |
| certificate store; instead each certificate is loaded in a separate store, |
| which is equivalent to declaring multiple "crt". OpenSSL 1.1.1 is required |
| to achieve this. This means that bundles are now used only for backward |
| compatibility and are not mandatory anymore to do a hybrid RSA/ECC bind |
| configuration. |
| |
| To associate these PEM files into a "cert bundle" that is recognized by |
| HAProxy, they must be named in the following way: All PEM files that are to |
| be bundled must have the same base name, with a suffix indicating the key |
| type. Currently, three suffixes are supported: rsa, dsa and ecdsa. For |
| example, if www.example.com has two PEM files, an RSA file and an ECDSA |
| file, they must be named: "example.pem.rsa" and "example.pem.ecdsa". The |
| first part of the filename is arbitrary; only the suffix matters. To load |
| this bundle into HAProxy, specify the base name only: |
| |
| Example : bind :8443 ssl crt example.pem |
| |
| Note that the suffix is not given to HAProxy; this tells HAProxy to look for |
| a cert bundle. |
| |
| HAProxy will load all PEM files in the bundle as if they were configured |
| separately in several "crt". |
| |
| The bundle loading no longer has an impact on the directory loading since |
| files are loaded separately. |
| |
| On the CLI, bundles are seen as separate files, and the bundle extension is |
| required to commit them. |
| |
| OCSP files (.ocsp), issuer files (.issuer), Certificate Transparency (.sctl) |
| as well as private keys (.key) are supported with multi-cert bundling. |
| |
| "sctl": Try to load "<basename>.sctl" for each crt keyword. If provided for |
| a backend certificate, it will be loaded but will not have any functional |
| impact. |
| |
| "ocsp": Try to load "<basename>.ocsp" for each crt keyword. If provided for |
| a backend certificate, it will be loaded but will not have any functional |
| impact. |
| |
| "issuer": Try to load "<basename>.issuer" if the issuer of the OCSP file is |
| not provided in the PEM file. If provided for a backend certificate, it will |
| be loaded but will not have any functional impact. |
| |
| "key": If the private key was not provided by the PEM file, try to load a |
| file "<basename>.key" containing a private key. |
| |
| The default behavior is "all". |
| |
| Example: |
| ssl-load-extra-files bundle sctl |
| ssl-load-extra-files sctl ocsp issuer |
| ssl-load-extra-files none |
| |
| See also: "crt", section 5.1 about bind options and section 5.2 about server |
| options. |
| |
| ssl-server-verify [none|required] |
| Sets the default behavior for SSL verification on the server side. If set to |
| 'none', server certificates are not verified. The default is 'required', |
| except if forced using the command line option '-dV'. |
| |
| ssl-skip-self-issued-ca |
| A self-issued CA, aka an x509 root CA, is the anchor for chain validation: |
| since it is useless for a server to send it, the client must already have |
| it. A standard configuration should not include such a CA in the PEM file. |
| This option allows you to keep such a CA in the PEM file without sending it |
| to the client. The use case is to provide the issuer for OCSP without the |
| need for a '.issuer' file and to be able to share it with |
| 'issuers-chain-path'. This concerns all certificates without intermediate |
| certificates. It is useless for BoringSSL, where '.issuer' is ignored |
| because the OCSP bits do not need it. Requires at least OpenSSL 1.0.2. |
| |
| stats maxconn <connections> |
| By default, the stats socket is limited to 10 concurrent connections. It is |
| possible to change this value with "stats maxconn". |
| |
| stats socket [<address:port>|<path>] [param*] |
| Binds a UNIX socket to <path> or a TCPv4/v6 address to <address:port>. |
| Connections to this socket will return various statistics outputs and even |
| allow some commands to be issued to change some runtime settings. Please |
| consult section 9.3 "Unix Socket commands" of the Management Guide for more |
| details. |
| |
| All parameters supported by "bind" lines are supported, for instance to |
| restrict access to some users or their access rights. Please consult |
| section 5.1 for more information. |
| |
| stats timeout <timeout, in milliseconds> |
| The default timeout on the stats socket is set to 10 seconds. It is possible |
| to change this value with "stats timeout". The value must be passed in |
| milliseconds, or be suffixed by a time unit among { us, ms, s, m, h, d }. |
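| |
| As a minimal sketch (the socket path and limits below are arbitrary), the |
| three "stats" settings are commonly combined in the global section: |
| |
| Example : |
| global |
|     stats socket /var/run/haproxy.sock mode 660 level admin |
|     stats maxconn 20 |
|     stats timeout 30s |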
| |
| strict-limits |
| Makes the process fail at startup when a setrlimit call fails. HAProxy tries |
| to set the best setrlimit according to what has been calculated. If it |
| fails, it will emit a warning. This option is here to guarantee an explicit |
| failure of HAProxy when those limits fail. It is enabled by default. It may |
| still be forcibly disabled by prefixing it with the "no" keyword. |
| |
| thread-group <group> [<thread-range>...] |
| This setting is only available when support for threads was built in. It |
| enumerates the list of threads that will compose thread group <group>. |
| Thread numbers and group numbers start at 1. Thread ranges are defined either |
| using a single thread number at once, or by specifying the lower and upper |
| bounds delimited by a dash '-' (e.g. "1-16"). Unassigned threads will be |
| automatically assigned to unassigned thread groups, and thread groups |
| defined with this directive will never receive more threads than those |
| defined. Defining the same group multiple times overrides previous |
| definitions with the new one. See also "nbthread" and "thread-groups". |
| |
| thread-groups <number> |
| This setting is only available when support for threads was built in. It |
| makes HAProxy split its threads into <number> independent groups. At the |
| moment, the default value is 1. Thread groups make it possible to reduce |
| sharing between threads to limit contention, at the expense of some extra |
| configuration efforts. It is also the only way to use more than 64 threads |
| since up to 64 threads per group may be configured. The maximum number of |
| groups is configured at compile time and defaults to 16. See also "nbthread". |
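| |
| As an illustrative sketch (the thread counts are arbitrary), a 16-thread |
| process split into two groups of 8 threads each could be declared as: |
| |
| Example : |
| global |
|     nbthread 16 |
|     thread-groups 2 |
|     thread-group 1 1-8 |
|     thread-group 2 9-16 |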
| |
| trace <args...> |
| This command configures one "trace" subsystem statement. Each of them can be |
| found in the management manual, and follows the exact same syntax. Only one |
| statement per line is permitted (i.e. if some long trace configurations using |
| semi-colons are to be imported, they must be placed one per line). Any output |
| that the "trace" command would produce will be emitted during the parsing |
| step of the section. Most of the time these will be errors and warnings, but |
| certain incomplete commands might list permissible choices. This command is |
| not meant for regular use, it will generally only be suggested by developers |
| during complex debugging sessions. For this reason it is internally marked as |
| experimental, meaning that "expose-experimental-directives" must appear on a |
| line before any "trace" statement. Note that these directives are parsed on |
| the fly, so referencing a ring buffer that is only declared further will not |
| work. For such use cases it is suggested to place another "global" section |
| with only the "trace" statements after the declaration of that ring. It is |
| important to keep in mind that depending on the trace level and details, |
| enabling traces can severely degrade the global performance. Please refer to |
| the management manual for the statements syntax. |
| |
| uid <number> |
| Changes the process's user ID to <number>. It is recommended that the user ID |
| is dedicated to HAProxy or to a small set of similar daemons. HAProxy must |
| be started with superuser privileges in order to be able to switch to another |
| one. See also "gid" and "user". |
| |
| ulimit-n <number> |
| Sets the maximum number of per-process file-descriptors to <number>. By |
| default, it is automatically computed, so it is recommended not to use this |
| option. If the intent is only to limit the number of file descriptors, it is |
| better to use "fd-hard-limit" instead. |
| |
| Note that the dynamic servers are not taken into account in this automatic |
| resource calculation. If using a large number of them, it may be necessary |
| to specify this value manually. |
| |
| See also: fd-hard-limit, maxconn |
| |
| unix-bind [ prefix <prefix> ] [ mode <mode> ] [ user <user> ] [ uid <uid> ] |
| [ group <group> ] [ gid <gid> ] |
| |
| Fixes common settings to UNIX listening sockets declared in "bind" statements. |
| This is mainly used to simplify declaration of those UNIX sockets and reduce |
| the risk of errors, since those settings are most commonly required but are |
| also process-specific. The <prefix> setting can be used to force all socket |
| paths to be relative to that directory. This might be needed to access |
| another component's chroot. Note that those paths are resolved before |
| HAProxy chroots itself, so they are absolute. The <mode>, <user>, <uid>, |
| <group> and <gid> all have the same meaning as their homonyms used by the |
| "bind" statement. If a setting is specified both here and on a "bind" line, |
| the "bind" statement has priority, meaning that the "unix-bind" settings may |
| be seen as process-wide default settings. |
| |
| unsetenv [<name> ...] |
| Removes environment variables specified in arguments. This can be useful to |
| hide some sensitive information that is occasionally inherited from the |
| user's environment during some operations. Variables which did not exist are |
| silently ignored so that after the operation, it is certain that none of |
| these variables remain. The changes immediately take effect so that the next |
| line in the configuration file will not see these variables. See also |
| "setenv", "presetenv", and "resetenv". |
| |
| user <user name> |
| Similar to "uid" but uses the UID of user name <user name> from /etc/passwd. |
| See also "uid" and "group". |
| |
| node <name> |
| Only letters, digits, hyphen and underscore are allowed, like in DNS names. |
| |
| This statement is useful in HA configurations where two or more processes or |
| servers share the same IP address. By setting a different node-name on all |
| nodes, it becomes easy to immediately spot what server is handling the |
| traffic. |
| |
| wurfl-cache-size <size> |
| Sets the WURFL Useragent cache size. For faster lookups, already processed |
| user agents are kept in an LRU cache : |
| - "0" : no cache is used. |
| - <size> : size of lru cache in elements. |
| |
| Please note that this option is only available when HAProxy has been compiled |
| with USE_WURFL=1. |
| |
| wurfl-data-file <file path> |
| The path of the WURFL data file to provide device detection services. The |
| file should be accessible by HAProxy with relevant permissions. |
| |
| Please note that this option is only available when HAProxy has been compiled |
| with USE_WURFL=1. |
| |
| wurfl-information-list [<capability>]* |
| A space-delimited list of WURFL capabilities, virtual capabilities, property |
| names we plan to use in injected headers. A full list of capability and |
| virtual capability names is available on the Scientiamobile website : |
| |
| https://www.scientiamobile.com/wurflCapability |
| |
| Valid WURFL properties are: |
| - wurfl_id Contains the device ID of the matched device. |
| |
| - wurfl_root_id Contains the device root ID of the matched |
| device. |
| |
| - wurfl_isdevroot Tells if the matched device is a root device. |
| Possible values are "TRUE" or "FALSE". |
| |
| - wurfl_useragent The original useragent coming with this |
| particular web request. |
| |
| - wurfl_api_version Contains a string representing the currently |
| used Libwurfl API version. |
| |
| - wurfl_info A string containing information on the parsed |
| wurfl.xml and its full path. |
| |
| - wurfl_last_load_time Contains the UNIX timestamp of the last time |
| WURFL has been loaded successfully. |
| |
| - wurfl_normalized_useragent The normalized useragent. |
| |
| Please note that this option is only available when HAProxy has been compiled |
| with USE_WURFL=1. |
| |
| wurfl-information-list-separator <char> |
| A char that will be used to separate values in a response header containing |
| WURFL results. If not set, a comma (',') will be used by default. |
| |
| Please note that this option is only available when HAProxy has been compiled |
| with USE_WURFL=1. |
| |
| wurfl-patch-file [<file path>] |
| A list of WURFL patch file paths. Note that patches are loaded during startup |
| thus before the chroot. |
| |
| Please note that this option is only available when HAProxy has been compiled |
| with USE_WURFL=1. |
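| |
| As an illustrative sketch (the data file path and values are hypothetical |
| and require HAProxy to be built with USE_WURFL=1), the WURFL settings are |
| typically grouped in the global section: |
| |
| Example : |
| global |
|     wurfl-data-file /usr/share/wurfl/wurfl.xml |
|     wurfl-cache-size 100000 |
|     wurfl-information-list wurfl_id wurfl_useragent |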
| |
| 3.2. Performance tuning |
| ----------------------- |
| |
| busy-polling |
| In some situations, especially when dealing with low latency on processors |
| supporting a variable frequency or when running inside virtual machines, each |
| time the process waits for an I/O using the poller, the processor goes back |
| to sleep or is offered to another VM for a long time, and it causes |
| excessively high latencies. This option provides a solution preventing the |
| processor from sleeping by always using a null timeout on the pollers. This |
| results in a significant latency reduction (30 to 100 microseconds observed) |
| at the expense of a risk of overheating the processor. It may even be used |
| with threads, in which case improperly bound threads may heavily conflict, |
| resulting in worse performance and high values for the CPU stolen fields |
| in "show info" output, indicating which threads are misconfigured. It is |
| important not to let the process run on the same processor as the network |
| interrupts when this option is used. It is also better to avoid using it on |
| multiple CPU threads sharing the same core. This option is disabled by |
| default. If it has been enabled, it may still be forcibly disabled by |
| prefixing it with the "no" keyword. It is ignored by the "select" and |
| "poll" pollers. |
| |
| This option is automatically disabled on old processes in the context of |
| seamless reload; it avoids excessive CPU conflicts when multiple processes |
| stay around for some time waiting for the end of their current connections. |
| |
| max-spread-checks <delay in milliseconds> |
| By default, HAProxy tries to spread the start of health checks across the |
| smallest health check interval of all the servers in a farm. The principle is |
| to avoid hammering services running on the same server. But when using large |
| check intervals (10 seconds or more), the last servers in the farm take some |
| time before starting to be tested, which can be a problem. This parameter is |
| used to enforce an upper bound on delay between the first and the last check, |
| even if the servers' check intervals are larger. When servers run with |
| shorter intervals, their intervals will be respected though. |
| |
| maxcompcpuusage <number> |
| Sets the maximum CPU usage HAProxy can reach before stopping the compression |
| for new requests or decreasing the compression level of current requests. |
| It works like 'maxcomprate' but measures CPU usage instead of incoming data |
| bandwidth. The value is expressed in percent of the CPU used by HAProxy. A |
| value of 100 disables the limit. The default value is 100. Setting a lower |
| value will prevent the compression work from slowing the whole process down |
| and from introducing high latencies. |
| |
| maxcomprate <number> |
| Sets the maximum per-process input compression rate to <number> kilobytes |
| per second. For each session, if the maximum is reached, the compression |
| level will be decreased during the session. If the maximum is reached at the |
| beginning of a session, the session will not compress at all. If the maximum |
| is not reached, the compression level will be increased up to |
| tune.comp.maxlevel. A value of zero means there is no limit, this is the |
| default value. |
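| |
| As a minimal sketch (the figures are arbitrary), the two compression limits |
| can be combined to cap both the CPU and the input bandwidth that HAProxy |
| spends on compression: |
| |
| Example : |
| global |
|     maxcompcpuusage 50 |
|     maxcomprate 10000 |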
| |
| maxconn <number> |
| Sets the maximum per-process number of concurrent connections to <number>. It |
| is equivalent to the command-line argument "-n". The value provided on the |
| command line via "-n" takes precedence over the maxconn value set in the |
| global section. The HAProxy process may also be compiled with the |
| SYSTEM_MAXCONN compile-time variable, which then serves as the system |
| maxconn maximum. Again, the command-line "-n" argument makes it possible to |
| bypass the SYSTEM_MAXCONN limit, if set, at runtime. Proxies will stop |
| accepting connections when maxconn is reached. The process soft file |
| descriptor limit (which can be obtained with the "ulimit -n" command) is |
| automatically adjusted according to the provided maxconn. See also |
| "ulimit-n". Note: the "select" poller cannot reliably use more than 1024 |
| file descriptors on some platforms. If your platform only supports select |
| and reports "select FAILED" on startup, you need to reduce maxconn until it |
| works (slightly below 500 in general). If the maxconn value is not set, it |
| will be automatically calculated based on the current file descriptor |
| limits reported by the "ulimit -nH" command (the maximum between the hard |
| and soft values is taken), then the automatic value may be reduced by |
| "fd-hard-limit" and by the memory limit, if the latter was enforced via the |
| "-m" command line option. The automatic value also depends on the buffer |
| size, the memory allocated to compression, the SSL cache size, and the use |
| or not of SSL and the associated maxsslconn (which can also be automatic). |
| |
| See also: fd-hard-limit, ulimit-n |
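| |
| As a minimal sketch (the value is arbitrary), a fixed global connection |
| limit is simply declared in the global section: |
| |
| Example : |
| global |
|     maxconn 20000 |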
| |
| maxconnrate <number> |
| Sets the maximum per-process number of connections per second to <number>. |
| Proxies will stop accepting connections when this limit is reached. It can be |
| used to limit the global capacity regardless of each frontend capacity. It is |
| important to note that this can only be used as a service protection measure, |
| as there will not necessarily be a fair share between frontends when the |
| limit is reached, so it's a good idea to also limit each frontend to some |
| value close to its expected share. Also, lowering tune.maxaccept can improve |
| fairness. |
| |
| maxpipes <number> |
| Sets the maximum per-process number of pipes to <number>. Currently, pipes |
| are only used by kernel-based tcp splicing. Since a pipe contains two file |
| descriptors, the "ulimit-n" value will be increased accordingly. The default |
| value is maxconn/4, which seems to be more than enough for most heavy usages. |
| The splice code dynamically allocates and releases pipes, and can fall back |
| to standard copy, so setting this value too low may only impact performance. |
| |
| maxsessrate <number> |
| Sets the maximum per-process number of sessions per second to <number>. |
| Proxies will stop accepting connections when this limit is reached. It can be |
| used to limit the global capacity regardless of each frontend capacity. It is |
| important to note that this can only be used as a service protection measure, |
| as there will not necessarily be a fair share between frontends when the |
| limit is reached, so it's a good idea to also limit each frontend to some |
| value close to its expected share. Also, lowering tune.maxaccept can improve |
| fairness. |
| |
| maxsslconn <number> |
| Sets the maximum per-process number of concurrent SSL connections to |
| <number>. By default there is no SSL-specific limit, which means that the |
| global maxconn setting will apply to all connections. Setting this limit |
| avoids having openssl use too much memory and crash when malloc returns NULL |
| (since it unfortunately does not reliably check for such conditions). Note |
| that the limit applies both to incoming and outgoing connections, so one |
| connection which is deciphered then ciphered accounts for 2 SSL connections. |
| If this value is not set, but a memory limit is enforced, this value will be |
| automatically computed based on the memory limit, maxconn, the buffer size, |
| memory allocated to compression, SSL cache size, and use of SSL in either |
| frontends, backends or both. If neither maxconn nor maxsslconn are specified |
| when there is a memory limit, HAProxy will automatically adjust these values |
| so that 100% of the connections can be made over SSL with no risk, and will |
| consider the sides where it is enabled (frontend, backend, both). |
| |
| maxsslrate <number> |
| Sets the maximum per-process number of SSL sessions per second to <number>. |
| SSL listeners will stop accepting connections when this limit is reached. It |
| can be used to limit the global SSL CPU usage regardless of each frontend |
| capacity. It is important to note that this can only be used as a service |
| protection measure, as there will not necessarily be a fair share between |
| frontends when the limit is reached, so it's a good idea to also limit each |
| frontend to some value close to its expected share. It is also important to |
| note that the sessions are accounted before they enter the SSL stack and not |
| after, which also protects the stack against bad handshakes. Also, lowering |
| tune.maxaccept can improve fairness. |
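| |
| As an illustrative sketch (all figures are arbitrary and depend on the |
| expected traffic), the global rate limits can be combined to protect the |
| process as a whole: |
| |
| Example : |
| global |
|     maxconnrate 5000 |
|     maxsessrate 5000 |
|     maxsslrate 1000 |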
| |
| maxzlibmem <number> |
| Sets the maximum amount of RAM in megabytes per process usable by the zlib. |
| When the maximum amount is reached, future sessions will not compress as long |
| as RAM is unavailable. When set to 0, there is no limit. |
| The default value is 0. The value is available in bytes on the UNIX socket |
| with "show info" on the line "MaxZlibMemUsage", the memory used by zlib is |
| "ZlibMemUsage" in bytes. |
| |
| no-memory-trimming |
| Disables memory trimming ("malloc_trim") at a few moments where attempts are |
| made to reclaim lots of memory (on memory shortage or on reload). Trimming |
| memory forces the system's allocator to scan all unused areas and to release |
| them. This is generally seen as a nice action to leave more available memory |
| to a new process while the old one is unlikely to make significant use of it. |
| But some systems dealing with tens to hundreds of thousands of concurrent |
| connections may experience a lot of memory fragmentation, which may render |
| this release operation extremely long. During this time, no more traffic |
| passes through the process, new connections are not accepted anymore, some |
| health checks may even fail, and the watchdog may even trigger and kill the |
| unresponsive process, leaving a huge core dump. If this ever happens, then it |
| is suggested to use this option to disable trimming and stop trying to be |
| nice with the new process. Note that advanced memory allocators usually do |
| not suffer from such a problem. |
| |
| noepoll |
| Disables the use of the "epoll" event polling system on Linux. It is |
| equivalent to the command-line argument "-de". The next polling system |
| used will generally be "poll". See also "nopoll". |
| |
| noevports |
| Disables the use of the event ports event polling system on SunOS systems |
| derived from Solaris 10 and later. It is equivalent to the command-line |
| argument "-dv". The next polling system used will generally be "poll". See |
| also "nopoll". |
| |
| nogetaddrinfo |
| Disables the use of getaddrinfo(3) for name resolving. It is equivalent to |
| the command line argument "-dG". Deprecated gethostbyname(3) will be used. |
| |
| nokqueue |
| Disables the use of the "kqueue" event polling system on BSD. It is |
| equivalent to the command-line argument "-dk". The next polling system |
| used will generally be "poll". See also "nopoll". |
| |
| nopoll |
| Disables the use of the "poll" event polling system. It is equivalent to the |
| command-line argument "-dp". The next polling system used will be "select". |
| It should never be needed to disable "poll" since it's available on all |
| platforms supported by HAProxy. See also "nokqueue", "noepoll" and |
| "noevports". |
| |
| noreuseport |
| Disables the use of SO_REUSEPORT - see socket(7). It is equivalent to the |
| command line argument "-dR". |
| |
| nosplice |
| Disables the use of kernel tcp splicing between sockets on Linux. It is |
| equivalent to the command line argument "-dS". Data will then be copied |
| using conventional and more portable recv/send calls. Kernel tcp splicing is |
| limited to some very recent instances of kernel 2.6. Most versions between |
| 2.6.25 and 2.6.28 are buggy and will forward corrupted data, so they must not |
| be used. This option makes it easier to globally disable kernel splicing in |
| case of doubt. See also "option splice-auto", "option splice-request" and |
| "option splice-response". |
| |
| profiling.memory { on | off } |
| Enables ('on') or disables ('off') per-function memory profiling. This will |
| keep usage statistics of malloc/calloc/realloc/free calls anywhere in the |
| process (including libraries) which will be reported on the CLI using the |
| "show profiling" command. This is essentially meant to be used when an |
| abnormal memory usage is observed that cannot be explained by the pools and |
| other info are required. The performance hit will typically be around 1%, |
| maybe a bit more on highly threaded machines, so it is normally suitable for |
| use in production. The same may be achieved at run time on the CLI using the |
| "set profiling memory" command, please consult the management manual. |
| |
| profiling.tasks { auto | on | off } |
| Enables ('on') or disables ('off') per-task CPU profiling. When set to 'auto' |
| the profiling automatically turns on for a thread when it starts to suffer from |
| an average latency of 1000 microseconds or higher as reported in the |
| "avg_loop_us" activity field, and automatically turns off when the latency |
| returns below 990 microseconds (this value is an average over the last 1024 |
| loops so it does not vary quickly and tends to significantly smooth short |
| spikes). It may also spontaneously trigger from time to time on overloaded |
| systems, containers, or virtual machines, or when the system swaps (which |
| must absolutely never happen on a load balancer). |
| |
| CPU profiling per task can be very convenient to report where the time is |
| spent and which requests have what effect on which other request. Enabling |
| it will typically affect the overall performance by less than 1%, thus it |
| is recommended to leave it to the default 'auto' value so that it only |
| operates when a problem is identified. This feature requires a system |
| supporting the clock_gettime(2) syscall with clock identifiers |
| CLOCK_MONOTONIC and CLOCK_THREAD_CPUTIME_ID, otherwise the reported time will |
| be zero. This option may be changed at run time using "set profiling" on the |
| CLI. |
| |
| spread-checks <0..50, in percent> |
| Sometimes it is desirable to avoid sending agent and health checks to |
| servers at exact intervals, for instance when many logical servers are |
| located on the same physical server. With the help of this parameter, it |
| becomes possible to add some randomness in the check interval between 0 |
| and +/- 50%. A value between 2 and 5 seems to show good results. The |
| default value remains at 0. |
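| |
| As a minimal sketch (the figures are arbitrary), check spreading and its |
| upper bound can be combined as follows: |
| |
| Example : |
| global |
|     spread-checks 5 |
|     max-spread-checks 15000 |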
| |
| ssl-engine <name> [algo <comma-separated list of algorithms>] |
| Sets the OpenSSL engine to <name>. List of valid values for <name> may be |
| obtained using the command "openssl engine". This statement may be used |
| multiple times, it will simply enable multiple crypto engines. Referencing an |
| unsupported engine will prevent HAProxy from starting. Note that many engines |
| will lead to lower HTTPS performance than pure software with recent |
| processors. The optional command "algo" sets the default algorithms an ENGINE |
| will supply using the OPENSSL function ENGINE_set_default_string(). A value |
| of "ALL" uses the engine for all cryptographic operations. If no list of |
| algo is specified then the value of "ALL" is used. A comma-separated list |
| of different algorithms may be specified, including: RSA, DSA, DH, EC, RAND, |
| CIPHERS, DIGESTS, PKEY, PKEY_CRYPTO, PKEY_ASN1. This is the same format that |
| the openssl configuration file uses: |
| https://www.openssl.org/docs/man1.0.2/apps/config.html |
| |
| HAProxy version 2.6 disabled the support for engines in the default build. |
| This option is only available when HAProxy has been built with support for |
| it. If the ssl-engine is required, HAProxy can be rebuilt with the |
| USE_ENGINE=1 flag. |
| |
| ssl-mode-async |
| Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS |
| I/O operations if asynchronous capable SSL engines are used. The current |
| implementation supports a maximum of 32 engines. The Openssl ASYNC API |
| doesn't support moving read/write buffers and is not compliant with |
| HAProxy's buffer management. So the asynchronous mode is disabled on |
| read/write operations (it is only enabled during initial and renegotiation |
| handshakes). |
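| |
| As an illustrative sketch (the engine name "qat" is an example and requires |
| both an engine-enabled build and the corresponding engine to be installed), |
| an asynchronous-capable engine could be enabled like this: |
| |
| Example : |
| global |
|     ssl-engine qat algo RSA,EC |
|     ssl-mode-async |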
| |
| tune.buffers.limit <number> |
| Sets a hard limit on the number of buffers which may be allocated per process. |
| The default value is zero which means unlimited. The minimum non-zero value |
| will always be greater than "tune.buffers.reserve" and should ideally always |
| be about twice as large. Forcing this value can be particularly useful to |
| limit the amount of memory a process may take, while retaining a sane |
| behavior. When this limit is reached, sessions which need a buffer wait for |
| another one to be released by another session. Since buffers are dynamically |
| allocated and released, the waiting time is very short and not perceptible |
| provided that limits remain reasonable. In fact sometimes reducing the limit |
| may even increase performance by increasing the CPU cache's efficiency. Tests |
| have shown good results on average HTTP traffic with a limit to 1/10 of the |
| expected global maxconn setting, which also significantly reduces memory |
| usage. The memory savings come from the fact that a number of connections |
| will not allocate 2*tune.bufsize. It is best not to touch this value unless |
| advised to do so by an HAProxy core developer. |
| |
| tune.buffers.reserve <number> |
| Sets the number of buffers which are pre-allocated and reserved for use only |
| during memory shortage conditions resulting in failed memory allocations. The |
| minimum value is 2 and is also the default. There is no reason a user would |
| want to change this value, it's mostly aimed at HAProxy core developers. |
| |
| tune.bufsize <number> |
| Sets the buffer size to this size (in bytes). Lower values allow more |
| sessions to coexist in the same amount of RAM, and higher values allow some |
| applications with very large cookies to work. The default value is 16384 and |
| can be changed at build time. It is strongly recommended not to change this |
| from the default value, as very low values will break some services such as |
| statistics, and values larger than default size will increase memory usage, |
| possibly causing the system to run out of memory. At least the global maxconn |
| parameter should be decreased by the same factor as this one is increased. In |
| addition, use of HTTP/2 mandates that this value must be 16384 or more. If an |
| HTTP request is larger than (tune.bufsize - tune.maxrewrite), HAProxy will |
| return HTTP 400 (Bad Request) error. Similarly if an HTTP response is larger |
| than this size, HAProxy will return HTTP 502 (Bad Gateway). Note that the |
| value set using this parameter will automatically be rounded up to the next |
| multiple of 8 on 32-bit machines and 16 on 64-bit machines. |
| |
| tune.comp.maxlevel <number> |
| Sets the maximum compression level. The compression level affects CPU usage |
| during compression. Each session using compression initializes the |
| compression algorithm with this value. The default value is 1. |
| |
| tune.disable-fast-forward [ EXPERIMENTAL ] |
| Disables data fast-forwarding. Fast-forwarding is a mechanism to optimize |
| data forwarding by passing data directly from one side to the other without |
| waking the stream up. Thanks to this directive, it is possible to disable |
| this optimization. Note that it also disables any kernel tcp splicing. This |
| command is not meant for regular use, it will generally only be suggested by |
| developers during complex debugging sessions. For this reason it is |
| internally marked as experimental, meaning that |
| "expose-experimental-directives" must appear on a line before this |
| directive. |
| |
| tune.fail-alloc |
| If compiled with DEBUG_FAIL_ALLOC or started with "-dMfail", gives the |
| percentage of chances an allocation attempt fails. Must be between 0 (no |
| failure) and 100 (no success). This is useful to debug and make sure memory |
| failures are handled gracefully. When not set, the ratio is 0. However the |
| command-line "-dMfail" option automatically sets it to 1% failure rate so that |
| it is not necessary to change the configuration for testing. |
| |
| tune.fd.edge-triggered { on | off } [ EXPERIMENTAL ] |
| Enables ('on') or disables ('off') the edge-triggered polling mode for FDs |
| that support it. This is currently only supported with epoll. It may noticeably |
| reduce the number of epoll_ctl() calls and slightly improve performance in |
| certain scenarios. This is still experimental, it may result in frozen |
| connections if bugs are still present, and is disabled by default. |
| |
| tune.h2.be.glitches-threshold <number> |
| Sets the threshold for the number of glitches on a backend connection above |
| which that connection will automatically be killed. This allows misbehaving |
| connections to be killed automatically without having to write explicit |
| rules for them. The default value is zero, indicating that no threshold is |
| set so that no event will cause a connection to be closed. Beware that some |
| H2 servers may occasionally cause a few glitches over long-lasting |
| connections, so any non-zero value here should probably be in the hundreds |
| or thousands to be effective without affecting slightly bogus servers. |
| |
| See also: tune.h2.fe.glitches-threshold, bc_glitches |
| |
| tune.h2.be.initial-window-size <number> |
| Sets the HTTP/2 initial window size for outgoing connections, which is the |
| number of bytes the server can respond before waiting for an acknowledgment |
| from HAProxy. This setting only affects payload contents, not headers. When |
| not set, the common default value set by tune.h2.initial-window-size applies. |
| It can make sense to slightly increase this value to allow faster downloads |
| or to reduce CPU usage on the servers, at the expense of creating unfairness |
| between clients. It doesn't affect resource usage. |
| |
| See also: tune.h2.initial-window-size. |
| |
| tune.h2.be.max-concurrent-streams <number> |
| Sets the HTTP/2 maximum number of concurrent streams per outgoing connection |
| (i.e. the number of outstanding requests on a single connection to a server). |
| When not set, the default set by tune.h2.max-concurrent-streams applies. A |
| smaller value than the default 100 may improve a site's responsiveness at the |
| expense of maintaining more established connections to the servers. When the |
| "http-reuse" setting is set to "always", it is recommended to reduce this |
| value so as not to mix too many different clients over the same connection, |
| because if a client is slower than others, a mechanism known as "head of |
| line blocking" tends to cause cascade effect on download speed for all |
| clients sharing a connection (keep tune.h2.be.initial-window-size low in this |
| case). It is highly recommended not to increase this value; some might find |
| it optimal to run at low values (1..5 typically). |
| |
| tune.h2.fe.glitches-threshold <number> |
| Sets the threshold for the number of glitches on a frontend connection above |
| which that connection will automatically be killed. This allows misbehaving |
| connections to be killed automatically without having to write explicit |
| rules for them. The default value is zero, indicating that no threshold is |
| set so that no event will cause a connection to be closed. Beware that some |
| H2 clients may occasionally cause a few glitches over long-lasting |
| connections, so any non-zero value here should probably be in the hundreds |
| or thousands to be effective without affecting slightly bogus clients. |
| |
| See also: tune.h2.be.glitches-threshold, fc_glitches |
| |
| tune.h2.fe.initial-window-size <number> |
| Sets the HTTP/2 initial window size for incoming connections, which is the |
| number of bytes the client can upload before waiting for an acknowledgment |
| from HAProxy. This setting only affects payload contents (i.e. the body of |
| POST requests), not headers. When not set, the common default value set by |
| tune.h2.initial-window-size applies. It can make sense to increase this value |
| to allow faster uploads. The default value of 65536 allows up to 5 Mbps of |
| bandwidth per client over a 100 ms ping time, and 500 Mbps for 1 ms ping |
| time. It doesn't affect resource usage. Using too large values may cause |
| clients to experience a lack of responsiveness if pages are accessed in |
| parallel to large uploads. |
| |
| See also: tune.h2.initial-window-size. |
| |
| tune.h2.fe.max-concurrent-streams <number> |
| Sets the HTTP/2 maximum number of concurrent streams per incoming connection |
| (i.e. the number of outstanding requests on a single connection from a |
| client). When not set, the default set by tune.h2.max-concurrent-streams |
| applies. A larger value than the default 100 may sometimes slightly improve |
| the page load time for complex sites with lots of small objects over high |
| latency networks but can also result in using more memory by allowing a |
| client to allocate more resources at once. The default value of 100 is |
| generally good and it is recommended not to change this value. |
| |
| tune.h2.fe.max-total-streams <number> |
| Sets the HTTP/2 maximum number of total streams processed per incoming |
| connection. Once this limit is reached, HAProxy will send a graceful GOAWAY |
| frame informing the client that it will close the connection after all |
| pending streams have been closed. In practice, clients tend to close as fast |
| as possible when receiving this, and to establish a new connection for next |
| requests. Doing this is sometimes useful and desired in situations where |
| clients stay connected for a very long time and cause some imbalance inside a |
| farm. For example, in some highly dynamic environments, it is possible that |
| new load balancers are instantiated on the fly to adapt to a load increase, |
| and that once the load goes down they should be stopped without breaking |
| established connections. By setting a limit here, the connections will have |
| a limited lifetime and will be frequently renewed, with some possibly being |
| established to other nodes, so that existing resources are quickly released. |
| |
| It's important to understand that there is an implicit relation between this |
| limit and "tune.h2.fe.max-concurrent-streams" above. Indeed, HAProxy will |
| always accept to process any possibly pending streams that might be in flight |
| between the client and the frontend, so the advertised limit will always |
| automatically be raised by the value configured in max-concurrent-streams, |
| and this value will serve as a hard limit above which a violation by a non- |
| compliant client will result in the connection being closed. Thus when |
| counting the number of requests per connection from the logs, any number |
| between max-total-streams and (max-total-streams + max-concurrent-streams) |
| may be observed depending on how fast streams are created by the client. |
| |
| The default value is zero, which enforces no limit beyond those implied by |
| the protocol (2^30 ~= 1.07 billion). Values around 1000 may already cause |
| frequent connection renewal without causing any perceptible latency to most |
| clients. Setting it too low may result in an increase of CPU usage due to |
| frequent TLS reconnections, in addition to increased page load time. Please |
| note that some load testing tools do not support reconnections and may report |
| errors with this setting; as such it may be needed to disable it when running |
| performance benchmarks. See also "tune.h2.fe.max-concurrent-streams". |
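| |
| As an illustrative sketch (the figures are arbitrary), a frontend H2 tuning |
| forcing clients to renew their connections after roughly 1000 requests |
| could look like this: |
| |
| Example : |
| global |
|     tune.h2.fe.max-concurrent-streams 100 |
|     tune.h2.fe.max-total-streams 1000 |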
| |
| tune.h2.header-table-size <number> |
| Sets the HTTP/2 dynamic header table size. It defaults to 4096 bytes and |
| cannot be larger than 65536 bytes. A larger value may help certain clients |
| send more compact requests, depending on their capabilities. This amount of |
| memory is consumed for each HTTP/2 connection. It is recommended not to |
| change it. |
| |
| tune.h2.initial-window-size <number> |
| Sets the default value for the HTTP/2 initial window size, on both incoming |
| and outgoing connections. This value is used for incoming connections when |
| tune.h2.fe.initial-window-size is not set, and by outgoing connections when |
| tune.h2.be.initial-window-size is not set. The default value is 65536, which |
| for uploads roughly allows up to 5 Mbps of bandwidth per client over a |
| network showing a 100 ms ping time, or 500 Mbps over a 1-ms local network. |
| Given that changing the default value will both increase upload speeds and |
| cause more unfairness between clients on downloads, it is recommended to |
| instead use the side-specific settings tune.h2.fe.initial-window-size and |
| tune.h2.be.initial-window-size. |
| |
| tune.h2.max-concurrent-streams <number> |
| Sets the default HTTP/2 maximum number of concurrent streams per connection |
| (i.e. the number of outstanding requests on a single connection). This value |
| is used for incoming connections when tune.h2.fe.max-concurrent-streams is |
| not set, and for outgoing connections when tune.h2.be.max-concurrent-streams |
| is not set. The default value is 100. The impact varies depending on the side |
| so please see the two settings above for more details. It is recommended not |
| to use this setting and to switch to the per-side ones instead. A value of |
| zero disables the limit so a single client may create as many streams as |
| allocatable by HAProxy. It is highly recommended not to change this value. |
| |
| tune.h2.max-frame-size <number> |
| Sets the HTTP/2 maximum frame size that HAProxy announces it is willing to |
| receive to its peers. The default value is the largest between 16384 and the |
| buffer size (tune.bufsize). In any case, HAProxy will not announce support |
| for frame sizes larger than buffers. The main purpose of this setting is to |
| allow the maximum frame size to be limited when using large buffers. Too |
| large frame sizes might have a performance impact or cause some peers to |
| misbehave. It is highly recommended not to change this value. |
| |
| tune.http.cookielen <number> |
| Sets the maximum length of captured cookies. This is the maximum value that |
| the "capture cookie xxx len yyy" will be allowed to take, and any upper value |
| will automatically be truncated to this one. It is important not to set too |
| high a value because all cookie captures still allocate this size whatever |
| their configured value (they share a same pool). This value is per request |
| and per response, so the memory allocated is twice this value per connection. |
| When not specified, the limit is set to 63 characters. It is recommended not |
| to change this value. |
| |
| tune.http.logurilen <number> |
| Sets the maximum length of request URI in logs. This prevents truncating long |
| request URIs with valuable query strings in log lines. This is not related |
| to syslog limits. If you increase this limit, you may also increase the |
| 'log ... len yyy' parameter. Your syslog daemon may also need specific |
| configuration directives too. |
| The default value is 1024. |
| |
| tune.http.maxhdr <number> |
| Sets the maximum number of headers in a request. When a request comes with a |
| number of headers greater than this value (including the first line), it is |
| rejected with a "400 Bad Request" status code. Similarly, too large responses |
| are blocked with "502 Bad Gateway". The default value is 101, which is enough |
| for all usages, considering that the widely deployed Apache server uses the |
| same limit. It can be useful to push this limit further to temporarily allow |
| a buggy application to work until it gets fixed. The accepted range is |
| 1..32767. Keep in mind that each new header consumes 32 bits of memory for |
| each session, so don't push this limit too high. |
| |
| tune.idle-pool.shared { on | off } |
| Enables ('on') or disables ('off') sharing of idle connection pools between |
| threads for a same server. The default is to share them between threads in |
| order to minimize the number of persistent connections to a server, and to |
| optimize the connection reuse rate. But to help with debugging or when |
| suspecting a bug in HAProxy around connection reuse, it can be convenient to |
| forcefully disable this idle pool sharing between multiple threads, and force |
| this option to "off". The default is on. It is strongly recommended against |
| disabling this option without setting a conservative value on "pool-low-conn" |
| for all servers relying on connection reuse to achieve a high performance |
| level, otherwise connections might be closed very often as the thread count |
| increases. |
| |
| tune.idletimer <timeout> |
| Sets the duration after which HAProxy will consider that an empty buffer is |
| probably associated with an idle stream. This is used to optimally adjust |
| some packet sizes while forwarding large and small data alternatively. The |
| decision to use splice() or to send large buffers in SSL is modulated by this |
| parameter. The value is in milliseconds between 0 and 65535. A value of zero |
| means that HAProxy will not try to detect idle streams. The default is 1000, |
| which seems to correctly detect end user pauses (e.g. read a page before |
| clicking). There should be no reason for changing this value. Please check |
| tune.ssl.maxrecord below. |
| |
| tune.listener.default-shards { by-process | by-thread | by-group } |
| Normally, all "bind" lines will create a single shard, that is, a single |
| socket that all threads of the process will listen to. With many threads, |
| this is not very efficient, and may even induce some important overhead in |
| the kernel for updating the polling state or even distributing events to the |
| various threads. Modern operating systems support balancing of incoming |
| connections, a mechanism that will consist in permitting multiple sockets to |
| be bound to the same address and port, and to evenly distribute all incoming |
| connections to these sockets so that each thread only sees the connections |
| that are waiting in the socket it is bound to. This significantly reduces |
| kernel-side overhead and increases performance in the incoming connection |
| path. This is usually enabled in HAProxy using the "shards" setting on "bind" |
| lines, which defaults to 1, meaning that each listener will be unique in the |
| process. On systems with many processors, it may be more convenient to change |
| the default setting to "by-thread" in order to always create one listening |
| socket per thread, or "by-group" in order to always create one listening |
| socket per thread group. Be careful about the file descriptor usage with |
| "by-thread" as each listener will need as many sockets as there are threads. |
| Also some operating systems (e.g. FreeBSD) are limited to no more than 256 |
| sockets on a same address. Note that "by-group" will remain equivalent to |
| "by-process" for default configurations involving a single thread group, and |
| will fall back to sharing the same socket on systems that do not support this |
| mechanism. The default is "by-group" with a fallback to "by-process" for |
| systems or socket families that do not support multiple bindings. |
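| |
| As a minimal sketch (whether this is beneficial depends on the machine and |
| the operating system), one listening socket per thread can be requested |
| globally for all "bind" lines: |
| |
| Example : |
| global |
|     tune.listener.default-shards by-thread |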
| |
| tune.listener.multi-queue { on | fair | off } |
| Enables ('on' / 'fair') or disables ('off') the listener's multi-queue accept |
| which spreads the incoming traffic to all threads a "bind" line is allowed to |
| run on instead of taking them for itself. This provides a smoother traffic |
| distribution and scales much better, especially in environments where threads |
| may be unevenly loaded due to external activity (network interrupts colliding |
| with one thread for example). The default mode, "on", optimizes the choice of |
| a thread by picking in a sample the one with the fewest connections. It is |
| often the best choice when connections are long-lived as it manages to keep |
| all threads busy. A second mode, "fair", instead cycles through all threads |
| regardless of their instant load level. It can be better suited for short- |
| lived connections, or on machines with very large numbers of threads where |
| the probability of finding the least loaded thread with the first mode is low. |
| Finally it is possible to forcefully disable the redistribution mechanism |
| using "off" for troubleshooting, or for situations where connections are |
| short-lived and it is estimated that the operating system already provides a |
| good enough distribution. The default is "on". |
| |
| tune.lua.forced-yield <number> |
| This directive forces the Lua engine to execute a yield each <number> of |
| instructions executed. This permits interrupting a long script and allows the |
| HAProxy scheduler to process other tasks like accepting connections or |
| forwarding traffic. The default value is 10000 instructions. If HAProxy often |
| executes some Lua code but more responsiveness is required, this value can be |
| lowered. If the Lua code is quite long and its result is absolutely required |
| to process the data, the <number> can be increased. |
| |
| tune.lua.maxmem |
| Sets the maximum amount of RAM in megabytes per process usable by Lua. By |
| default it is zero which means unlimited. It is important to set a limit to |
| ensure that a bug in a script will not result in the system running out of |
| memory. |
| |
| tune.lua.session-timeout <timeout> |
| This is the execution timeout for the Lua sessions. This is useful for |
| preventing infinite loops or spending too much time in Lua. This timeout |
| counts only the pure Lua runtime. If the Lua does a sleep, the sleep is not |
| taken into account. The default timeout is 4s. |
| |
| tune.lua.burst-timeout <timeout> |
| The "burst" execution timeout applies to any Lua handler. If the handler |
| fails to finish or yield before timeout is reached, it will be aborted to |
| prevent thread contention, to prevent traffic from not being served for too |
| long, and ultimately to prevent the process from crashing because of the |
| watchdog kicking in. Unlike other lua timeouts which are yield-cumulative, |
| burst-timeout will ensure that the time spent in a single lua execution |
| window does not exceed the configured timeout. |
| |
| Yielding here means that the lua execution is effectively interrupted |
| either through an explicit call to a lua-yielding function such as |
| core.(m)sleep() or core.yield(), or following an automatic forced-yield |
| (see tune.lua.forced-yield) and that it will be resumed later when the |
| related task is set for rescheduling. Not all lua handlers may yield: we have |
| to make a distinction between yieldable handlers and unyieldable handlers. |
| |
| For yieldable handlers (tasks, actions..), reaching the timeout means |
| "tune.lua.forced-yield" might be too high for the system, reducing it |
| could improve the situation, but it could also be a good idea to check if |
| adding manual yields at some key points within the lua function helps or not. |
| It may also indicate that the handler is spending too much time in a specific |
| lua library function that cannot be interrupted. |
| |
| For unyieldable handlers (lua converters, sample fetches), it could simply |
| indicate that the handler is doing too much computation, which could result |
| from an improper design given that such handlers, which often block the |
| request execution flow, are expected to terminate quickly to allow the |
| request processing to go through. A common resolution approach here would be |
| to try to better optimize the lua function for speed since decreasing |
| "tune.lua.forced-yield" won't help. |
| |
| This timeout only counts the pure Lua runtime. If the Lua does a core.sleep, |
| the sleeping time is not taken into account. The default timeout is 1000ms. |
| |
| Note: if a lua GC cycle is initiated from the handler (either explicitly |
| requested or automatically triggered by lua after some time), the GC cycle |
| time will also be accounted for. |
| |
| Indeed, there is no way to deduce the GC cycle time, so this could lead to |
| some false positives on saturated systems (where GC is having a hard time |
| catching up and consumes most of the available execution runtime). If it were |
| to be the case, here are some resolution leads: |
| |
| - checking if the script could be optimized to reduce lua memory footprint |
| - fine-tuning lua GC parameters and / or requesting manual GC cycles |
| (see: https://www.lua.org/manual/5.4/manual.html#pdf-collectgarbage) |
| - increasing tune.lua.burst-timeout |
| |
| Setting value to 0 completely disables this protection. |
| |
| tune.lua.service-timeout <timeout> |
| This is the execution timeout for the Lua services. This is useful for |
| preventing infinite loops or spending too much time in Lua. This timeout |
| counts only the pure Lua runtime. If the Lua does a sleep, the sleep is not |
| taken into account. The default timeout is 4s. |
| |
| tune.lua.task-timeout <timeout> |
| Purpose is the same as "tune.lua.session-timeout", but this timeout is |
| dedicated to the tasks. By default, this timeout isn't set because a task may |
| remain alive for the whole lifetime of HAProxy, for example a task used to |
| check servers. |
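| |
| As an illustrative sketch (the values are arbitrary), the various Lua |
| execution timeouts can be adjusted together in the global section: |
| |
| Example : |
| global |
|     tune.lua.session-timeout 10s |
|     tune.lua.service-timeout 10s |
|     tune.lua.task-timeout 30s |
|     tune.lua.burst-timeout 500ms |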
| |
| tune.lua.log.loggers { on | off } |
| Enables ('on') or disables ('off') logging the output of LUA scripts via the |
| loggers applicable to the current proxy, if any. |
| |
| Defaults to 'on'. |
| |
| tune.lua.log.stderr { on | auto | off } |
| Enables ('on') or disables ('off') logging the output of LUA scripts via |
| stderr. |
| When set to 'auto', logging via stderr is conditionally 'on' if any of: |
| |
| - tune.lua.log.loggers is set to 'off' |
| - the script is executed in a non-proxy context with no global logger |
| - the script is executed in a proxy context with no logger attached |
| |
| Please note that, when enabled, this logging is in addition to the logging |
| configured via tune.lua.log.loggers. |
| |
| Defaults to 'on'. |
| |
| tune.maxaccept <number> |
| Sets the maximum number of consecutive connections a process may accept in a |
| row before switching to other work. In single process mode, higher numbers |
| used to give better performance at high connection rates, though this is not |
| the case anymore with the multi-queue. This value applies individually to |
| each listener, so that the number of processes a listener is bound to is |
| taken into account. This value defaults to 4 which showed best results. If a |
| significantly higher value was inherited from an ancient config, it might be |
| worth removing it as it will both increase performance and lower response |
| time. In multi-process mode, it is divided by twice the number of processes |
| the listener is bound to. Setting this value to -1 completely disables the |
| limitation. It should normally not be needed to tweak this value. |
| |
| tune.maxpollevents <number> |
| Sets the maximum amount of events that can be processed at once in a call to |
| the polling system. The default value is adapted to the operating system. It |
| has been noticed that reducing it below 200 tends to slightly decrease |
| latency at the expense of network bandwidth, and increasing it above 200 |
| tends to trade latency for slightly increased bandwidth. |
| |
| tune.maxrewrite <number> |
| Sets the reserved buffer space to this size in bytes. The reserved space is |
| used for header rewriting or appending. The first reads on sockets will never |
| fill more than bufsize-maxrewrite. Historically it has defaulted to half of |
| bufsize, though that does not make much sense since there are rarely large |
| numbers of headers to add. Setting it too high prevents processing of large |
| requests or responses. Setting it too low prevents addition of new headers |
| to already large requests or to POST requests. It is generally wise to set it |
| to about 1024. It is automatically readjusted to half of bufsize if it is |
| larger than that. This means you don't have to worry about it when changing |
| bufsize. |
| |
| tune.memory.hot-size <number> |
| Sets the per-thread amount of memory that will be kept hot in the local cache |
| and will never be recoverable by other threads. Access to this memory is very |
| fast (lockless), and having enough is critical to maintain a good performance |
| level under extreme thread contention. The value is expressed in bytes, and |
| the default value is configured at build time via CONFIG_HAP_POOL_CACHE_SIZE |
| which defaults to 524288 (512 kB). A larger value may increase performance in |
| some usage scenarios, especially when performance profiles show that memory |
| allocation is stressed a lot. Experience shows that a good value sits between |
| once and twice the per-CPU-core L2 cache size. Too large values will have a
| negative impact on performance by making inefficient use of the L3 caches in |
| the CPUs, and will consume larger amounts of memory. It is recommended not to |
| change this value, or to proceed in small increments. In order to completely |
| disable the per-thread CPU caches, using a very small value could work, but |
| it is better to use "-dMno-cache" on the command-line. |
| |
| tune.pattern.cache-size <number> |
| Sets the size of the pattern lookup cache to <number> entries. This is an LRU |
| cache which remembers previous lookups and their results. It is used by ACLs
| and maps on slow pattern lookups, namely the ones using the "sub", "reg", |
| "dir", "dom", "end", "bin" match methods as well as the case-insensitive |
| strings. It applies to pattern expressions which means that it will be able |
| to memorize the result of a lookup among all the patterns specified on a |
| configuration line (including all those loaded from files). It automatically |
| invalidates entries which are updated using HTTP actions or on the CLI. The |
| default cache size is set to 10000 entries, which limits its footprint to |
| about 5 MB per process/thread on 32-bit systems and 8 MB per process/thread |
| on 64-bit systems, as caches are thread/process local. There is a very low |
| risk of collision in this cache, which is in the order of the size of the |
| cache divided by 2^64. Typically, at 10000 requests per second with the |
| default cache size of 10000 entries, there's 1% chance that a brute force |
| attack could cause a single collision after 60 years, or 0.1% after 6 years. |
| This is considered much lower than the risk of a memory corruption caused by |
| aging components. If this is not acceptable, the cache can be disabled by |
| setting this parameter to 0. |
| |
| tune.peers.max-updates-at-once <number> |
| Sets the maximum number of stick-table updates that haproxy will try to |
| process at once when sending messages. Retrieving the data for these updates |
| requires some locking operations which can be CPU intensive on highly |
| threaded machines if unbound, and may also increase the traffic latency |
| during the initial batched transfer between an older and a newer process. |
| Conversely low values may also incur higher CPU overhead, and take longer |
| to complete. The default value is 200 and it is suggested not to change it. |
| |
| tune.pipesize <number> |
| Sets the kernel pipe buffer size to this size (in bytes). By default, pipes
| are left at the system's default size. Sometimes when using TCP splicing, it
| can improve performance to increase pipe sizes, especially if it is
| suspected that pipes are not filled and that many calls to splice() are |
| performed. This has an impact on the kernel's memory footprint, so this must |
| not be changed if impacts are not understood. |
| |
| tune.pool-high-fd-ratio <number> |
| This setting sets the max number of file descriptors (in percentage) used by |
| HAProxy globally against the maximum number of file descriptors HAProxy can |
| use before we start killing idle connections when we can't reuse a connection |
| and we have to create a new one. The default is 25 (one quarter of the file |
| descriptors will mean that roughly half of the maximum front connections can
| keep an idle connection behind, anything beyond this probably doesn't make |
| much sense in the general case when targeting connection reuse). |
| |
| tune.pool-low-fd-ratio <number> |
| This setting sets the max number of file descriptors (in percentage) used by |
| HAProxy globally against the maximum number of file descriptors HAProxy can |
| use before we stop putting connections into the idle pool for reuse. The
| default is 20. |
| |
| tune.quic.frontend.conn-tx-buffers.limit <number> |
| This setting defines the maximum number of buffers allocated for a QUIC
| connection on data emission. By default, it is set to 30. QUIC buffers are
| drained on ACK reception. This setting has a direct impact on the throughput
| and memory consumption and can be adjusted according to an estimated
| round-trip time. Each buffer is of size tune.bufsize.
| |
| tune.quic.frontend.max-idle-timeout <timeout> |
| Sets the QUIC max_idle_timeout transport parameter in milliseconds for
| frontends. It determines the period of time after which a connection is
| silently closed if it has remained inactive for an effective period of time
| deduced from the two max_idle_timeout values announced by the two endpoints:
| - the minimum of the two values if both are not null, |
| - the maximum if only one of them is not null, |
| - if both values are null, this feature is disabled. |
| |
| The default value is 30000. |
| |
| tune.quic.frontend.max-streams-bidi <number> |
| Sets the QUIC initial_max_streams_bidi transport parameter for frontends. |
| This is the initial maximum number of bidirectional streams the remote peer |
| will be authorized to open. This determines the number of concurrent client |
| requests. |
| |
| The default value is 100. |
| |
| tune.quic.max-frame-loss <number> |
| Sets the limit for which a single QUIC frame can be marked as lost. If |
| exceeded, the connection is considered as failing and is closed immediately. |
| |
| The default value is 10. |
| |
| tune.quic.reorder-ratio <0..100, in percent> |
| The ratio applied to the calculated packet reordering threshold. Setting it
| too small may trigger spurious packet loss detection.
| |
| The default value is 50. |
| |
| tune.quic.retry-threshold <number> |
| Dynamically enables the Retry feature for all the configured QUIC listeners |
| as soon as this number of half open connections is reached. A half open |
| connection is a connection whose handshake has not yet successfully completed
| nor failed. To be functional this setting needs a cluster secret to
| be set, if not it will be silently ignored (see "cluster-secret" setting). |
| This setting will be also silently ignored if the use of QUIC Retry was |
| forced (see "quic-force-retry"). |
| |
| The default value is 100. |
| |
| See https://www.rfc-editor.org/rfc/rfc9000.html#section-8.1.2 for more |
| information about QUIC retry. |
| |
| tune.quic.socket-owner { listener | connection } |
| Specifies how QUIC connections will use socket for receive/send operations. |
| Connections can share listener socket or each connection can allocate its |
| own socket. |
| |
| When the default "connection" value is set, a dedicated socket will be
| allocated for every QUIC connection. This is the preferred option to achieve
| the best performance with large QUIC traffic. This is also the only way to
| ensure soft-stop is conducted properly without data loss for QUIC connections |
| and cases of transient errors during sendto() operation are handled |
| efficiently. However, this relies on some advanced features from the UDP |
| network stack. If your platform is deemed not compatible, haproxy will |
| automatically switch to "listener" mode on startup. Please note that QUIC |
| listeners running on privileged ports may require to run as uid 0, or some |
| OS-specific tuning to permit the target uid to bind such ports, such as |
| system capabilities. See also the "setcap" global directive. |
| |
| The "listener" value indicates that QUIC transfers will occur on the shared |
| listener socket. This option can be a good compromise for small traffic as it |
| allows to reduce FD consumption. However, performance won't be optimal due to
| a higher CPU usage if listeners are shared across many threads or if a large
| number of QUIC connections are used simultaneously.
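|
| For example, a global section preferring the shared listener sockets and a
| shorter frontend idle timeout (the values and the choice itself being purely
| illustrative) could contain:
|
|   global
|       tune.quic.socket-owner listener
|       tune.quic.frontend.max-idle-timeout 10000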
| |
| tune.rcvbuf.client <number> |
| tune.rcvbuf.server <number> |
| Forces the kernel socket receive buffer size on the client or the server side |
| to the specified value in bytes. This value applies to all TCP/HTTP frontends |
| and backends. It should normally never be set, and the default size (0) lets |
| the kernel auto-tune this value depending on the amount of available memory. |
| However it can sometimes help to set it to very low values (e.g. 4096) in |
| order to save kernel memory by preventing it from buffering too large amounts |
| of received data. Lower values will significantly increase CPU usage though. |
| |
| tune.recv_enough <number> |
| HAProxy uses some hints to detect that a short read indicates the end of the |
| socket buffers. One of them is that a read returns more than <recv_enough> |
| bytes, which defaults to 10136 (7 segments of 1448 each). This default value |
| may be changed by this setting to better deal with workloads involving lots |
| of short messages such as telnet or SSH sessions. |
| |
| tune.runqueue-depth <number> |
| Sets the maximum number of tasks that can be processed at once when running
| tasks. The default value depends on the number of threads but sits between 35 |
| and 280, which tend to show the highest request rates and lowest latencies. |
| Increasing it may incur latency when dealing with I/Os, while making it too
| small incurs extra overhead. Higher thread counts benefit from lower values.
| When experimenting with much larger values, it may be useful to also enable |
| tune.sched.low-latency and possibly tune.fd.edge-triggered to limit the |
| maximum latency to the lowest possible. |
| |
| tune.sched.low-latency { on | off } |
| Enables ('on') or disables ('off') the low-latency task scheduler. By default |
| HAProxy processes tasks from several classes one class at a time as this is |
| the most efficient. But when running with large values of tune.runqueue-depth |
| this can have a measurable effect on request or connection latency. When this |
| low-latency setting is enabled, tasks of lower priority classes will always |
| be executed before other ones if they exist. This permits lowering the
| maximum latency experienced by new requests or connections in the middle of |
| massive traffic, at the expense of a higher impact on this large traffic. |
| For regular usage it is better to leave this off. The default value is off. |
| |
| tune.sndbuf.client <number> |
| tune.sndbuf.server <number> |
| Forces the kernel socket send buffer size on the client or the server side to |
| the specified value in bytes. This value applies to all TCP/HTTP frontends |
| and backends. It should normally never be set, and the default size (0) lets |
| the kernel auto-tune this value depending on the amount of available memory. |
| However it can sometimes help to set it to very low values (e.g. 4096) in |
| order to save kernel memory by preventing it from buffering too large amounts |
| of received data. Lower values will significantly increase CPU usage though. |
| Another use case is to prevent write timeouts with extremely slow clients due |
| to the kernel waiting for a large part of the buffer to be read before |
| notifying HAProxy again. |
| |
| tune.ssl.cachesize <number> |
| Sets the size of the global SSL session cache, in a number of blocks. A block |
| is large enough to contain an encoded session without peer certificate. An |
| encoded session with peer certificate is stored in multiple blocks depending |
| on the size of the peer certificate. A block uses approximately 200 bytes of |
| memory (based on `sizeof(struct sh_ssl_sess_hdr) + SHSESS_BLOCK_MIN_SIZE` |
| calculation used for `shctx_init` function). The default value may be forced |
| at build time, otherwise defaults to 20000. When the cache is full, the most |
| idle entries are purged and reassigned. Higher values reduce the occurrence |
| of such a purge, hence the number of CPU-intensive SSL handshakes by ensuring |
| that all users keep their session as long as possible. All entries are |
| pre-allocated upon startup. Setting this value to 0 disables the SSL session |
| cache. |
| |
| tune.ssl.capture-buffer-size <number> |
| tune.ssl.capture-cipherlist-size <number> (deprecated) |
| Sets the maximum size of the buffer used for capturing client hello cipher |
| list, extensions list, elliptic curves list and elliptic curve point |
| formats. If the value is 0 (default value) the capture is disabled, |
| otherwise a buffer is allocated for each SSL/TLS connection. |
| |
| tune.ssl.default-dh-param <number> |
| Sets the maximum size of the Diffie-Hellman parameters used for generating |
| the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The |
| final size will try to match the size of the server's RSA (or DSA) key (e.g.
| a 2048-bit temporary DH key for a 2048-bit RSA key), but will not exceed
| this maximum value. Only 1024 or higher values are allowed. Higher values |
| will increase the CPU load, and values greater than 1024 bits are not |
| supported by Java 7 and earlier clients. This value is not used if static |
| Diffie-Hellman parameters are supplied either directly in the certificate |
| file or by using the ssl-dh-param-file parameter. |
| If there is neither a default-dh-param nor a ssl-dh-param-file defined, and |
| if the server's PEM file of a given frontend does not specify its own DH |
| parameters, then DHE ciphers will be unavailable for this frontend. |
| |
| tune.ssl.force-private-cache |
| This option disables SSL session cache sharing between all processes. It |
| should normally not be used since it will force many renegotiations due to |
| clients hitting a random process. But it may be required on some operating |
| systems where none of the SSL cache synchronization methods can be used. In
| this case, adding a first layer of hash-based load balancing before the SSL |
| layer might limit the impact of the lack of session sharing. |
| |
| tune.ssl.hard-maxrecord <number> |
| Sets the maximum amount of bytes passed to SSL_write() at any time. Default |
| value 0 means there is no limit. In contrast to tune.ssl.maxrecord, this
| setting will not be adjusted dynamically. Smaller records may decrease
| throughput, but may be required when dealing with low-footprint clients. |
| |
| tune.ssl.keylog { on | off } |
| This option activates the logging of the TLS keys. It should be used with |
| care as it will consume more memory per SSL session and could decrease |
| performance. This is disabled by default.
| |
| These sample fetches should be used to generate the SSLKEYLOGFILE that is |
| required to decipher traffic with Wireshark.
| |
| https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format |
| |
| The SSLKEYLOG is a series of lines which are formatted this way: |
| |
| <Label> <space> <ClientRandom> <space> <Secret> |
| |
| The ClientRandom is provided by the %[ssl_fc_client_random,hex] sample |
| fetch, while the Secret and the Label can be found in the table below. You
| need to generate a SSLKEYLOGFILE with all the labels in this table.
|
| The following sample fetches return hexadecimal strings and do not need to
| be converted.
| |
| SSLKEYLOGFILE Label | Sample fetches for the Secrets |
| --------------------------------|----------------------------------------- |
| CLIENT_EARLY_TRAFFIC_SECRET | %[ssl_fc_client_early_traffic_secret] |
| CLIENT_HANDSHAKE_TRAFFIC_SECRET | %[ssl_fc_client_handshake_traffic_secret] |
| SERVER_HANDSHAKE_TRAFFIC_SECRET | %[ssl_fc_server_handshake_traffic_secret] |
| CLIENT_TRAFFIC_SECRET_0 | %[ssl_fc_client_traffic_secret_0] |
| SERVER_TRAFFIC_SECRET_0 | %[ssl_fc_server_traffic_secret_0] |
| EXPORTER_SECRET | %[ssl_fc_exporter_secret] |
| EARLY_EXPORTER_SECRET | %[ssl_fc_early_exporter_secret] |
| |
| This is only available with OpenSSL 1.1.1 and above, and is useful with
| TLS 1.3 sessions.
| |
| If you want to generate the content of a SSLKEYLOGFILE with TLS < 1.3, you |
| only need this line: |
| |
| "CLIENT_RANDOM %[ssl_fc_client_random,hex] %[ssl_fc_session_key,hex]" |
| |
| tune.ssl.lifetime <timeout> |
| Sets how long a cached SSL session may remain valid. This time is expressed |
| in seconds and defaults to 300 (5 min). It is important to understand that it |
| does not guarantee that sessions will last that long, because if the cache is |
| full, the longest idle sessions will be purged despite their configured |
| lifetime. The real usefulness of this setting is to prevent sessions from |
| being used for too long. |
| |
| tune.ssl.maxrecord <number> |
| Sets the maximum amount of bytes passed to SSL_write() at the beginning of |
| the data transfer. Default value 0 means there is no limit. Over SSL/TLS, |
| the client can decipher the data only once it has received a full record. |
| With large records, it means that clients might have to download up to 16kB |
| of data before starting to process them. Limiting the value can improve page |
| load times on browsers located over high latency or low bandwidth networks. |
| It is suggested to find optimal values which fit into 1 or 2 TCP segments |
| (generally 1448 bytes over Ethernet with TCP timestamps enabled, or 1460 when |
| timestamps are disabled), keeping in mind that SSL/TLS add some overhead. |
| Typical values of 1419 and 2859 gave good results during tests. Use |
| "strace -e trace=write" to find the best value. HAProxy will automatically |
| switch to this setting after an idle stream has been detected (see |
| tune.idletimer above). See also tune.ssl.hard-maxrecord. |
| |
| tune.ssl.ssl-ctx-cache-size <number> |
| Sets the size of the cache used to store generated certificates to <number> |
| entries. This is an LRU cache. Because generating an SSL certificate
| dynamically is expensive, they are cached. The default cache size is set to |
| 1000 entries. |
| |
| tune.ssl.ocsp-update.maxdelay <number> |
| Sets the maximum interval between two automatic updates of the same OCSP |
| response. This time is expressed in seconds and defaults to 3600 (1 hour). It |
| must be set to a higher value than "tune.ssl.ocsp-update.mindelay". See |
| option "ocsp-update" for more information about the auto update mechanism. |
| |
| tune.ssl.ocsp-update.mindelay <number> |
| Sets the minimum interval between two automatic updates of the same OCSP |
| response. This time is expressed in seconds and defaults to 300 (5 minutes). |
| It is particularly useful for OCSP responses that do not have explicit
| expiration times. It must be set to a lower value than |
| "tune.ssl.ocsp-update.maxdelay". See option "ocsp-update" for more |
| information about the auto update mechanism. |
| |
| tune.stick-counters <number> |
| Sets the number of stick-counters that may be tracked at the same time by a |
| connection or a request via "track-sc*" actions in "tcp-request" or |
| "http-request" rules. The default value is set at build time by the macro |
| MAX_SESS_STK_CTR, and defaults to 3. With this setting it is possible to |
| change the value and ignore the one passed at build time. Increasing this |
| value may be needed when porting complex configurations to haproxy, but users |
| are warned against the costs: each entry takes 16 bytes per connection and |
| 16 bytes per request, all of which need to be allocated and zeroed for all |
| requests even when not used. As such a value of 10 will inflate the memory |
| consumption per request by 320 bytes and will cause this memory to be erased |
| for each request, which does have measurable CPU impacts. Conversely, when |
| no "track-sc" rules are used, the value may be lowered (0 being valid to |
| entirely disable stick-counters). |
| |
| tune.vars.global-max-size <size> |
| tune.vars.proc-max-size <size> |
| tune.vars.reqres-max-size <size> |
| tune.vars.sess-max-size <size> |
| tune.vars.txn-max-size <size> |
| These five tunes help to manage the maximum amount of memory used by the |
| variables system. "global" limits the overall amount of memory available for |
| all scopes. "proc" limits the memory for the process scope, "sess" limits the |
| memory for the session scope, "txn" for the transaction scope, and "reqres" |
| limits the memory for each request or response processing. |
| Memory accounting is hierarchical, meaning more coarse grained limits include |
| the finer grained ones: "proc" includes "sess", "sess" includes "txn", and |
| "txn" includes "reqres". |
| |
| For example, when "tune.vars.sess-max-size" is limited to 100, |
| "tune.vars.txn-max-size" and "tune.vars.reqres-max-size" cannot exceed |
| 100 either. If we create a variable "txn.var" that contains 100 bytes, |
| all available space is consumed. |
| Notice that exceeding the limits at runtime will not result in an error |
| message, but values might be cut off or corrupted. So make sure to accurately |
| plan for the amount of space needed to store all your variables. |
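|
| Purely as an illustration (the values below are arbitrary and simply respect
| the hierarchy described above), such limits could be set as follows:
|
|   global
|       tune.vars.global-max-size 1048576
|       tune.vars.proc-max-size 65536
|       tune.vars.sess-max-size 4096
|       tune.vars.txn-max-size 1024
|       tune.vars.reqres-max-size 1024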
| |
| tune.zlib.memlevel <number> |
| Sets the memLevel parameter in zlib initialization for each session. It |
| defines how much memory should be allocated for the internal compression |
| state. A value of 1 uses minimum memory but is slow and reduces compression |
| ratio, a value of 9 uses maximum memory for optimal speed. Can be a value |
| between 1 and 9. The default value is 8. |
| |
| tune.zlib.windowsize <number> |
| Sets the window size (the size of the history buffer) as a parameter of the |
| zlib initialization for each session. Larger values of this parameter result |
| in better compression at the expense of memory usage. Can be a value between |
| 8 and 15. The default value is 15. |
| |
| 3.3. Debugging |
| -------------- |
| |
| anonkey <key> |
| This sets the global anonymizing key to <key>, which must be a 32-bit number |
| between 0 and 4294967295. This is the key that will be used by default by CLI |
| commands when anonymized mode is enabled. This key may also be set at runtime |
| from the CLI command "set anon global-key". See also command line argument |
| "-dC" in the management manual. |
| |
| quick-exit |
| This speeds up the old process exit upon reload by skipping the releasing of |
| memory objects and listeners, since all of these are reclaimed by the |
| operating system at the process' death. The gains are only marginal (in the |
| order of a few hundred milliseconds for huge configurations at most). The |
| main target usage in fact is when a bug is spotted in the deinit() code, as |
| this allows to bypass it. It is better not to use this unless instructed to |
| do so by developers. |
| |
| quiet |
| Do not display any message during startup. It is equivalent to the command- |
| line argument "-q". |
| |
| zero-warning |
| When this option is set, HAProxy will refuse to start if any warning was |
| emitted while processing the configuration. It is highly recommended to set |
| this option on configurations that are not changed often, as it helps detect |
| subtle mistakes and keep the configuration clean and forward-compatible. Note |
| that "haproxy -c" will also report errors in such a case. This option is |
| equivalent to command line argument "-dW". |
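|
| For example, a configuration meant to fail fast on mistakes could combine
| these directives as follows (the anonymizing key is an arbitrary example):
|
|   global
|       zero-warning
|       anonkey 1234567890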
| |
| |
| 3.4. Userlists |
| -------------- |
| It is possible to control access to frontend/backend/listen sections or to |
| http stats by allowing only authenticated and authorized users. To do this, |
| it is required to create at least one userlist and to define users. |
| |
| userlist <listname> |
| Creates new userlist with name <listname>. Many independent userlists can be |
| used to store authentication & authorization data for independent customers. |
| |
| group <groupname> [users <user>,<user>,(...)] |
| Adds group <groupname> to the current userlist. It is also possible to |
| attach users to this group by using a comma separated list of names |
| preceded by the "users" keyword.
| |
| user <username> [password|insecure-password <password>] |
| [groups <group>,<group>,(...)] |
| Adds user <username> to the current userlist. Both secure (encrypted) and |
| insecure (unencrypted) passwords can be used. Encrypted passwords are |
| evaluated using the crypt(3) function, so depending on the system's |
| capabilities, different algorithms are supported. For example, modern Glibc |
| based Linux systems support MD5, SHA-256, SHA-512, and, of course, the |
| classic DES-based method of encrypting passwords. |
| |
| Attention: Be aware that using encrypted passwords might cause significantly |
| increased CPU usage, depending on the number of requests, and the algorithm |
| used. For any of the hashed variants, the password for each request must |
| be processed through the chosen algorithm, before it can be compared to the |
| value specified in the config file. Most current algorithms are deliberately |
| designed to be expensive to compute to achieve resistance against brute |
| force attacks. They do not simply salt/hash the clear text password once, |
| but thousands of times. This can quickly become a major factor in HAProxy's |
| overall CPU consumption! |
| |
| Example: |
| userlist L1 |
| group G1 users tiger,scott |
| group G2 users xdb,scott |
| |
| user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91 |
| user scott insecure-password elgato |
| user xdb insecure-password hello |
| |
| userlist L2 |
| group G1 |
| group G2 |
| |
| user tiger password $6$k6y3o.eP$JlKBx(...)xHSwRv6J.C0/D7cV91 groups G1 |
| user scott insecure-password elgato groups G1,G2 |
| user xdb insecure-password hello groups G2 |
| |
| Please note that both lists are functionally identical. |
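|
| The encrypted value expected by the "password" keyword can be generated
| outside of HAProxy. As an illustration, assuming the OpenSSL command line
| tool is available, the following command prints a SHA-512 crypt(3) hash
| suitable for the configuration above:
|
|   $ openssl passwd -6 elgato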
| |
| |
| 3.5. Peers |
| ---------- |
| It is possible to propagate entries of any data-types in stick-tables between |
| several HAProxy instances over TCP connections in a multi-master fashion. Each |
| instance pushes its local updates and insertions to remote peers. The pushed |
| values overwrite remote ones without aggregation. As an exception, the data |
| type "conn_cur" is never learned from peers, as it is supposed to reflect local |
| values. Earlier versions used to synchronize it and to cause negative values in |
| active-active setups, and always-growing values upon reloads or active-passive |
| switches because the local value would reflect more connections than locally |
| present. This information, however, is pushed so that monitoring systems can |
| watch it. |
| |
| Interrupted exchanges are automatically detected and recovered from the last |
| known point. In addition, during a soft restart, the old process connects to |
| the new one using such a TCP connection to push all its entries before the new |
| process tries to connect to other peers. That ensures very fast replication |
| during a reload; it typically takes a fraction of a second even for large
| tables. |
| |
| Note that Server IDs are used to identify servers remotely, so it is important |
| that configurations look similar or at least that the same IDs are forced on |
| each server on all participants. |
| |
| peers <peersect> |
| Creates a new peer list with name <peersect>. It is an independent section, |
| which is referenced by one or more stick-tables. |
| |
| bind [<address>]:port [param*] |
| bind /<path> [param*] |
| Defines the binding parameters of the local peer of this "peers" section. |
| Such lines are not supported with "peer" line in the same "peers" section. |
| |
| disabled |
| Disables a peers section. It disables both listening and any synchronization |
| related to this section. This is provided to disable synchronization of stick |
| tables without having to comment out all "peers" references. |
| |
| default-bind [param*] |
| Defines the binding parameters for the local peer, except its address.
| |
| default-server [param*] |
| Change default options for a server in a "peers" section. |
| |
| Arguments: |
| <param*> is a list of parameters for this server. The "default-server" |
| keyword accepts an important number of options and has a complete |
| section dedicated to it. In a peers section, the transport |
| parameters of a "default-server" line are supported. Please refer |
| to section 5 for more details, and the "server" keyword below in |
| this section for some of the restrictions. |
| |
| See also: "server" and section 5 about server options |
| |
| enabled |
| This re-enables a peers section which was previously disabled via the |
| "disabled" keyword. |
| |
| log <address> [len <length>] [format <format>] [sample <ranges>:<sample_size>] |
| <facility> [<level> [<minlevel>]] |
| "peers" sections support the same "log" keyword as for the proxies to |
| log information about the "peers" listener. See "log" option for proxies for |
| more details. |
| |
| peer <peername> [<address>]:port [param*] |
| peer <peername> /<path> [param*] |
| Defines a peer inside a peers section. |
| If <peername> is set to the local peer name (by default hostname, or forced |
| using "-L" command line option or "localpeer" global configuration setting), |
| HAProxy will listen for incoming remote peer connection on the provided |
| address. Otherwise, the address defines where to connect to in order to join |
| the remote peer, and <peername> is used at the protocol level to identify and |
| validate the remote peer on the server side. |
| |
| During a soft restart, local peer address is used by the old instance to |
| connect the new one and initiate a complete replication (teaching process). |
| |
| It is strongly recommended to have the exact same peers declaration on all |
| peers and to only rely on the "-L" command line argument or the "localpeer" |
| global configuration setting to change the local peer name. This makes it |
| easier to maintain coherent configuration files across all peers. |
| |
| You may want to reference some environment variables in the address |
| parameter, see section 2.3 about environment variables. |
| |
| Note: "peer" keyword may transparently be replaced by "server" keyword (see |
| "server" keyword explanation below). |
| |
| server <peername> [<address>:<port>] [param*] |
| server <peername> [/<path>] [param*] |
| As previously mentioned, "peer" keyword may be replaced by "server" keyword |
| with a support for all "server" parameters found in 5.2 paragraph that are |
| related to transport settings. If the underlying peer is local, the address |
| parameter must not be present; it must be provided on a "bind" line (see |
| "bind" keyword of this "peers" section). |
| |
| A number of "server" parameters are irrelevant for "peers" sections. Peers by |
| nature do not support dynamic host name resolution nor health checks, hence |
| parameters like "init_addr", "resolvers", "check", "agent-check", or "track" |
| are not supported. Similarly, there is no load balancing nor stickiness, thus |
| parameters such as "weight" or "cookie" have no effect. |
| |
| Example: |
| # The old way. |
| peers mypeers |
| peer haproxy1 192.168.0.1:1024 |
| peer haproxy2 192.168.0.2:1024 |
| peer haproxy3 10.2.0.1:1024 |
| |
| backend mybackend |
| mode tcp |
| balance roundrobin |
| stick-table type ip size 20k peers mypeers |
| stick on src |
| |
| server srv1 192.168.0.30:80 |
| server srv2 192.168.0.31:80 |
| |
| Example: |
| peers mypeers |
| bind 192.168.0.1:1024 ssl crt mycerts/pem |
| default-server ssl verify none |
| server haproxy1 #local peer |
| server haproxy2 192.168.0.2:1024 |
| server haproxy3 10.2.0.1:1024 |
| |
| shards <shards> |
| |
| In some configurations, one would like to distribute the stick-table contents |
| to some peers in place of sending all the stick-table contents to each peer |
| declared in the "peers" section. In such cases, "shards" specifies the |
| number of peers involved in this stick-table contents distribution.
| See also "shard" server parameter. |
| |
| table <tablename> type {ip | integer | string [len <length>] | binary [len <length>]} |
| size <size> [expire <expire>] [nopurge] [store <data_type>]* |
| |
| Configure a stickiness table for the current section. This line is parsed |
| exactly the same way as the "stick-table" keyword in other sections, except
| for the "peers" argument which is not required here and with an additional
| mandatory first parameter to designate the stick-table. Contrary to other
| sections, there may be several "table" lines in "peers" sections (see also
| "stick-table" keyword). |
| |
| Also be aware of the fact that "peers" sections have their own stick-table
| namespaces to avoid collisions between identical stick-table names declared
| in different "peers" sections. This is internally handled by prepending the
| "peers" section name to the stick-table name, followed by a '/' character.
| If somewhere else in the configuration file you have to refer to such
| stick-tables declared in "peers" sections, you must use the prefixed version
| of the stick-table name as follows:
| |
| peers mypeers |
| peer A ... |
| peer B ... |
| table t1 ... |
| |
| frontend fe1 |
| tcp-request content track-sc0 src table mypeers/t1 |
| |
| It is also this prefixed version of the stick-table name which must be used
| to refer to such stick-tables through the CLI.
|
| Regarding the "peers" protocol, since only "peers" belonging to the same
| section may communicate with each other, there is no need for such a
| distinction, and several "peers" sections may declare stick-tables with the
| same name. It is the shorter version of the stick-table name which is sent
| over the network, with only a '/' character prefix to avoid name collisions
| between stick-tables declared as backends and stick-tables declared in
| "peers" sections, as in this unusual but supported configuration:
| |
| peers mypeers |
| peer A ... |
| peer B ... |
| table t1 type string size 10m store gpc0 |
| |
| backend t1 |
| stick-table type string size 10m store gpc0 peers mypeers |
| |
| Here "t1" table declared in "mypeers" section has "mypeers/t1" as global name. |
| "t1" table declared as a backend as "t1" as global name. But at peer protocol |
| level the former table is named "/t1", the latter is again named "t1". |
| |
| 3.6. Mailers |
| ------------ |
| It is possible to send email alerts when the state of servers changes. |
| If configured, email alerts are sent to each mailer that is configured
| in a mailers section. Email is sent to mailers using SMTP.
| |
| mailers <mailersect> |
| Creates a new mailer list with the name <mailersect>. It is an |
| independent section which is referenced by one or more proxies. |
| |
| mailer <mailername> <ip>:<port> |
| Defines a mailer inside a mailers section. |
| |
| Example: |
| mailers mymailers |
| mailer smtp1 192.168.0.1:587 |
| mailer smtp2 192.168.0.2:587 |
| |
| backend mybackend |
| mode tcp |
| balance roundrobin |
| |
| email-alert mailers mymailers |
| email-alert from test1@horms.org |
| email-alert to test2@horms.org |
| |
| server srv1 192.168.0.30:80 |
| server srv2 192.168.0.31:80 |
| |
| timeout mail <time> |
| Defines the time available for a mail/connection to be made and sent to
| the mail-server. If not defined the default value is 10 seconds. To allow
| for at least two SYN-ACK packets to be sent during initial TCP handshake it
| is advised to keep this value above 4 seconds.
| |
| Example: |
| mailers mymailers |
| timeout mail 20s |
| mailer smtp1 192.168.0.1:587 |
| |
| 3.7. Programs |
| ------------- |
| In master-worker mode, it is possible to launch external binaries with the
| master. These processes are called programs. These programs are launched and
| managed the same way as the workers.
|
| During a reload of HAProxy, those processes go through the same sequence as
| a worker:
| |
| - the master is re-executed |
| - the master sends a SIGUSR1 signal to the program |
| - if "option start-on-reload" is not disabled, the master launches a new |
| instance of the program |
| |
| During a stop, or restart, a SIGTERM is sent to the programs. |
| |
| program <name> |
| This is a new program section. This section will create an instance <name>
| which is visible in "show proc" on the master CLI. (See "9.4. Master CLI" in
| the management guide). |
| |
| command <command> [arguments*] |
| Define the command to start with optional arguments. The command is looked |
| up in the current PATH if it does not include an absolute path. This is a |
| mandatory option of the program section. Arguments containing spaces must |
| be enclosed in quotes or double quotes or be prefixed by a backslash. |
| |
| user <user name> |
| Changes the executed command user ID to the <user name> from /etc/passwd. |
| See also "group". |
| |
| group <group name> |
| Changes the executed command group ID to the <group name> from /etc/group. |
| See also "user". |
| |
| option start-on-reload |
| no option start-on-reload |
| Start (or not) a new instance of the program upon a reload of the master. |
| The default is to start a new instance. This option may only be used in a |
| program section. |
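|
| Example (the program name, binary and arguments below are only
| placeholders):
|
|   program my-agent
|       command /usr/local/bin/my-agent -f /etc/my-agent.conf
|       user nobody
|       group nogroup
|       option start-on-reload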
| |
| |
| 3.8. HTTP-errors |
| ---------------- |
| |
| It is possible to globally declare several groups of HTTP errors, to be |
| imported afterwards in any proxy section. The same group may be referenced
| in several places and can be fully or partially imported.
| |
| http-errors <name> |
| Create a new http-errors group with the name <name>. It is an independent |
| section that may be referenced by one or more proxies using its name. |
| |
| errorfile <code> <file> |
| Associate the contents of a file with an HTTP error code
| |
| Arguments : |
| <code> is the HTTP status code. Currently, HAProxy is capable of |
| generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, |
| 425, 429, 500, 501, 502, 503, and 504. |
| |
| <file> designates a file containing the full HTTP response. It is |
| recommended to follow the common practice of appending ".http" to |
| the filename so that people do not confuse the response with HTML |
| error pages, and to use absolute paths, since files are read |
| before any chroot is performed. |
| |
| Please refer to the "errorfile" keyword in section 4 for details.
| |
| Example: |
| http-errors website-1 |
| errorfile 400 /etc/haproxy/errorfiles/site1/400.http |
| errorfile 404 /etc/haproxy/errorfiles/site1/404.http |
| errorfile 408 /dev/null # work around Chrome pre-connect bug |
| |
| http-errors website-2 |
| errorfile 400 /etc/haproxy/errorfiles/site2/400.http |
| errorfile 404 /etc/haproxy/errorfiles/site2/404.http |
| errorfile 408 /dev/null # work around Chrome pre-connect bug |
| |
| 3.9. Rings |
| ---------- |
| |
| It is possible to globally declare ring-buffers, to be used as target for log |
| servers or traces. |
| |
| ring <ringname> |
| Creates a new ring-buffer with name <ringname>. |
| |
| backing-file <path> |
| This replaces the regular memory allocation by a RAM-mapped file to store the |
| ring. This can be useful for collecting traces or logs for post-mortem |
| analysis, without having to attach a slow client to the CLI. Newer contents |
| will automatically replace older ones so that the latest contents are always |
| available. The contents written to the ring will be visible in that file once |
| the process stops (most often they will even be seen very soon after but |
| there is no such guarantee since writes are not synchronous). |
| |
| When this option is used, the total storage area is reduced by the size of |
| the "struct ring" that starts at the beginning of the area, and that is |
| required to recover the area's contents. The file will be created with the |
| starting user's ownership, with mode 0600 and will be of the size configured |
| by the "size" directive. When the directive is parsed (thus even during |
| config checks), any existing non-empty file will first be renamed with the |
| extra suffix ".bak", and any previously existing file with suffix ".bak" will |
| be removed. This ensures that instant reload or restart of the process will |
| not wipe precious debugging information, and will leave time for an admin to |
| spot this new ".bak" file and to archive it if needed. As such, after a crash |
| the file designated by <path> will contain the freshest information, and if |
| the service is restarted, the "<path>.bak" file will have it instead. This |
| means that the total storage capacity required will be double of the ring |
| size. Failures to rotate the file are silently ignored, so placing the file |
| into a directory without write permissions will be sufficient to avoid the |
| backup file if not desired. |
| |
| WARNING: there are stability and security implications in using this feature. |
| First, backing the ring to a slow device (e.g. physical hard drive) may cause |
| perceptible slowdowns during accesses, and possibly even panics if too many |
| threads compete for accesses. Second, an external process modifying the area |
| could cause the haproxy process to crash or to overwrite some of its own |
| memory with traces. Third, if the file system fills up before the ring, |
| writes to the ring may cause the process to crash. |
| |
| The information present in this ring is structured and is NOT directly
| readable using a text editor (even though most of it looks barely readable).
| The output of this file is only intended for developers. |
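|
| For example (the ring name, path and size below are only illustrative), a
| file-backed ring dedicated to traces could be declared this way:
|
|   ring buf-traces
|       backing-file /var/lib/haproxy/ring-traces
|       size 134217728
|       format timed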
| |
| description <text> |
| The description is an optional description string of the ring. It will |
| appear on CLI. By default, <name> is reused to fill this field. |
| |
| format <format> |
| Format used to store events into the ring buffer. |
| |
| Arguments: |
| <format> is the log format used when generating syslog messages. It may be |
| one of the following : |
| |
| iso A message containing only the ISO date, followed by the text. |
| The PID, process name and system name are omitted. This is |
| designed to be used with a local log server. |
| |
| local Analogous to the rfc3164 syslog message format except that the
| hostname field is stripped. This is the default.
| Note: option "log-send-hostname" switches the default to |
| rfc3164. |
| |
| raw A message containing only the text. The level, PID, date, time, |
| process name and system name are omitted. This is designed to be |
| used in containers or during development, where the severity |
| only depends on the file descriptor used (stdout/stderr). This |
| is the default. |
| |
| rfc3164 The RFC3164 syslog message format. |
| (https://tools.ietf.org/html/rfc3164) |
| |
| rfc5424 The RFC5424 syslog message format. |
| (https://tools.ietf.org/html/rfc5424) |
| |
| short A message containing only a level between angle brackets such as |
| '<3>', followed by the text. The PID, date, time, process name |
| and system name are omitted. This is designed to be used with a |
| local log server. This format is compatible with what the systemd |
| logger consumes. |
| |
| priority A message containing only a level plus syslog facility between angle |
| brackets such as '<63>', followed by the text. The PID, date, time, |
| process name and system name are omitted. This is designed to be used |
| with a local log server. |
| |
| timed A message containing only a level between angle brackets such as |
| '<3>', followed by ISO date and by the text. The PID, process |
| name and system name are omitted. This is designed to be |
| used with a local log server. |
| |
| maxlen <length> |
| The maximum length of an event message stored into the ring, |
| including formatted header. If an event message is longer than |
| <length>, it will be truncated to this length. |
| |
| server <name> <address> [param*] |
| Used to configure a syslog TCP server to forward messages from the ring
| buffer. This supports all "server" parameters found in section 5.2. Some of
| these parameters are irrelevant for "ring" sections. Important point: there |
| is little reason to add more than one server to a ring, because all servers |
| will receive the exact same copy of the ring contents, and as such the ring |
| will progress at the speed of the slowest server. If one server does not |
| respond, it will prevent old messages from being purged and may block new |
| messages from being inserted into the ring. The proper way to send messages |
| to multiple servers is to use one distinct ring per log server, not to |
| attach multiple servers to the same ring. Note that specific server directive |
| "log-proto" is used to set the protocol used to send messages. |
| |
| size <size> |
| This is the optional size in bytes for the ring-buffer. Default value is |
| set to BUFSIZE. |
| |
| timeout connect <timeout> |
| Set the maximum time to wait for a connection attempt to a server to succeed. |
| |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| timeout server <timeout> |
| Set the maximum time for pending data staying into output buffer. |
| |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| Example: |
| global |
| log ring@myring local7 |
| |
| ring myring |
| description "My local buffer" |
| format rfc3164 |
| maxlen 1200 |
| size 32764 |
| timeout connect 5s |
| timeout server 10s |
| server mysyslogsrv 127.0.0.1:6514 log-proto octet-count |
| |
| 3.10. Log forwarding |
| --------------------
| |
| It is possible to declare one or multiple log forwarding sections. HAProxy
| will forward all received log messages to a list of log servers.
| |
| log-forward <name> |
| Creates a new log forwarder proxy identified as <name>. |
| |
| backlog <conns> |
| Give hints to the system about the approximate listen backlog size desired
| when accepting connections.
| |
| bind <addr> [param*] |
| Used to configure a stream log listener to receive messages to forward. |
| This supports the "bind" parameters found in 5.1 paragraph including |
| those about ssl but some statements such as "alpn" may be irrelevant for |
| syslog protocol over TCP. |
| Those listeners support both "Octet Counting" and "Non-Transparent-Framing" |
| modes as defined in rfc-6587. |
| |
| dgram-bind <addr> [param*] |
| Used to configure a datagram log listener to receive messages to forward. |
| Addresses must be in IPv4 or IPv6 form, followed by a port. This supports
| some of the "bind" parameters found in section 5.1, among which "interface",
| "namespace" or "transparent", the other ones being silently ignored as
| irrelevant for the UDP/syslog case.
| |
| log global |
| log <address> [len <length>] [format <format>] [sample <ranges>:<sample_size>] |
| <facility> [<level> [<minlevel>]] |
| Used to configure target log servers. See more details on proxies |
| documentation. |
| If no format is specified, HAProxy tries to keep the incoming log format.
| The configured facility is ignored, except if the incoming message does not
| carry a facility but one is mandatory in the outgoing format.
| If there is no timestamp available in the input format, but the field
| exists in the output format, HAProxy will use the local date.
| |
| Example: |
| global |
| log stderr format iso local7 |
| |
| ring myring |
| description "My local buffer" |
| format rfc5424 |
| maxlen 1200 |
| size 32764 |
| timeout connect 5s |
| timeout server 10s |
| # syslog tcp server |
| server mysyslogsrv 127.0.0.1:514 log-proto octet-count |
| |
| log-forward sylog-loadb |
| dgram-bind 127.0.0.1:1514 |
| bind 127.0.0.1:1514 |
| # all messages on stderr |
| log global |
| # all messages on local tcp syslog server |
| log ring@myring local0 |
| # load balance messages on 4 udp syslog servers |
| log 127.0.0.1:10001 sample 1:4 local0 |
| log 127.0.0.1:10002 sample 2:4 local0 |
| log 127.0.0.1:10003 sample 3:4 local0 |
| log 127.0.0.1:10004 sample 4:4 local0 |
| |
| maxconn <conns> |
| Fix the maximum number of concurrent connections on a log forwarder. |
| 10 is the default. |
| |
| timeout client <timeout> |
| Set the maximum inactivity time on the client side. |
| |
| 3.11. HTTPClient tuning |
| ----------------------- |
| |
| HTTPClient is an internal HTTP library which can be used by various
| subsystems, for example in Lua scripts. HTTPClient is not used in the data
| path; in other words it has nothing to do with HTTP traffic passing through
| HAProxy.
| |
| httpclient.resolvers.disabled <on|off> |
| Disable the DNS resolution of the httpclient. Prevent the creation of the |
| "default" resolvers section. |
| |
| Default value is off. |
| |
| httpclient.resolvers.id <resolvers id> |
| This option defines the resolvers section with which the httpclient will try |
| to resolve. |
| |
| Default option is the "default" resolvers ID. By default, if this option is |
| not used, it will simply disable the resolving if the section is not found. |
| |
| However, when this option is explicitly enabled it will trigger a |
| configuration error if it fails to load. |
| |
| httpclient.resolvers.prefer <ipv4|ipv6> |
| This option allows choosing which IP address family to prefer when
| resolving, which is convenient when IPv6 is not available on your network.
| The default is "ipv6".
| |
| httpclient.retries <number> |
| This option configures the number of retry attempts performed by the
| httpclient when a request fails. This does the same as the "retries"
| keyword in a backend.
| |
| Default value is 3. |
| |
| httpclient.ssl.ca-file <cafile> |
| This option defines the ca-file which should be used to verify the server |
| certificate. It takes the same parameters as the "ca-file" option on the |
| server line. |
| |
| By default and when this option is not used, the value is |
| "@system-ca" which tries to load the CA of the system. If it fails the SSL |
| will be disabled for the httpclient. |
| |
| However, when this option is explicitly enabled it will trigger a |
| configuration error if it fails. |
| |
| httpclient.ssl.verify [none|required] |
| Works the same way as the verify option on server lines. If set to 'none',
| server certificates are not verified. Default option is "required".
| |
| By default and when this option is not used, the value is |
| "required". If it fails the SSL will be disabled for the httpclient. |
| |
| However, when this option is explicitly enabled it will trigger a |
| configuration error if it fails. |
| |
| httpclient.timeout.connect <timeout> |
| Set the maximum time to wait for a connection attempt by default for the |
| httpclient. |
| |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| The default value is 5000ms. |
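|
| For example (the values below being purely illustrative), these settings
| could be combined in the global section as follows:
|
|   global
|       httpclient.resolvers.prefer ipv4
|       httpclient.retries 2
|       httpclient.ssl.verify none
|       httpclient.timeout.connect 10s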
| |
| 4. Proxies |
| ---------- |
| |
| Proxy configuration can be located in a set of sections : |
| - defaults [<name>] [ from <defaults_name> ] |
| - frontend <name> [ from <defaults_name> ] |
| - backend <name> [ from <defaults_name> ] |
| - listen <name> [ from <defaults_name> ] |
| |
| A "frontend" section describes a set of listening sockets accepting client |
| connections. |
| |
| A "backend" section describes a set of servers to which the proxy will connect |
| to forward incoming connections. |
| |
| A "listen" section defines a complete proxy with its frontend and backend |
| parts combined in one section. It is generally useful for TCP-only traffic. |
| |
| A "defaults" section resets all settings to the documented ones and presets new |
| ones for use by subsequent sections. All of "frontend", "backend" and "listen" |
| sections always take their initial settings from a defaults section, by default |
| the latest one that appears before the newly created section. It is possible to |
| explicitly designate a specific "defaults" section to load the initial settings |
| from by indicating its name on the section line after the optional keyword |
| "from". While "defaults" section do not impose a name, this use is encouraged |
| for better readability. It is also the only way to designate a specific section |
| to use instead of the default previous one. Since "defaults" section names are |
| optional, by default a very permissive check is applied on their name and these |
| are even permitted to overlap. However if a "defaults" section is referenced by |
| any other section, its name must comply with the syntax imposed on all proxy |
| names, and this name must be unique among the defaults sections. Please note |
| that regardless of what is currently permitted, it is recommended to avoid |
| duplicate section names in general and to respect the same syntax as for proxy |
| names. This rule might be enforced in a future version. In addition, a warning |
| is emitted if a defaults section is explicitly used by a proxy while it is also |
| implicitly used by another one because it is the last one defined. It is highly |
| encouraged to not mix both usages by always using explicit references or by |
| adding a last common defaults section reserved for all implicit uses. |
| |
| Note that it is even possible for a defaults section to take its initial |
| settings from another one, and as such, inherit settings across multiple levels |
| of defaults sections. This can be convenient to establish certain configuration |
| profiles to carry groups of default settings (e.g. TCP vs HTTP or short vs long |
| timeouts) but can quickly become confusing to follow. |
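|
| For example, such chained defaults sections could (as an illustrative sketch
| only) be organized this way:
|
|   defaults base
|       timeout connect 5s
|
|   defaults http-base from base
|       mode http
|       timeout client 30s
|       timeout server 30s
|
|   backend app from http-base
|       server s1 192.168.0.10:8080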
| |
| All proxy names must be formed from upper and lower case letters, digits, |
| '-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Proxy names are
| case-sensitive, which means that "www" and "WWW" are two different proxies.
| |
| Historically, all proxy names could overlap, it just caused troubles in the |
| logs. Since the introduction of content switching, it is mandatory that two |
| proxies with overlapping capabilities (frontend/backend) have different names. |
| However, it is still permitted that a frontend and a backend share the same |
| name, as this configuration seems to be commonly encountered. |
| |
| Right now, two major proxy modes are supported : "tcp", also known as layer 4, |
| and "http", also known as layer 7. In layer 4 mode, HAProxy simply forwards |
| bidirectional traffic between two sides. In layer 7 mode, HAProxy analyzes the |
| protocol, and can interact with it by allowing, blocking, switching, adding, |
| modifying, or removing arbitrary contents in requests or responses, based on |
| arbitrary criteria. |
| |
| In HTTP mode, the processing applied to requests and responses flowing over |
| a connection depends on the combination of the frontend's HTTP options and
| the backend's. HAProxy supports 3 connection modes : |
| |
| - KAL : keep alive ("option http-keep-alive") which is the default mode : all |
| requests and responses are processed, and connections remain open but idle |
| between responses and new requests. |
| |
| - SCL: server close ("option http-server-close") : the server-facing |
| connection is closed after the end of the response is received, but the |
| client-facing connection remains open. |
| |
| - CLO: close ("option httpclose"): the connection is closed after the end of |
| the response and "Connection: close" is appended in both directions.
| |
| The effective mode that will be applied to a connection passing through a |
| frontend and a backend can be determined by both proxy modes according to the |
| following matrix, but in short, the modes are symmetric, keep-alive is the |
| weakest option and close is the strongest. |
| |
|                            Backend mode
|
|                  | KAL | SCL | CLO
|              ----+-----+-----+----
|              KAL | KAL | SCL | CLO
|              ----+-----+-----+----
|     Frontend SCL | SCL | SCL | CLO
|       mode   ----+-----+-----+----
|              CLO | CLO | CLO | CLO
| |
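| For example (an illustrative sketch, the names are arbitrary), a keep-alive |
| frontend pointing to a backend that sets "option httpclose" results in the |
| "CLO" mode being applied to the whole connection : |
| |
|     frontend fe_www |
|         mode http |
|         option http-keep-alive |
|         default_backend be_legacy |
| |
|     backend be_legacy |
|         mode http |
|         option httpclose |
|         server app1 192.168.0.10:80 |
| |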
| It is possible to chain a TCP frontend to an HTTP backend. This is pointless |
| if only HTTP traffic is handled, but it may be used to handle several |
| protocols within the same frontend. In this case, the client's connection is |
| first handled as a raw TCP connection before being upgraded to HTTP. Before |
| the upgrade, content processing is performed on the raw data. Once upgraded, |
| data is parsed and stored using an internal representation called HTX, and it |
| is no longer possible to rely on the raw representation. There is no way to |
| go back. |
| |
| There are two kinds of upgrades, in-place upgrades and destructive upgrades. |
| The first one involves a TCP to HTTP/1 upgrade. In HTTP/1, request processing |
| is serialized, thus the applicative stream can be preserved. The second one |
| involves a TCP to HTTP/2 upgrade. Because HTTP/2 is a multiplexed protocol, |
| the applicative stream cannot be associated with any HTTP/2 stream and is |
| destroyed. New applicative streams are then created when HAProxy receives new |
| HTTP/2 streams at the lower level, in the H2 multiplexer. It is important to |
| understand this difference because it drastically changes the way data is |
| processed. When an HTTP/1 upgrade is performed, the content processing |
| already performed on raw data is neither lost nor re-executed, while for an |
| HTTP/2 upgrade, applicative streams are distinct and all frontend rules are |
| systematically evaluated on each one. And as said above, the first stream, |
| the TCP one, is destroyed, but only after the frontend rules were evaluated. |
| |
| There is another important point to understand when HTTP processing is |
| performed from a TCP proxy. While HAProxy is able to parse HTTP/1 on the fly |
| from tcp-request content rules, this is not possible for HTTP/2: only the |
| HTTP/2 preface can be parsed. This is a major limitation of HTTP content |
| analysis in TCP mode. Concretely, it is only possible to know whether the |
| received data is HTTP. For instance, it is not possible to choose a backend |
| based on the Host header value, while this is trivial in HTTP/1. Fortunately, |
| there is a solution to mitigate this drawback. |
| |
| There are two ways to perform an HTTP upgrade. The first one, the historical |
| method, is to select an HTTP backend. The upgrade happens when the backend is |
| set. Thus, for in-place upgrades, only the backend configuration is considered |
| in the HTTP data processing. For destructive upgrades, the applicative stream |
| is destroyed, thus its processing is stopped. With this method, the ability |
| to choose a backend on an HTTP/2 connection is really limited, as mentioned |
| above, and of little use anyway because the stream is destroyed. The second |
| method is to upgrade during the tcp-request content rules evaluation, thanks |
| to the "switch-mode http" action. In this case, the upgrade is performed in |
| the frontend context and it is possible to define HTTP directives in this |
| frontend. For in-place upgrades, it offers all the power of HTTP analysis as |
| soon as possible. It is not that far from an HTTP frontend. For destructive |
| upgrades, it does not change anything, except that choosing a backend on such |
| limited information remains pointless. It is of course the recommended |
| method. Thus, testing the request protocol from the tcp-request content rules |
| to perform an HTTP upgrade is enough, as shown in the example below. All the |
| remaining HTTP manipulation may be moved to the frontend http-request |
| ruleset. But keep in mind that tcp-request content rules remain evaluated on |
| each stream; this cannot be changed. |
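| |
| A minimal sketch of this recommended method (the section names below are |
| arbitrary and purely illustrative) : |
| |
|     frontend fe_multi_proto |
|         mode tcp |
|         bind :8000 |
|         tcp-request inspect-delay 5s |
|         # upgrade to HTTP as soon as the buffered content looks like HTTP |
|         tcp-request content switch-mode http if HTTP |
|         tcp-request content accept |
|         # HTTP rules below only apply to upgraded (HTTP) streams |
|         http-request set-header X-Upgraded-From tcp |
|         default_backend be_app |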
| |
| 4.1. Proxy keywords matrix |
| -------------------------- |
| |
| The following list of keywords is supported. Most of them may only be used in a |
| limited set of section types. Some of them are marked as "deprecated" because |
| they are inherited from an old syntax which may be confusing or functionally |
| limited, and there are new recommended keywords to replace them. Keywords |
| marked with "(*)" can be optionally inverted using the "no" prefix, e.g. "no |
| option contstats". This makes sense when the option has been enabled by default |
| and must be disabled for a specific instance. Such options may also be prefixed |
| with "default" in order to restore default settings regardless of what has been |
| specified in a previous "defaults" section. Keywords supported in defaults |
| sections marked with "(!)" are only supported in named defaults sections, not |
| anonymous ones. |
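| |
| For example (an illustrative sketch), an option enabled in a "defaults" |
| section may be turned off again in one specific proxy using the "no" prefix : |
| |
|     defaults |
|         option contstats |
| |
|     frontend fe_no_contstats |
|         bind :8080 |
|         no option contstats |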
| |
| |
| keyword defaults frontend listen backend |
| ------------------------------------+----------+----------+---------+--------- |
| acl X (!) X X X |
| backlog X X X - |
| balance X - X X |
| bind - X X - |
| capture cookie - X X - |
| capture request header - X X - |
| capture response header - X X - |
| clitcpka-cnt X X X - |
| clitcpka-idle X X X - |
| clitcpka-intvl X X X - |
| compression X X X X |
| cookie X - X X |
| declare capture - X X - |
| default-server X - X X |
| default_backend X X X - |
| description - X X X |
| disabled X X X X |
| dispatch - - X X |
| email-alert from X X X X |
| email-alert level X X X X |
| email-alert mailers X X X X |
| email-alert myhostname X X X X |
| email-alert to X X X X |
| enabled X X X X |
| errorfile X X X X |
| errorfiles X X X X |
| errorloc X X X X |
| errorloc302 X X X X |
| -- keyword -------------------------- defaults - frontend - listen -- backend - |
| errorloc303 X X X X |
| error-log-format X X X - |
| force-persist - - X X |
| filter - X X X |
| fullconn X - X X |
| hash-type X - X X |
| http-after-response X (!) X X X |
| http-check comment X - X X |
| http-check connect X - X X |
| http-check disable-on-404 X - X X |
| http-check expect X - X X |
| http-check send X - X X |
| http-check send-state X - X X |
| http-check set-var X - X X |
| http-check unset-var X - X X |
| http-error X X X X |
| http-request X (!) X X X |
| http-response X (!) X X X |
| http-reuse X - X X |
| http-send-name-header X - X X |
| id - X X X |
| ignore-persist - - X X |
| load-server-state-from-file X - X X |
| log (*) X X X X |
| log-format X X X - |
| log-format-sd X X X - |
| log-tag X X X X |
| max-keep-alive-queue X - X X |
| max-session-srv-conns X X X - |
| maxconn X X X - |
| mode X X X X |
| monitor fail - X X - |
| monitor-uri X X X - |
| option abortonclose (*) X - X X |
| option accept-invalid-http-request (*) X X X - |
| option accept-invalid-http-response (*) X - X X |
| option allbackups (*) X - X X |
| option checkcache (*) X - X X |
| option clitcpka (*) X X X - |
| option contstats (*) X X X - |
| option disable-h2-upgrade (*) X X X - |
| option dontlog-normal (*) X X X - |
| option dontlognull (*) X X X - |
| -- keyword -------------------------- defaults - frontend - listen -- backend - |
| option forwardfor X X X X |
| option forwarded (*) X - X X |
| option h1-case-adjust-bogus-client (*) X X X - |
| option h1-case-adjust-bogus-server (*) X - X X |
| option http-buffer-request (*) X X X X |
| option http-ignore-probes (*) X X X - |
| option http-keep-alive (*) X X X X |
| option http-no-delay (*) X X X X |
| option http-pretend-keepalive (*) X - X X |
| option http-restrict-req-hdr-names X X X X |
| option http-server-close (*) X X X X |
| option http-use-proxy-header (*) X X X - |
| option httpchk X - X X |
| option httpclose (*) X X X X |
| option httplog X X X - |
| option httpslog X X X - |
| option independent-streams (*) X X X X |
| option ldap-check X - X X |
| option external-check X - X X |
| option log-health-checks (*) X - X X |
| option log-separate-errors (*) X X X - |
| option logasap (*) X X X - |
| option mysql-check X - X X |
| option nolinger (*) X X X X |
| option originalto X X X X |
| option persist (*) X - X X |
| option pgsql-check X - X X |
| option prefer-last-server (*) X - X X |
| option redispatch (*) X - X X |
| option redis-check X - X X |
| option smtpchk X - X X |
| option socket-stats (*) X X X - |
| option splice-auto (*) X X X X |
| option splice-request (*) X X X X |
| option splice-response (*) X X X X |
| option spop-check X - X X |
| option srvtcpka (*) X - X X |
| option ssl-hello-chk X - X X |
| -- keyword -------------------------- defaults - frontend - listen -- backend - |
| option tcp-check X - X X |
| option tcp-smart-accept (*) X X X - |
| option tcp-smart-connect (*) X - X X |
| option tcpka X X X X |
| option tcplog X X X X |
| option transparent (*) X - X X |
| option idle-close-on-response (*) X X X - |
| external-check command X - X X |
| external-check path X - X X |
| persist rdp-cookie X - X X |
| rate-limit sessions X X X - |
| redirect - X X X |
| -- keyword -------------------------- defaults - frontend - listen -- backend - |
| retries X - X X |
| retry-on X - X X |
| server - - X X |
| server-state-file-name X - X X |
| server-template - - X X |
| source X - X X |
| srvtcpka-cnt X - X X |
| srvtcpka-idle X - X X |
| srvtcpka-intvl X - X X |
| stats admin - X X X |
| stats auth X X X X |
| stats enable X X X X |
| stats hide-version X X X X |
| stats http-request - X X X |
| stats realm X X X X |
| stats refresh X X X X |
| stats scope X X X X |
| stats show-desc X X X X |
| stats show-legends X X X X |
| stats show-node X X X X |
| stats uri X X X X |
| -- keyword -------------------------- defaults - frontend - listen -- backend - |
| stick match - - X X |
| stick on - - X X |
| stick store-request - - X X |
| stick store-response - - X X |
| stick-table - X X X |
| tcp-check comment X - X X |
| tcp-check connect X - X X |
| tcp-check expect X - X X |
| tcp-check send X - X X |
| tcp-check send-lf X - X X |
| tcp-check send-binary X - X X |
| tcp-check send-binary-lf X - X X |
| tcp-check set-var X - X X |
| tcp-check unset-var X - X X |
| tcp-request connection X (!) X X - |
| tcp-request content X (!) X X X |
| tcp-request inspect-delay X (!) X X X |
| tcp-request session X (!) X X - |
| tcp-response content X (!) - X X |
| tcp-response inspect-delay X (!) - X X |
| timeout check X - X X |
| timeout client X X X - |
| timeout client-fin X X X - |
| timeout connect X - X X |
| timeout http-keep-alive X X X X |
| timeout http-request X X X X |
| timeout queue X - X X |
| timeout server X - X X |
| timeout server-fin X - X X |
| timeout tarpit X X X X |
| timeout tunnel X - X X |
| transparent (deprecated) X - X X |
| unique-id-format X X X - |
| unique-id-header X X X - |
| use_backend - X X - |
| use-fcgi-app - - X X |
| use-server - - X X |
| ------------------------------------+----------+----------+---------+--------- |
| keyword defaults frontend listen backend |
| |
| |
| 4.2. Alphabetically sorted keywords reference |
| --------------------------------------------- |
| |
| This section provides a description of each keyword and its usage. |
| |
| |
| acl <aclname> <criterion> [flags] [operator] <value> ... |
| Declare or complete an access list. |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | yes | yes | yes |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. ACLs defined in a defaults section are not visible from other sections |
| using it. |
| |
| Example: |
| acl invalid_src src 0.0.0.0/7 224.0.0.0/3 |
| acl invalid_src src_port 0:1023 |
| acl local_dst hdr(host) -i localhost |
| |
| See section 7 about ACL usage. |
| |
| |
| backlog <conns> |
| Give hints to the system about the desired approximate listen backlog size |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <conns> is the number of pending connections. Depending on the operating |
| system, it may represent the number of already acknowledged |
| connections, of non-acknowledged ones, or both. |
| |
| In order to protect against SYN flood attacks, one solution is to increase |
| the system's SYN backlog size. Depending on the system, sometimes it is just |
| tunable via a system parameter, sometimes it is not adjustable at all, and |
| sometimes the system relies on hints given by the application at the time of |
| the listen() syscall. By default, HAProxy passes the frontend's maxconn value |
| to the listen() syscall. On systems which can make use of this value, it can |
| sometimes be useful to be able to specify a different value, hence this |
| backlog parameter. |
| |
| On Linux 2.4, the parameter is ignored by the system. On Linux 2.6, it is |
| used as a hint and the system accepts up to the smallest greater power of |
| two, and never more than some limits (usually 32768). |
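| |
| Example (an illustrative sketch, values are arbitrary) : |
|     frontend fe_main |
|         bind :80 |
|         maxconn 20000 |
|         backlog 30000 |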
| |
| See also : "maxconn" and the target operating system's tuning guide. |
| |
| |
| balance <algorithm> [ <arguments> ] |
| balance url_param <param> [check_post] |
| Define the load balancing algorithm to be used in a backend. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <algorithm> is the algorithm used to select a server when doing load |
| balancing. This only applies when no persistence information |
| is available, or when a connection is redispatched to another |
| server. <algorithm> may be one of the following : |
| |
| roundrobin Each server is used in turns, according to their weights. |
| This is the smoothest and fairest algorithm when the server's |
| processing time remains equally distributed. This algorithm |
| is dynamic, which means that server weights may be adjusted |
| on the fly for slow starts for instance. It is limited by |
| design to 4095 active servers per backend. Note that in some |
| large farms, when a server becomes up after having been down |
| for a very short time, it may sometimes take a few hundred |
| requests for it to be re-integrated into the farm and start |
| receiving traffic. This is normal, though very rare. It is |
| indicated here in case you would have the chance to observe |
| it, so that you don't worry. |
| |
| static-rr Each server is used in turns, according to their weights. |
| This algorithm is similar to roundrobin, except that it is |
| static, which means that changing a server's weight on the |
| fly will have no effect. On the other hand, it has no design |
| limitation on the number of servers, and when a server goes |
| up, it is always immediately reintroduced into the farm, once |
| the full map is recomputed. It also uses slightly less CPU to |
| run (around -1%). |
| |
| leastconn The server with the lowest number of connections receives the |
| connection. Round-robin is performed within groups of servers |
| of the same load to ensure that all servers will be used. Use |
| of this algorithm is recommended where very long sessions are |
| expected, such as LDAP, SQL, TSE, etc... but is not very well |
| suited for protocols using short sessions such as HTTP. This |
| algorithm is dynamic, which means that server weights may be |
| adjusted on the fly for slow starts for instance. It will |
| also consider the number of queued connections in addition to |
| the established ones in order to minimize queuing. |
| |
| first The first server with available connection slots receives the |
| connection. The servers are chosen from the lowest numeric |
| identifier to the highest (see server parameter "id"), which |
| defaults to the server's position in the farm. Once a server |
| reaches its maxconn value, the next server is used. It does |
| not make sense to use this algorithm without setting maxconn. |
| The purpose of this algorithm is to always use the smallest |
| number of servers so that extra servers can be powered off |
| during non-intensive hours. This algorithm ignores the server |
| weight, and brings more benefit to long sessions such as RDP |
| or IMAP than HTTP, though it can be useful there too. In |
| order to use this algorithm efficiently, it is recommended |
| that a cloud controller regularly checks server usage to turn |
| them off when unused, and regularly checks backend queue to |
| turn new servers on when the queue inflates. Alternatively, |
| using "http-check send-state" may inform servers on the load. |
| |
| hash Takes a regular sample expression in argument. The expression |
| is evaluated for each request and hashed according to the |
| configured hash-type. The result of the hash is divided by |
| the total weight of the running servers to designate which |
| server will receive the request. This can be used in place of |
| "source", "uri", "hdr()", "url_param()", "rdp-cookie" to make |
| use of a converter, refine the evaluation, or be used to |
| extract data from local variables for example. When the data |
| is not available, round robin will apply. This algorithm is |
| static by default, which means that changing a server's |
| weight on the fly will have no effect, but this can be |
| changed using "hash-type". |
| |
| source The source IP address is hashed and divided by the total |
| weight of the running servers to designate which server will |
| receive the request. This ensures that the same client IP |
| address will always reach the same server as long as no |
| server goes down or up. If the hash result changes due to the |
| number of running servers changing, many clients will be |
| directed to a different server. This algorithm is generally |
| used in TCP mode where no cookie may be inserted. It may also |
| be used on the Internet to provide a best-effort stickiness |
| to clients which refuse session cookies. This algorithm is |
| static by default, which means that changing a server's |
| weight on the fly will have no effect, but this can be |
| changed using "hash-type". See also the "hash" option above. |
| |
| uri This algorithm hashes either the left part of the URI (before |
| the question mark) or the whole URI (if the "whole" parameter |
| is present) and divides the hash value by the total weight of |
| the running servers. The result designates which server will |
| receive the request. This ensures that the same URI will |
| always be directed to the same server as long as no server |
| goes up or down. This is used with proxy caches and |
| anti-virus proxies in order to maximize the cache hit rate. |
| Note that this algorithm may only be used in an HTTP backend. |
| This algorithm is static by default, which means that |
| changing a server's weight on the fly will have no effect, |
| but this can be changed using "hash-type". |
| |
| This algorithm supports two optional parameters "len" and |
| "depth", both followed by a positive integer number. These |
| options may be helpful when it is needed to balance servers |
| based on the beginning of the URI only. The "len" parameter |
| indicates that the algorithm should only consider that many |
| characters at the beginning of the URI to compute the hash. |
| Note that having "len" set to 1 rarely makes sense since most |
| URIs start with a leading "/". |
| |
| The "depth" parameter indicates the maximum directory depth |
| to be used to compute the hash. One level is counted for each |
| slash in the request. If both parameters are specified, the |
| evaluation stops when either is reached. |
| |
| A "path-only" parameter indicates that the hashing key starts |
| at the first '/' of the path. This can be used to ignore the |
| authority part of absolute URIs, and to make sure that HTTP/1 |
| and HTTP/2 URIs will provide the same hash. See also the |
| "hash" option above. |
| |
| url_param The URL parameter specified in argument will be looked up in |
| the query string of each HTTP GET request. |
| |
| If the modifier "check_post" is used, then an HTTP POST |
| request entity will be searched for the parameter argument, |
| when it is not found in a query string after a question mark |
| ('?') in the URL. The message body will only start to be |
| analyzed once either the advertised amount of data has been |
| received or the request buffer is full. In the unlikely event |
| that chunked encoding is used, only the first chunk is |
| scanned. Parameter values separated by a chunk boundary may |
| be randomly balanced, if at all. This keyword used to support |
| an optional <max_wait> parameter which is now ignored. |
| |
| If the parameter is found followed by an equal sign ('=') and |
| a value, then the value is hashed and divided by the total |
| weight of the running servers. The result designates which |
| server will receive the request. |
| |
| This is used to track user identifiers in requests and ensure |
| that a same user ID will always be sent to the same server as |
| long as no server goes up or down. If no value is found or if |
| the parameter is not found, then a round robin algorithm is |
| applied. Note that this algorithm may only be used in an HTTP |
| backend. This algorithm is static by default, which means |
| that changing a server's weight on the fly will have no |
| effect, but this can be changed using "hash-type". See also |
| the "hash" option above. |
| |
| hdr(<name>) The HTTP header <name> will be looked up in each HTTP |
| request. Just as with the equivalent ACL 'hdr()' function, |
| the header name in parenthesis is not case sensitive. If the |
| header is absent or if it does not contain any value, the |
| roundrobin algorithm is applied instead. |
| |
| An optional 'use_domain_only' parameter is available, for |
| reducing the hash algorithm to the main domain part with some |
| specific headers such as 'Host'. For instance, in the Host |
| value "haproxy.1wt.eu", only "1wt" will be considered. |
| |
| This algorithm is static by default, which means that |
| changing a server's weight on the fly will have no effect, |
| but this can be changed using "hash-type". See also the |
| "hash" option above. |
| |
| random |
| random(<draws>) |
| A random number will be used as the key for the consistent |
| hashing function. This means that the servers' weights are |
| respected, dynamic weight changes immediately take effect, as |
| well as new server additions. Random load balancing can be |
| useful with large farms or when servers are frequently added |
| or removed as it may avoid the hammering effect that could |
| result from roundrobin or leastconn in this situation. The |
| hash-balance-factor directive can be used to further improve |
| fairness of the load balancing, especially in situations |
| where servers show highly variable response times. When an |
| argument <draws> is present, it must be an integer value one |
| or greater, indicating the number of draws before selecting |
| the least loaded of these servers. It was indeed demonstrated |
| that picking the least loaded of two servers is enough to |
| significantly improve the fairness of the algorithm, by |
| always avoiding picking the most loaded server within a farm |
| and getting rid of any bias that could be induced by the |
| unfair distribution of the consistent list. Higher values N |
| will take away N-1 of the highest loaded servers at the |
| expense of performance. With very high values, the algorithm |
| will converge towards the leastconn's result but much slower. |
| The default value is 2, which generally shows very good |
| distribution and performance. This algorithm is also known as |
| the Power of Two Random Choices and is described here : |
| http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf |
| |
| rdp-cookie |
| rdp-cookie(<name>) |
| The RDP cookie <name> (or "mstshash" if omitted) will be |
| looked up and hashed for each incoming TCP request. Just as |
| with the equivalent ACL 'req.rdp_cookie()' function, the name |
| is not case-sensitive. This mechanism is useful as a degraded |
| persistence mode, as it makes it possible to always send the |
| same user (or the same session ID) to the same server. If the |
| cookie is not found, the normal roundrobin algorithm is |
| used instead. |
| |
| Note that for this to work, the frontend must ensure that an |
| RDP cookie is already present in the request buffer. For this |
| you must use a 'tcp-request content accept' rule combined with |
| a 'req.rdp_cookie_cnt' ACL. |
| |
| This algorithm is static by default, which means that |
| changing a server's weight on the fly will have no effect, |
| but this can be changed using "hash-type". See also the |
| "hash" option above. |
| |
| <arguments> is an optional list of arguments which may be needed by some |
| algorithms. Right now, only "url_param" and "uri" support an |
| optional argument. |
| |
| The load balancing algorithm of a backend is set to roundrobin when no other |
| algorithm, mode nor option have been set. The algorithm may only be set once |
| for each backend. |
| |
| With authentication schemes that require the same connection like NTLM, URI |
| based algorithms must not be used, as they would cause subsequent requests |
| to be routed to different backend servers, breaking the invalid assumptions |
| NTLM relies on. |
| |
| Examples : |
| balance roundrobin |
| balance url_param userid |
| balance url_param session_id check_post 64 |
| balance hdr(User-Agent) |
| balance hdr(host) |
| balance hdr(Host) use_domain_only |
| balance hash req.cookie(clientid) |
| balance hash var(req.client_id) |
| balance hash req.hdr_ip(x-forwarded-for,-1),ipmask(24) |
| |
| Note: the following caveats and limitations on using the "check_post" |
| extension with "url_param" must be considered : |
| |
| - all POST requests are eligible for consideration, because there is no way |
| to determine if the parameters will be found in the body or entity which |
| may contain binary data. Therefore another method may be required to |
| restrict consideration of POST requests that have no URL parameters in |
| the body. (see acl http_end) |
| |
| - using a <max_wait> value larger than the request buffer size does not |
| make sense and is useless. The buffer size is set at build time, and |
| defaults to 16 kB. |
| |
| - Content-Encoding is not supported; the parameter search will probably |
| fail, and load balancing will fall back to Round Robin. |
| |
| - Expect: 100-continue is not supported, load balancing will fall back to |
| Round Robin. |
| |
| - Transfer-Encoding (RFC7230 3.3.1) is only supported in the first chunk. |
| If the entire parameter value is not present in the first chunk, the |
| selection of server is undefined (actually, defined by how little |
| actually appeared in the first chunk). |
| |
| - This feature does not support generation of a 100, 411 or 501 response. |
| |
| - In some cases, requesting "check_post" MAY attempt to scan the entire |
| contents of a message body. Scanning normally terminates when linear |
| white space or control characters are found, indicating the end of what |
| might be a URL parameter list. This is probably not a concern with SGML |
| type message bodies. |
| |
| See also : "dispatch", "cookie", "transparent", "hash-type". |
| |
| |
| bind [<address>]:<port_range> [, ...] [param*] |
| bind /<path> [, ...] [param*] |
| Define one or several listening addresses and/or ports in a frontend. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments : |
| <address> is optional and can be a host name, an IPv4 address, an IPv6 |
| address, or '*'. It designates the address the frontend will |
| listen on. If unset, all IPv4 addresses of the system will be |
| listened on. The same will apply for '*' or the system's |
| special address "0.0.0.0". The IPv6 equivalent is '::'. Note |
| that for UDP, specific OS features are required when binding |
| on multiple addresses to ensure the correct network interface |
| and source address will be used on responses. In other words, |
| for QUIC listeners, only bind on multiple addresses if |
| running on a modern enough system. |
| |
| Optionally, an address family prefix may be used before the |
| address to force the family regardless of the address format, |
| which can be useful to specify a path to a unix socket with |
| no slash ('/'). Currently supported prefixes are : |
| - 'ipv4@' -> address is always IPv4 |
| - 'ipv6@' -> address is always IPv6 |
| - 'udp@' -> address is resolved as IPv4 or IPv6 and |
| protocol UDP is used. Currently those listeners are |
| supported only in log-forward sections. |
| - 'udp4@' -> address is always IPv4 and protocol UDP |
| is used. Currently those listeners are supported |
| only in log-forward sections. |
| - 'udp6@' -> address is always IPv6 and protocol UDP |
| is used. Currently those listeners are supported |
| only in log-forward sections. |
| - 'unix@' -> address is a path to a local unix socket |
| - 'abns@' -> address is in abstract namespace (Linux only). |
| - 'fd@<n>' -> use file descriptor <n> inherited from the |
| parent. The fd must be bound and may or may not already |
| be listening. |
| - 'sockpair@<n>'-> like fd@ but you must use the fd of a |
| connected unix socket or of a socketpair. The bind waits |
| to receive a FD over the unix socket and uses it as if it |
| was the FD of an accept(). Should be used carefully. |
| - 'quic4@' -> address is resolved as IPv4 and protocol UDP |
| is used. Note that to achieve the best performance under |
| heavy traffic you should keep "tune.quic.socket-owner" on |
| connection. Otherwise QUIC connections will be multiplexed |
| over the listener socket. Another alternative would be to |
| duplicate QUIC listener instances over several threads, |
| for example using "shards" keyword to at least reduce |
| thread contention. |
| - 'quic6@' -> address is resolved as IPv6 and protocol UDP |
| is used. The performance note for QUIC over IPv4 applies |
| as well. |
| |
| You may want to reference some environment variables in the |
| address parameter, see section 2.3 about environment |
| variables. |
| |
| <port_range> is either a unique TCP port, or a port range for which the |
| proxy will accept connections for the IP address specified |
| above. The port is mandatory for TCP listeners. Note that in |
| the case of an IPv6 address, the port is always the number |
| after the last colon (':'). A range can either be : |
| - a numerical port (ex: '80') |
| - a dash-delimited ports range explicitly stating the lower |
| and upper bounds (ex: '2000-2100') which are included in |
| the range. |
| |
| Particular care must be taken with port ranges, because |
| every <address:port> couple consumes one socket (= a file |
| descriptor), so it's easy to consume lots of descriptors |
| with a simple range, and to run out of sockets. Also, each |
| <address:port> couple must be used only once among all |
| instances running on a same system. Please note that binding |
| to ports lower than 1024 generally requires particular |
| privileges to start the program, which are independent of |
| the 'uid' parameter. |
| |
| <path> is a UNIX socket path beginning with a slash ('/'). This is an |
| alternative to the TCP listening port. HAProxy will then |
| receive UNIX connections on the socket located at this place. |
| The path must begin with a slash and by default is absolute. |
| It can be relative to the prefix defined by "unix-bind" in |
| the global section. Note that the total length of the prefix |
| followed by the socket path cannot exceed some system limits |
| for UNIX sockets, which commonly are set to 107 characters. |
| |
| <param*> is a list of parameters common to all sockets declared on the |
| same line. These numerous parameters depend on OS and build |
| options and have a complete section dedicated to them. Please |
| refer to section 5 for more details. |
| |
| It is possible to specify a list of address:port combinations delimited by |
| commas. The frontend will then listen on all of these addresses. There is no |
| fixed limit to the number of addresses and ports which can be listened on in |
| a frontend, nor is there a limit to the number of "bind" statements in a |
| frontend. |
| |
| Example : |
| listen http_proxy |
| bind :80,:443 |
| bind 10.0.0.1:10080,10.0.0.1:10443 |
| bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy |
| |
| listen http_https_proxy |
| bind :80 |
| bind :443 ssl crt /etc/haproxy/site.pem |
| |
| listen http_https_proxy_explicit |
| bind ipv6@:80 |
| bind ipv4@public_ssl:443 ssl crt /etc/haproxy/site.pem |
| bind unix@ssl-frontend.sock user root mode 600 accept-proxy |
| |
| listen external_bind_app1 |
| bind "fd@${FD_APP1}" |
| |
| listen h3_quic_proxy |
| bind quic4@10.0.0.1:8888 ssl crt /etc/mycrt |
| |
| Note: regarding Linux's abstract namespace sockets, HAProxy uses the whole |
| sun_path length for the address length. Some other programs |
| such as socat use the string length only by default. Pass the option |
| ",unix-tightsocklen=0" to any abstract socket definition in socat to |
| make it compatible with HAProxy's. |
| |
| See also : "source", "option forwardfor", "unix-bind" and the PROXY protocol |
| documentation, and section 5 about bind options. |
| |
| |
| capture cookie <name> len <length> |
| Capture and log a cookie in the request and in the response. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments : |
| <name> is the beginning of the name of the cookie to capture. In order |
| to match the exact name, simply suffix the name with an equal |
| sign ('='). The full name will appear in the logs, which is |
| useful with application servers which adjust both the cookie name |
| and value (e.g. ASPSESSIONXXX). |
| |
| <length> is the maximum number of characters to report in the logs, which |
| include the cookie name, the equal sign and the value, all in the |
| standard "name=value" form. The string will be truncated on the |
| right if it exceeds <length>. |
| |
| Only the first cookie is captured. Both the "cookie" request headers and the |
| "set-cookie" response headers are monitored. This is particularly useful to |
| check for application bugs causing session crossing or stealing between |
| users, because generally the user's cookies can only change on a login page. |
| |
| When the cookie was not presented by the client, the associated log column |
| will report "-". When a request does not cause a cookie to be assigned by the |
| server, a "-" is reported in the response column. |
| |
| The capture is performed in the frontend only because it is necessary that |
| the log format does not change for a given frontend depending on the |
| backends. This may change in the future. Note that there can be only one |
| "capture cookie" statement in a frontend. The maximum capture length is set |
| by the global "tune.http.cookielen" setting and defaults to 63 characters. It |
| is not possible to specify a capture in a "defaults" section. |
| |
| Example: |
| capture cookie ASPSESSION len 32 |
| |
| See also : "capture request header", "capture response header" as well as |
| section 8 about logging. |
| |
| |
| capture request header <name> len <length> |
| Capture and log the last occurrence of the specified request header. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments : |
| <name> is the name of the header to capture. The header names are not |
| case-sensitive, but it is a common practice to write them as they |
| appear in the requests, with the first letter of each word in |
| upper case. The header name will not appear in the logs, only the |
| value is reported, but the position in the logs is respected. |
| |
| <length> is the maximum number of characters to extract from the value and |
| report in the logs. The string will be truncated on the right if |
| it exceeds <length>. |
| |
| The complete value of the last occurrence of the header is captured. The |
| value will be added to the logs between braces ('{}'). If multiple headers |
| are captured, they will be delimited by a vertical bar ('|') and will appear |
| in the same order they were declared in the configuration. Non-existent |
| headers will be logged just as an empty string. Common uses for request |
| header captures include the "Host" field in virtual hosting environments, the |
| "Content-length" when uploads are supported, "User-agent" to quickly |
| differentiate between real users and robots, and "X-Forwarded-For" in proxied |
| environments to find where the request came from. |
| |
| Note that when capturing headers such as "User-agent", some spaces may be |
| logged, making the log analysis more difficult. Thus be careful about what |
| you log if you know your log parser is not smart enough to rely on the |
| braces. |
| |
| There is no limit to the number of captured request headers nor to their |
| length, though it is wise to keep them low to limit memory usage per session. |
| In order to keep log format consistent for a same frontend, header captures |
| can only be declared in a frontend. It is not possible to specify a capture |
| in a "defaults" section. |
| |
| Example: |
| capture request header Host len 15 |
| capture request header X-Forwarded-For len 15 |
| capture request header Referer len 15 |
| |
| See also : "capture cookie", "capture response header" as well as section 8 |
| about logging. |
| |
| |
| capture response header <name> len <length> |
| Capture and log the last occurrence of the specified response header. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments : |
| <name> is the name of the header to capture. The header names are not |
| case-sensitive, but it is a common practice to write them as they |
| appear in the response, with the first letter of each word in |
| upper case. The header name will not appear in the logs, only the |
| value is reported, but the position in the logs is respected. |
| |
| <length> is the maximum number of characters to extract from the value and |
| report in the logs. The string will be truncated on the right if |
| it exceeds <length>. |
| |
| The complete value of the last occurrence of the header is captured. The |
| result will be added to the logs between braces ('{}') after the captured |
| request headers. If multiple headers are captured, they will be delimited by |
| a vertical bar ('|') and will appear in the same order they were declared in |
| the configuration. Non-existent headers will be logged just as an empty |
| string. Common uses for response header captures include the "Content-length" |
| header which indicates how many bytes are expected to be returned, and the |
| "Location" header to track redirections. |
| |
| There is no limit to the number of captured response headers nor to their |
| length, though it is wise to keep them low to limit memory usage per session. |
| In order to keep log format consistent for a same frontend, header captures |
| can only be declared in a frontend. It is not possible to specify a capture |
| in a "defaults" section. |
| |
| Example: |
| capture response header Content-length len 9 |
| capture response header Location len 15 |
| |
| See also : "capture cookie", "capture request header" as well as section 8 |
| about logging. |
| |
| |
| clitcpka-cnt <count> |
| Sets the maximum number of keepalive probes TCP should send before dropping |
| the connection on the client side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <count> is the maximum number of keepalive probes. |
| |
| This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword |
| is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used. |
| The availability of this setting depends on the operating system. It is |
| known to work on Linux. |
| |
| See also : "option clitcpka", "clitcpka-idle", "clitcpka-intvl". |
| |
| |
| clitcpka-idle <timeout> |
| Sets the time the connection needs to remain idle before TCP starts sending |
| keepalive probes, if TCP keepalives are enabled on the client side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <timeout> is the time the connection needs to remain idle before TCP starts |
| sending keepalive probes. It is specified in seconds by default, |
| but can be in any other unit if the number is suffixed by the |
| unit, as explained at the top of this document. |
| |
| This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword |
| is not specified, system-wide TCP parameter (tcp_keepalive_time) is used. |
| The availability of this setting depends on the operating system. It is |
| known to work on Linux. |
| |
| See also : "option clitcpka", "clitcpka-cnt", "clitcpka-intvl". |
| |
| |
| clitcpka-intvl <timeout> |
| Sets the time between individual keepalive probes on the client side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <timeout> is the time between individual keepalive probes. It is specified |
| in seconds by default, but can be in any other unit if the number |
| is suffixed by the unit, as explained at the top of this |
| document. |
| |
| This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword |
| is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used. |
| The availability of this setting depends on the operating system. It is |
| known to work on Linux. |
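| |
| Example (an illustrative sketch, values are arbitrary) combining the three |
| client-side keepalive tunables with "option clitcpka" : |
|     frontend fe_main |
|         bind :80 |
|         option clitcpka |
|         clitcpka-idle 60s |
|         clitcpka-intvl 10s |
|         clitcpka-cnt 3 |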
| |
| See also : "option clitcpka", "clitcpka-cnt", "clitcpka-idle". |
| |
| |
| compression algo <algorithm> ... |
| compression algo-req <algorithm> |
| compression algo-res <algorithm> |
| compression type <mime type> ... |
| Enable HTTP compression. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| algo is followed by the list of supported compression algorithms for |
| responses (legacy keyword) |
| algo-req is followed by the compression algorithm for requests (only one |
| may be provided). |
| algo-res is followed by the list of supported compression algorithms for |
| responses. |
| type is followed by the list of MIME types that will be compressed for |
| responses (legacy keyword). |
| type-req is followed by the list of MIME types that will be compressed for |
| requests. |
| type-res is followed by the list of MIME types that will be compressed for |
| responses. |
| |
| The currently supported algorithms are : |
| identity this is mostly for debugging, and it was useful for developing |
| the compression feature. Identity does not apply any change on |
| data. |
| |
| gzip applies gzip compression. This setting is only available when |
| support for zlib or libslz was built in. |
| |
| deflate same as "gzip", but with deflate algorithm and zlib format. |
| Note that this algorithm has ambiguous support on many |
| browsers and no support at all from recent ones. It is |
| strongly recommended not to use it for anything else than |
| experimentation. This setting is only available when support |
| for zlib or libslz was built in. |
| |
| raw-deflate same as "deflate" without the zlib wrapper, and used as an |
| alternative when the browser wants "deflate". All major |
| browsers understand it and despite violating the standards, |
| it is known to work better than "deflate", at least on MSIE |
| and some versions of Safari. Do not use it in conjunction |
| with "deflate", use either one or the other since both react |
| to the same Accept-Encoding token. This setting is only |
| available when support for zlib or libslz was built in. |
| |
| Compression will be activated depending on the Accept-Encoding request |
| header. With identity, it does not take care of that header. |
| If backend servers support HTTP compression, these directives |
| will have no effect: HAProxy will see the compressed response and will not |
| compress again. If backend servers do not support HTTP compression and |
| there is an Accept-Encoding header in the request, HAProxy will compress |
| the matching response. |
| |
| Compression is disabled when: |
| * the request does not advertise a supported compression algorithm in the |
| "Accept-Encoding" header |
| * the response message is not HTTP/1.1 or above |
| * HTTP status code is not one of 200, 201, 202, or 203 |
| * the response contains neither a "Content-Length" header nor a |
| "Transfer-Encoding" whose last value is "chunked" |
| * response contains a "Content-Type" header whose first value starts with |
| "multipart" |
| * the response contains the "no-transform" value in the "Cache-control" |
| header |
| * User-Agent matches "Mozilla/4" unless it is MSIE 6 with XP SP2, or MSIE 7 |
| and later |
| * The response contains a "Content-Encoding" header, indicating that the |
| response is already compressed (see compression offload) |
| * The response contains an invalid "ETag" header or multiple ETag headers |
| |
| Note: The compression does not emit the Warning header. |
| |
| Examples : |
| compression algo gzip |
| compression type text/html text/plain |
| |
| See also : "compression offload", "compression direction" |
| |
| compression offload |
| Makes HAProxy work as a compression offloader only. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| |
| The "offload" setting makes HAProxy remove the Accept-Encoding header to |
| prevent backend servers from compressing responses. It is strongly |
| recommended not to do this because this means that all the compression work |
| will be done on the single point where HAProxy is located. However in some |
| deployment scenarios, HAProxy may be installed in front of a buggy gateway |
| with broken HTTP compression implementation which can't be turned off. |
| In that case HAProxy can be used to prevent that gateway from emitting |
| invalid payloads. In this case, simply removing the header in the |
| configuration does not work because it applies before the header is parsed, |
| so that prevents HAProxy from compressing. The "offload" setting should |
| then be used for such scenarios. |
| |
| If this setting is used in a defaults section, a warning is emitted and the |
| option is ignored. |
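| |
| Example (an illustrative sketch) : |
|     compression algo gzip |
|     compression offload |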
| |
| See also : "compression type", "compression algo", "compression direction" |
| |
| compression direction <direction> |
| Makes HAProxy able to compress both requests and responses. |
| Valid values are "request", to compress only requests, "response", to |
| compress only responses, or "both", when you want to compress both. |
| The default value is "response". |
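| |
| Example (an illustrative sketch) enabling compression in both directions : |
|     compression direction both |
|     compression algo-req gzip |
|     compression algo-res gzip |
|     compression type-req application/json |
|     compression type-res text/html text/plain application/json |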
| |
| See also : "compression type", "compression algo", "compression offload" |
| |
| cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ] |
| [ postonly ] [ preserve ] [ httponly ] [ secure ] |
| [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ] |
| [ dynamic ] [ attr <value> ]* |
| Enable cookie-based persistence in a backend. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <name> is the name of the cookie which will be monitored, modified or |
| inserted in order to bring persistence. This cookie is sent to |
| the client via a "Set-Cookie" header in the response, and is |
| brought back by the client in a "Cookie" header in all requests. |
| Special care should be taken to choose a name which does not |
| conflict with any likely application cookie. Also, if the same |
| backends are subject to be used by the same clients (e.g. |
| HTTP/HTTPS), care should be taken to use different cookie names |
| between all backends if persistence between them is not desired. |
| |
| rewrite This keyword indicates that the cookie will be provided by the |
| server and that HAProxy will have to modify its value to set the |
| server's identifier in it. This mode is handy when the management |
| of complex combinations of "Set-cookie" and "Cache-control" |
| headers is left to the application. The application can then |
| decide whether or not it is appropriate to emit a persistence |
| cookie. Since all responses should be monitored, this mode |
| doesn't work in HTTP tunnel mode. Unless the application |
| behavior is very complex and/or broken, it is advised not to |
| start with this mode for new deployments. This keyword is |
| incompatible with "insert" and "prefix". |
| |
| insert This keyword indicates that the persistence cookie will have to |
| be inserted by HAProxy in server responses if the client did not |
| already have a cookie that would have permitted it to access this |
| server. When used without the "preserve" option, if the server |
| emits a cookie with the same name, it will be removed before |
| processing. For this reason, this mode can be used to upgrade |
| existing configurations running in the "rewrite" mode. The cookie |
| will only be a session cookie and will not be stored on the |
| client's disk. By default, unless the "indirect" option is added, |
| the server will see the cookies emitted by the client. Due to |
| caching effects, it is generally wise to add the "nocache" or |
| "postonly" keywords (see below). The "insert" keyword is not |
| compatible with "rewrite" and "prefix". |
| |
| prefix This keyword indicates that instead of relying on a dedicated |
| cookie for the persistence, an existing one will be completed. |
| This may be needed in some specific environments where the client |
| does not support more than one single cookie and the application |
| already needs it. In this case, whenever the server sets a cookie |
| named <name>, it will be prefixed with the server's identifier |
| and a delimiter. The prefix will be removed from all client |
| requests so that the server still finds the cookie it emitted. |
| Since all requests and responses are subject to being modified, |
| this mode doesn't work with tunnel mode. The "prefix" keyword is |
| not compatible with "rewrite" and "insert". Note: it is highly |
| recommended not to use "indirect" with "prefix", otherwise server |
| cookie updates would not be sent to clients. |
| |
| indirect When this option is specified, no cookie will be emitted to a |
| client which already has a valid one for the server which has |
| processed the request. If the server sets such a cookie itself, |
| it will be removed, unless the "preserve" option is also set. In |
| "insert" mode, this will additionally remove cookies from the |
| requests transmitted to the server, making the persistence |
| mechanism totally transparent from an application point of view. |
| Note: it is highly recommended not to use "indirect" with |
| "prefix", otherwise server cookie updates would not be sent to |
| clients. |
| |
| nocache This option is recommended in conjunction with the insert mode |
| when there is a cache between the client and HAProxy, as it |
| ensures that a cacheable response will be tagged non-cacheable if |
| a cookie needs to be inserted. This is important because if all |
| persistence cookies are added on a cacheable home page for |
| instance, then all customers will then fetch the page from an |
| outer cache and will all share the same persistence cookie, |
| leading to one server receiving much more traffic than others. |
| See also the "insert" and "postonly" options. |
| |
| postonly This option ensures that cookie insertion will only be performed |
| on responses to POST requests. It is an alternative to the |
| "nocache" option, because POST responses are not cacheable, so |
| this ensures that the persistence cookie will never get cached. |
| Since most sites do not need any sort of persistence before the |
| first POST which generally is a login request, this is a very |
| efficient method to optimize caching without risking to find a |
| persistence cookie in the cache. |
| See also the "insert" and "nocache" options. |
| |
| preserve This option may only be used with "insert" and/or "indirect". It |
| allows the server to emit the persistence cookie itself. In this |
| case, if a cookie is found in the response, HAProxy will leave it |
| untouched. This is useful in order to end persistence after a |
| logout request for instance. For this, the server just has to |
| emit a cookie with an invalid value (e.g. empty) or with a date in |
| the past. By combining this mechanism with the "disable-on-404" |
| check option, it is possible to perform a completely graceful |
| shutdown because users will definitely leave the server after |
| they logout. |
| |
| httponly This option tells HAProxy to add an "HttpOnly" cookie attribute |
| when a cookie is inserted. This attribute is used so that a |
| user agent doesn't share the cookie with non-HTTP components. |
| Please check RFC6265 for more information on this attribute. |
| |
| secure This option tells HAProxy to add a "Secure" cookie attribute when |
| a cookie is inserted. This attribute is used so that a user agent |
| never emits this cookie over non-secure channels, which means |
| that a cookie learned with this flag will be presented only over |
| SSL/TLS connections. Please check RFC6265 for more information on |
| this attribute. |
| |
| domain This option allows one to specify the domain at which a cookie |
| is inserted. It requires exactly one parameter: a valid domain |
| name. If the domain begins with a dot, the browser is allowed to |
| use it for any host ending with that name. It is also possible to |
| specify several domain names by invoking this option multiple |
| times. Some browsers might have small limits on the number of |
| domains, so be careful when doing that. For the record, sending |
| 10 domains to MSIE 6 or Firefox 2 works as expected. |
| |
| maxidle This option allows inserted cookies to be ignored after some idle |
| time. It only works with insert-mode cookies. When a cookie is |
| sent to the client, the date this cookie was emitted is sent too. |
| Upon further presentations of this cookie, if the date is older |
| than the delay indicated by the parameter (in seconds), it will |
| be ignored. Otherwise, it will be refreshed if needed when the |
| response is sent to the client. This is particularly useful to |
| prevent users who never close their browsers from remaining for |
| too long on the same server (e.g. after a farm size change). When |
| this option is set and a cookie has no date, it is always |
| accepted, but gets refreshed in the response. This maintains the |
| ability for admins to access their sites. Cookies that have a |
| date in the future further than 24 hours are ignored. Doing so |
| lets admins fix timezone issues without risking kicking users off |
| the site. |
| |
| maxlife This option allows inserted cookies to be ignored after some life |
| time, whether they're in use or not. It only works with insert |
| mode cookies. When a cookie is first sent to the client, the date |
| this cookie was emitted is sent too. Upon further presentations |
| of this cookie, if the date is older than the delay indicated by |
| the parameter (in seconds), it will be ignored. If the cookie in |
| the request has no date, it is accepted and a date will be set. |
| Cookies that have a date in the future further than 24 hours are |
| ignored. Doing so lets admins fix timezone issues without risking |
| kicking users off the site. Contrary to maxidle, this value is |
| not refreshed, only the first visit date counts. Both maxidle and |
| maxlife may be used at the same time. This is particularly useful to |
| prevent users who never close their browsers from remaining for |
| too long on the same server (e.g. after a farm size change). This |
| is stronger than the maxidle method in that it forces a |
| redispatch after some absolute delay. |
| |
| dynamic Activate dynamic cookies. When used, a session cookie is |
| dynamically created for each server, based on the IP and port |
| of the server, and a secret key, specified in the |
| "dynamic-cookie-key" backend directive. |
| The cookie will be regenerated each time the IP address changes, |
| and is only generated for IPv4/IPv6. |
| |
| attr This option tells HAProxy to add an extra attribute when a |
| cookie is inserted. The attribute value can contain any |
| characters except control ones or ";". This option may be |
| repeated. |
| |
| There can be only one persistence cookie per HTTP backend, and it can be |
| declared in a defaults section. The value of the cookie will be the value |
| indicated after the "cookie" keyword in a "server" statement. If no cookie |
| is declared for a given server, the cookie is not set. |
| |
| Examples : |
| cookie JSESSIONID prefix |
| cookie SRV insert indirect nocache |
| cookie SRV insert postonly indirect |
| cookie SRV insert indirect nocache maxidle 30m maxlife 8h |
| |
| See also : "balance source", "capture cookie", "server" and "ignore-persist". |
| |
| |
| declare capture [ request | response ] len <length> |
| Declares a capture slot. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments: |
| <length> is the length allowed for the capture. |
| |
| This declaration is only available in the frontend or listen section, but the |
| reserved slot can be used in the backends. The "request" keyword allocates a |
| capture slot for use in the request, and "response" allocates a capture slot |
| for use in the response. |
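|
|   For illustration, a minimal sketch (the proxy names, the captured header and
|   the slot id are arbitrary); the slot declared in the frontend is filled by a
|   backend rule referencing its id :
|
|       frontend fe_web
|           bind *:80
|           declare capture request len 32    # declared slots are numbered from 0
|           default_backend be_app
|
|       backend be_app
|           # fill the frontend's slot 0 with the request's User-Agent header
|           http-request capture req.hdr(User-Agent) id 0
|           server app1 192.168.0.11:80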
| |
| See also: "capture-req", "capture-res" (sample converters), |
| "capture.req.hdr", "capture.res.hdr" (sample fetches), |
| "http-request capture" and "http-response capture". |
| |
| |
| default-server [param*] |
| Change default options for a server in a backend |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments: |
| <param*> is a list of parameters for this server. The "default-server" |
| keyword accepts a large number of options and has a complete
| section dedicated to it. Please refer to section 5 for more |
| details. |
| |
| Example : |
| default-server inter 1000 weight 13 |
| |
| See also: "server" and section 5 about server options |
| |
| |
| default_backend <backend> |
| Specify the backend to use when no "use_backend" rule has been matched. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <backend> is the name of the backend to use. |
| |
| When doing content-switching between frontend and backends using the |
| "use_backend" keyword, it is often useful to indicate which backend will be |
| used when no rule has matched. It generally is the dynamic backend which |
| will catch all undetermined requests. |
| |
| Example : |
| |
| use_backend dynamic if url_dyn |
| use_backend static if url_css url_img extension_img |
| default_backend dynamic |
| |
| See also : "use_backend" |
| |
| |
| description <string> |
| Describe a listen, frontend or backend. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| Arguments : string |
| |
| Allows adding a sentence to describe the related object on the HAProxy HTML
| stats page. The description will be printed on the right of the object name
| it describes.
| There is no need to backslash spaces in the <string> argument.
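|
|   For example (the description text is arbitrary) :
|
|       backend be_app
|           description Application servers for the intranet portal
|           server app1 192.168.0.11:80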
| |
| |
| disabled |
| Disable a proxy, frontend or backend. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| The "disabled" keyword is used to disable an instance, mainly in order to |
| liberate a listening port or to temporarily disable a service. The instance |
| will still be created and its configuration will be checked, but it will be |
| created in the "stopped" state and will appear as such in the statistics. It |
| will not receive any traffic nor will it send any health-checks or logs. It |
| is possible to disable many instances at once by adding the "disabled" |
| keyword in a "defaults" section. |
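|
|   For example, to keep an instance configured but temporarily out of service
|   (the proxy name and port are arbitrary) :
|
|       frontend fe_legacy
|           bind :8080
|           disabled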
| |
| See also : "enabled" |
| |
| |
| dispatch <address>:<port> |
| Set a default server address |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| Arguments : |
| |
| <address> is the IPv4 address of the default server. Alternatively, a |
| resolvable hostname is supported, but this name will be resolved |
| during start-up. |
| |
| <port> is a mandatory port specification. All connections will be sent
| to this port, and it is not permitted to use port offsets as is |
| possible with normal servers. |
| |
| The "dispatch" keyword designates a default server for use when no other |
| server can take the connection. In the past it was used to forward non |
| persistent connections to an auxiliary load balancer. Due to its simple |
| syntax, it has also been used for simple TCP relays. For clarity, it is now
| recommended not to use it, and to use the "server" directive instead.
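|
|   For illustration only, a minimal sketch relaying every connection to a
|   single auxiliary address (the addresses are arbitrary); the "server"
|   directive remains the preferred form :
|
|       listen tcp_relay
|           bind :10001
|           mode tcp
|           dispatch 192.168.0.10:10001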
| |
| See also : "server" |
| |
| |
| dynamic-cookie-key <string> |
| Set the dynamic cookie secret key for a backend. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : The secret key to be used. |
| |
| When dynamic cookies are enabled (see the "dynamic" directive for cookie), |
| a dynamic cookie is created for each server (unless one is explicitly |
| specified on the "server" line), using a hash of the IP address of the |
| server, the TCP port, and the secret key. |
| That way, we can ensure session persistence across multiple load-balancers, |
| even if servers are dynamically added or removed. |
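|
|   A minimal sketch, assuming two load balancers share the same secret key so
|   that they both derive identical cookies (the key and addresses are
|   arbitrary) :
|
|       backend be_app
|           cookie SRV insert indirect nocache dynamic
|           dynamic-cookie-key MySecretPassphrase
|           server app1 192.168.0.11:80
|           server app2 192.168.0.12:80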
| |
| enabled |
| Enable a proxy, frontend or backend. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| The "enabled" keyword is used to explicitly enable an instance, when the |
| defaults has been set to "disabled". This is very rarely used. |
| |
| See also : "disabled" |
| |
| |
| errorfile <code> <file> |
| Return a file's contents instead of errors generated by HAProxy
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <code> is the HTTP status code. Currently, HAProxy is capable of |
| generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, |
| 413, 425, 429, 500, 501, 502, 503, and 504. |
| |
| <file> designates a file containing the full HTTP response. It is |
| recommended to follow the common practice of appending ".http" to |
| the filename so that people do not confuse the response with HTML |
| error pages, and to use absolute paths, since files are read |
| before any chroot is performed. |
| |
| It is important to understand that this keyword is not meant to rewrite |
| errors returned by the server, but errors detected and returned by HAProxy. |
| This is why the list of supported errors is limited to a small set. |
| |
| Code 200 is emitted in response to requests matching a "monitor-uri" rule. |
| |
| The files are parsed when HAProxy starts and must be valid according to the |
| HTTP specification. They should not exceed the configured buffer size |
| (BUFSIZE), which generally is 16 kB, otherwise an internal error will be |
| returned. It is also wise not to put any reference to local contents |
| (e.g. images) in order to avoid loops between the client and HAProxy when all |
| servers are down, causing an error to be returned instead of an |
| image. Finally, the response cannot exceed (tune.bufsize - tune.maxrewrite)
| so that "http-after-response" rules still have room to operate (see |
| "tune.maxrewrite"). |
| |
| The files are read at the same time as the configuration and kept in memory. |
| For this reason, the errors continue to be returned even when the process is |
| chrooted, and no file change is considered while the process is running. A |
| simple method for developing those files consists in associating them to the |
| 403 status code and interrogating a blocked URL. |
| |
| See also : "http-error", "errorloc", "errorloc302", "errorloc303" |
| |
| Example : |
| errorfile 400 /etc/haproxy/errorfiles/400badreq.http |
| errorfile 408 /dev/null # work around Chrome pre-connect bug |
| errorfile 403 /etc/haproxy/errorfiles/403forbid.http |
| errorfile 503 /etc/haproxy/errorfiles/503sorry.http |
| |
| |
| errorfiles <name> [<code> ...] |
| Import, fully or partially, the error files defined in the <name> http-errors |
| section. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <name> is the name of an existing http-errors section. |
| |
| <code> is an HTTP status code. Several status codes may be listed.
| Currently, HAProxy is capable of generating codes 200, 400, 401, |
| 403, 404, 405, 407, 408, 410, 413, 425, 429, 500, 501, 502, 503, |
| and 504. |
| |
| Errors defined in the http-errors section with the name <name> are imported |
| in the current proxy. If no status code is specified, all error files of the |
| http-errors section are imported. Otherwise, only error files associated with
| the listed status codes are imported. Those error files override the custom
| errors already defined for the proxy, and they may be overridden by subsequent
| ones. Functionally, it is exactly the same as declaring all error files by
| hand using "errorfile" directives.
| |
| See also : "http-error", "errorfile", "errorloc", "errorloc302" , |
| "errorloc303" and section 3.8 about http-errors. |
| |
| Example : |
| errorfiles generic |
| errorfiles site-1 403 404 |
| |
| |
| errorloc <code> <url> |
| errorloc302 <code> <url> |
| Return an HTTP redirection to a URL instead of errors generated by HAProxy |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <code> is the HTTP status code. Currently, HAProxy is capable of |
| generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, |
| 413, 425, 429, 500, 501, 502, 503, and 504. |
| |
| <url> is the exact contents of the "Location" header. It may contain
| either a relative URI to an error page hosted on the same site, |
| or an absolute URI designating an error page on another site. |
| Special care should be given to relative URIs to avoid redirect |
| loops if the URI itself may generate the same error (e.g. 500). |
| |
| It is important to understand that this keyword is not meant to rewrite |
| errors returned by the server, but errors detected and returned by HAProxy. |
| This is why the list of supported errors is limited to a small set. |
| |
| Code 200 is emitted in response to requests matching a "monitor-uri" rule. |
| |
| Note that both keywords return the HTTP 302 status code, which tells the
| client to fetch the designated URL using the same HTTP method. This can be |
| quite problematic in case of non-GET methods such as POST, because the URL |
| sent to the client might not be allowed for something other than GET. To |
| work around this problem, please use "errorloc303" which sends the HTTP 303
| status code, indicating to the client that the URL must be fetched with a GET |
| request. |
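|
|   For example (the URLs are arbitrary) :
|
|       errorloc 503 /sorry.html
|       errorloc302 403 http://errors.example.com/forbidden.html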
| |
| See also : "http-error", "errorfile", "errorloc303" |
| |
| |
| errorloc303 <code> <url> |
| Return an HTTP redirection to a URL instead of errors generated by HAProxy |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <code> is the HTTP status code. Currently, HAProxy is capable of |
| generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, |
| 413, 425, 429, 500, 501, 502, 503, and 504. |
| |
| <url> is the exact contents of the "Location" header. It may contain
| either a relative URI to an error page hosted on the same site, |
| or an absolute URI designating an error page on another site. |
| Special care should be given to relative URIs to avoid redirect |
| loops if the URI itself may generate the same error (e.g. 500). |
| |
| It is important to understand that this keyword is not meant to rewrite |
| errors returned by the server, but errors detected and returned by HAProxy. |
| This is why the list of supported errors is limited to a small set. |
| |
| Code 200 is emitted in response to requests matching a "monitor-uri" rule. |
| |
| Note that this keyword returns the HTTP 303 status code, which tells the
| client to fetch the designated URL using the HTTP GET method. This
| solves the usual problems associated with "errorloc" and the 302 code. It is |
| possible that some very old browsers designed before HTTP/1.1 do not support |
| it, but no such problem has been reported till now. |
| |
| See also : "http-error", "errorfile", "errorloc", "errorloc302" |
| |
| |
| email-alert from <emailaddr> |
| Declare the from email address to be used in both the envelope and header |
| of email alerts. This is the address that email alerts are sent from. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Arguments : |
| |
| <emailaddr> is the from email address to use when sending email alerts |
| |
| Also requires "email-alert mailers" and "email-alert to" to be set;
| only then is sending of email alerts enabled for the proxy.
| |
| See also : "email-alert level", "email-alert mailers", |
| "email-alert myhostname", "email-alert to", section 3.6 about |
| mailers. |
| |
| |
| email-alert level <level> |
| Declare the maximum log level of messages for which email alerts will be |
| sent. This acts as a filter on the sending of email alerts. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Arguments : |
| |
| <level> One of the 8 syslog levels: |
| emerg alert crit err warning notice info debug |
| The above syslog levels are ordered from lowest to highest. |
| |
| By default the level is alert.
| |
| Also requires "email-alert from", "email-alert mailers" and
| "email-alert to" to be set; only then is sending of email alerts
| enabled for the proxy.
| |
| Alerts are sent when : |
| |
| * An un-paused server is marked as down and <level> is alert or lower |
| * A paused server is marked as down and <level> is notice or lower |
| * A server is marked as up or enters the drain state and <level> |
| is notice or lower |
| * "option log-health-checks" is enabled, <level> is info or lower, |
| and a health check status update occurs |
| |
| See also : "email-alert from", "email-alert mailers", |
| "email-alert myhostname", "email-alert to", |
| section 3.6 about mailers. |
| |
| |
| email-alert mailers <mailersect> |
| Declare the mailers to be used when sending email alerts |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Arguments : |
| |
| <mailersect> is the name of the mailers section used to send email alerts.
| |
| Also requires "email-alert from" and "email-alert to" to be set;
| only then is sending of email alerts enabled for the proxy.
| |
| See also : "email-alert from", "email-alert level", "email-alert myhostname", |
| "email-alert to", section 3.6 about mailers. |
| |
| |
| email-alert myhostname <hostname> |
| Declare the hostname to be used when communicating with mailers.
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Arguments : |
| |
| <hostname> is the hostname to use when communicating with mailers |
| |
| By default, the system's hostname is used.
| |
| Also requires "email-alert from", "email-alert mailers" and
| "email-alert to" to be set; only then is sending of email alerts
| enabled for the proxy.
| |
| See also : "email-alert from", "email-alert level", "email-alert mailers", |
| "email-alert to", section 3.6 about mailers. |
| |
| |
| email-alert to <emailaddr> |
| Declare both the recipient address in the envelope and the To address in the
| header of email alerts. This is the address that email alerts are sent to.
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Arguments : |
| |
| <emailaddr> is the to email address to use when sending email alerts |
| |
| Also requires "email-alert from" and "email-alert mailers" to be set;
| only then is sending of email alerts enabled for the proxy.
| |
| See also : "email-alert from", "email-alert level", "email-alert mailers", |
| "email-alert myhostname", section 3.6 about mailers. |
| |
| |
| error-log-format <string> |
| Specifies the log format string to use in case of connection error on the
| frontend side.
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | no |
| |
| This directive specifies the log format string that will be used for logs |
| containing information related to errors, timeouts, retries, redispatches or
| HTTP status code 5xx. This format will in short be used for every log line |
| that would be concerned by the "log-separate-errors" option, including |
| connection errors described in section 8.2.5. |
| |
| If the directive is used in a defaults section, all subsequent frontends will |
| use the same log format. Please see section 8.2.6 which covers the custom log |
| format string in depth. |
| |
| "error-log-format" directive overrides previous "error-log-format" |
| directives. |
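|
|   For example, keeping the regular traffic format but logging a shorter line
|   for error paths (the format string is only an illustration; see section
|   8.2.6 for the available tags) :
|
|       frontend fe_web
|           bind :80
|           log global
|           option httplog
|           error-log-format "%ci:%cp [%tr] %ft %b/%s %ST %ts"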
| |
| |
| force-persist { if | unless } <condition> |
| Declare a condition to force persistence on down servers |
| May be used in sections: defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| By default, requests are not dispatched to down servers. It is possible to |
| force this using "option persist", but it is unconditional and redispatches |
| to a valid server if "option redispatch" is set. That leaves very few
| possibilities to force some requests to reach a server which is artificially
| marked down for maintenance operations. |
| |
| The "force-persist" statement allows one to declare various ACL-based |
| conditions which, when met, will cause a request to ignore the down status of |
| a server and still try to connect to it. That makes it possible to start a
| server which still replies an error to the health checks, and to run a
| specially configured browser to test the service. Among the handy methods, one
| could
| use a specific source IP address, or a specific cookie. The cookie also has |
| the advantage that it can easily be added/removed on the browser from a test |
| page. Once the service is validated, it is then possible to open the service |
| to the world by returning a valid response to health checks. |
| |
| The forced persistence is enabled when an "if" condition is met, or when an
| "unless" condition is not met. The final redispatch is always disabled when
| this is used.
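|
|   For example, letting requests coming from an internal test network reach a
|   server that was administratively marked down (the ACL, addresses and cookie
|   name are arbitrary) :
|
|       backend be_app
|           acl from_testers src 10.0.0.0/24
|           force-persist if from_testers
|           cookie SRV insert indirect nocache
|           server app1 192.168.0.11:80 cookie s1 check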
| |
| See also : "option redispatch", "ignore-persist", "persist", |
| and section 7 about ACL usage. |
| |
| |
| filter <name> [param*] |
| Add the filter <name> in the filter list attached to the proxy. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| Arguments : |
| <name> is the name of the filter. Officially supported filters are |
| referenced in section 9. |
| |
| <param*> is a list of parameters accepted by the filter <name>. The |
| parsing of these parameters is the responsibility of the
| filter. Please refer to the documentation of the corresponding |
| filter (section 9) for all details on the supported parameters. |
| |
| Multiple occurrences of the filter line can be used for the same proxy. The |
| same filter can be referenced many times if needed. |
| |
| Example: |
| listen www
| bind *:80 |
| |
| filter trace name BEFORE-HTTP-COMP |
| filter compression |
| filter trace name AFTER-HTTP-COMP |
| |
| compression algo gzip |
| compression offload |
| |
| server srv1 192.168.0.1:80 |
| |
| See also : section 9. |
| |
| |
| fullconn <conns> |
| Specify at what backend load the servers will reach their maxconn |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <conns> is the number of connections on the backend which will make the |
| servers use the maximal number of connections. |
| |
| When a server has a "maxconn" parameter specified, it means that its number |
| of concurrent connections will never go higher. Additionally, if it has a |
| "minconn" parameter, it indicates a dynamic limit following the backend's |
| load. The server will then always accept at least <minconn> connections, |
| never more than <maxconn>, and the limit will be on the ramp between both |
| values when the backend has less than <conns> concurrent connections. This |
| makes it possible to limit the load on the servers during normal loads, but |
| push it further for important loads without overloading the servers during |
| exceptional loads. |
| |
| Since it's hard to get this value right, HAProxy automatically sets it to |
| 10% of the sum of the maxconns of all frontends that may branch to this |
| backend (based on "use_backend" and "default_backend" rules). That way it's |
| safe to leave it unset. However, "use_backend" rules involving dynamic names
| are not counted since there is no way to know if they could match or not.
| |
| Example : |
| # The servers will accept between 100 and 1000 concurrent connections each |
| # and the maximum of 1000 will be reached when the backend reaches 10000 |
| # connections. |
| backend dynamic |
| fullconn 10000 |
| server srv1 dyn1:80 minconn 100 maxconn 1000 |
| server srv2 dyn2:80 minconn 100 maxconn 1000 |
| |
| See also : "maxconn", "server" |
| |
| |
| hash-balance-factor <factor> |
| Specify the balancing factor for bounded-load consistent hashing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | no | yes |
| Arguments : |
| <factor> is the control for the maximum number of concurrent requests to |
| send to a server, expressed as a percentage of the average number |
| of concurrent requests across all of the active servers. |
| |
| Specifying a "hash-balance-factor" for a server with "hash-type consistent" |
| enables an algorithm that prevents any one server from getting too many |
| requests at once, even if some hash buckets receive many more requests than |
| others. Setting <factor> to 0 (the default) disables the feature. Otherwise, |
| <factor> is a percentage greater than 100. For example, if <factor> is 150, |
| then no server will be allowed to have a load more than 1.5 times the average. |
| If server weights are used, they will be respected. |
| |
| If the first-choice server is disqualified, the algorithm will choose another |
| server based on the request hash, until a server with additional capacity is |
| found. A higher <factor> allows more imbalance between the servers, while a |
| lower <factor> means that more servers will be checked on average, affecting |
| performance. Reasonable values are from 125 to 200. |
| |
| This setting is also used by "balance random" which internally relies on the |
| consistent hashing mechanism. |
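|
|   For example, capping each server at roughly 1.5 times the average load while
|   hashing on the URI (the numbers and addresses are only an illustration) :
|
|       backend be_cache
|           balance uri
|           hash-type consistent
|           hash-balance-factor 150
|           server cache1 192.168.0.21:80
|           server cache2 192.168.0.22:80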
| |
| See also : "balance" and "hash-type". |
| |
| |
| hash-type <method> <function> <modifier> |
| Specify a method to use for mapping hashes to servers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <method> is the method used to select a server from the hash computed by |
| the <function> : |
| |
| map-based the hash table is a static array containing all alive servers. |
| The hashes will be very smooth, will consider weights, but |
| will be static in that weight changes while a server is up |
| will be ignored. This means that there will be no slow start. |
| Also, since a server is selected by its position in the array, |
| most mappings are changed when the server count changes. This |
| means that when a server goes up or down, or when a server is |
| added to a farm, most connections will be redistributed to |
| different servers. This can be inconvenient with caches for |
| instance. |
| |
| consistent the hash table is a tree filled with many occurrences of each |
| server. The hash key is looked up in the tree and the closest |
| server is chosen. This hash is dynamic, it supports changing |
| weights while the servers are up, so it is compatible with the |
| slow start feature. It has the advantage that when a server |
| goes up or down, only its associations are moved. When a |
| server is added to the farm, only a small part of the mappings
| are redistributed, making it an ideal method for caches. |
| However, due to its principle, the distribution will never be |
| very smooth and it may sometimes be necessary to adjust a |
| server's weight or its ID to get a more balanced distribution. |
| In order to get the same distribution on multiple load |
| balancers, it is important that all servers have the exact |
| same IDs. Note: consistent hash uses sdbm and avalanche if no |
| hash function is specified. |
| |
| <function> is the hash function to be used : |
| |
| sdbm this function was created initially for sdbm (a public-domain |
| reimplementation of ndbm) database library. It was found to do |
| well in scrambling bits, causing better distribution of the keys |
| and fewer splits. It also happens to be a good general hashing |
| function with good distribution, unless the total server weight |
| is a multiple of 64, in which case applying the avalanche |
| modifier may help. |
| |
| djb2 this function was first proposed by Dan Bernstein many years ago |
| on comp.lang.c. Studies have shown that for certain workloads this
| function provides a better distribution than sdbm. It generally |
| works well with text-based inputs though it can perform extremely |
| poorly with numeric-only input or when the total server weight is |
| a multiple of 33, unless the avalanche modifier is also used. |
| |
| wt6 this function was designed for HAProxy while testing other |
| functions in the past. It is not as smooth as the other ones, but |
| is much less sensitive to the input data set or to the number of
| servers. It can make sense as an alternative to sdbm+avalanche or |
| djb2+avalanche for consistent hashing or when hashing on numeric |
| data such as a source IP address or a visitor identifier in a URL |
| parameter. |
| |
| crc32 this is the most common CRC32 implementation as used in Ethernet, |
| gzip, PNG, etc. It is slower than the other ones but may provide |
| a better distribution or less predictable results especially when |
| used on strings. |
| |
| <modifier> indicates an optional method applied after hashing the key : |
| |
| avalanche This directive indicates that the result from the hash |
| function above should not be used in its raw form but that |
| a 4-byte full avalanche hash must be applied first. The |
| purpose of this step is to mix the resulting bits from the |
| previous hash in order to avoid any undesired effect when |
| the input contains some limited values or when the number of |
| servers is a multiple of one of the hash's components (64 |
| for SDBM, 33 for DJB2). Enabling avalanche tends to make the |
| result less predictable, but it's also not as smooth as when |
| using the original function. Some testing might be needed |
| with some workloads. This hash is one of the many proposed |
| by Bob Jenkins. |
| |
| The default hash type is "map-based" and is recommended for most usages. The |
| default function is "sdbm". The selection of a function should be based on
| the range of the values being hashed.
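|
|   For example, a source-based farm using consistent hashing with the avalanche
|   modifier (the server addresses are arbitrary) :
|
|       backend be_app
|           balance source
|           hash-type consistent sdbm avalanche
|           server app1 192.168.0.11:80
|           server app2 192.168.0.12:80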
| |
| See also : "balance", "hash-balance-factor", "server" |
| |
| |
| http-after-response <action> <options...> [ { if | unless } <condition> ] |
| Access control for all Layer 7 responses (server, applet/service and internal |
| ones). |
| |
| May be used in sections: defaults | frontend | listen | backend |
| yes(!) | yes | yes | yes |
| |
| The http-after-response statement defines a set of rules which apply to layer |
| 7 processing. The rules are evaluated in their declaration order when they |
| are met in a frontend, listen or backend section. Any rule may optionally be |
| followed by an ACL-based condition, in which case it will only be evaluated |
| if the condition is true. Since these rules apply on responses, the backend |
| rules are applied first, followed by the frontend's rules. |
| |
| Unlike http-response rules, these rules are applied to all responses, both
| those coming from servers and those generated by HAProxy. These rules are
| evaluated at the end of the response analysis, before the data forwarding.
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - add-header <name> <fmt> |
| - allow |
| - capture <sample> id <id> |
| - del-acl(<file-name>) <key fmt> |
| - del-header <name> [ -m <meth> ] |
| - del-map(<file-name>) <key fmt> |
| - replace-header <name> <regex-match> <replace-fmt> |
| - replace-value <name> <regex-match> <replace-fmt> |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - set-header <name> <fmt> |
| - set-log-level <level> |
| - set-map(<file-name>) <key fmt> <value fmt> |
| - set-status <status> [reason <str>] |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - strict-mode { on | off } |
| - unset-var(<var-name>) |
| |
| The supported actions are described below. |
| |
| There is no limit to the number of http-after-response statements per |
| instance. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Note: Errors emitted in early stages of the request parsing are handled by the
| multiplexer at a lower level, before any http analysis. Thus no |
| http-after-response ruleset is evaluated on these errors. |
| |
| Example: |
| http-after-response set-header Strict-Transport-Security "max-age=31536000" |
| http-after-response set-header Cache-Control "no-store,no-cache,private" |
| http-after-response set-header Pragma "no-cache" |
| |
| http-after-response add-header <name> <fmt> [ { if | unless } <condition> ] |
| |
| This appends an HTTP header field whose name is specified in <name> and whose |
| value is defined by <fmt>. Please refer to "http-request add-header" for a |
| complete description. |
| |
| http-after-response allow [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and lets the response pass the check. |
| No further "http-after-response" rules are evaluated for the current section. |
| |
| http-after-response capture <sample> id <id> [ { if | unless } <condition> ] |
| |
| This captures sample expression <sample> from the response buffer, and |
| converts it to a string. Please refer to "http-response capture" for a |
| complete description. |
| |
| http-after-response del-acl(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to delete an entry from an ACL. Please refer to "http-request |
| del-acl" for a complete description. |
| |
| http-after-response del-header <name> [ -m <meth> ] [ { if | unless } <condition> ] |
| |
| This removes all HTTP header fields whose name is specified in <name>. Please |
| refer to "http-request del-header" for a complete description. |
| |
| http-after-response del-map(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to delete an entry from a MAP. Please refer to "http-request |
| del-map" for a complete description. |
| |
| http-after-response replace-header <name> <regex-match> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "http-response replace-header". |
| |
| Example: |
| http-after-response replace-header Set-Cookie (C=[^;]*);(.*) \1;ip=%bi;\2 |
| |
| # applied to: |
| Set-Cookie: C=1; expires=Tue, 14-Jun-2016 01:40:45 GMT |
| |
| # outputs: |
| Set-Cookie: C=1;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT |
| |
| # assuming the backend IP is 192.168.1.20. |
| |
| http-after-response replace-value <name> <regex-match> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "http-response replace-value". |
| |
| Example: |
| http-after-response replace-value Cache-control ^public$ private |
| |
| # applied to: |
| Cache-Control: max-age=3600, public |
| |
| # outputs: |
| Cache-Control: max-age=3600, private |
| |
| http-after-response sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-add-gpc" for |
| a complete description. |
| |
| http-after-response sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| http-after-response sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| http-after-response sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| These actions increment the General Purpose Counters according to the sticky
| counter designated by <sc-id>. Please refer to "http-request sc-inc-gpc", |
| "http-request sc-inc-gpc0" and "http-request sc-inc-gpc1" for a complete |
| description. |
| |
| http-after-response sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| http-after-response sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| These actions set the 32-bit unsigned General Purpose Tags according to the |
| sticky counter designated by <sc-id>. Please refer to "http-request |
| sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. |
| |
| http-after-response set-log-level <level> [ { if | unless } <condition> ] |
| |
| This is used to change the log level of the current response. Please refer to |
| "http-request set-log-level" for a complete description. |
| |
| http-after-response set-map(<file-name>) <key fmt> <value fmt> |
| |
| This is used to add a new entry into a MAP. Please refer to "http-request |
| set-map" for a complete description. |
| |
| http-after-response set-header <name> <fmt> [ { if | unless } <condition> ] |
| |
| This does the same as "http-after-response add-header" except that the header |
| name is first removed if it existed. This is useful when passing security |
| information to the server, where the header must not be manipulated by |
| external users. |
| |
| http-after-response set-status <status> [reason <str>] |
| [ { if | unless } <condition> ] |
| |
| This replaces the response status code with <status> which must be an integer |
| between 100 and 999. Please refer to "http-response set-status" for a complete |
| description. |
| |
| http-after-response set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| http-after-response set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. Please refer to "http-request set-var" and "http-request set-var-fmt" |
| for a complete description. |
| |
| http-after-response strict-mode { on | off } [ { if | unless } <condition> ] |
| |
| This enables or disables the strict rewriting mode for following |
| rules. Please refer to "http-request strict-mode" for a complete description. |
| |
| http-after-response unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. See "http-request set-var" for details |
| about <var-name>. |
| |
| |
| http-check comment <string> |
| Defines a comment for the following http-check rule, reported in logs if
| it fails. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <string> is the comment message to add in logs if the following http-check |
| rule fails. |
| |
| It only works for connect, send and expect rules. It is useful for
| user-friendly error reporting.
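|
|   For example (the comment string and URI are arbitrary) :
|
|       option httpchk
|       http-check send meth GET uri /health
|       http-check comment "expecting a 200 from the health URI"
|       http-check expect status 200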
| |
| See also : "option httpchk", "http-check connect", "http-check send" and |
| "http-check expect". |
| |
| |
| http-check connect [default] [port <expr>] [addr <ip>] [send-proxy] |
| [via-socks4] [ssl] [sni <sni>] [alpn <alpn>] [linger] |
| [proto <name>] [comment <msg>] |
| Opens a new connection to perform an HTTP health check |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| default Use default options of the server line to do the health |
| checks. The server options are used only if not redefined. |
| |
| port <expr> if not set, the check port or the server port is used.
| It tells HAProxy where to open the connection to.
| <port> must be a valid TCP port, either an integer from 1
| to 65535 or a sample-fetch expression.
| |
| addr <ip> defines the IP address to do the health check. |
| |
| send-proxy send a PROXY protocol string |
| |
| via-socks4 enables outgoing health checks using upstream socks4 proxy. |
| |
| ssl opens a ciphered connection |
| |
| sni <sni> specifies the SNI to use to do health checks over SSL. |
| |
| alpn <alpn> defines which protocols to advertise with ALPN. The protocol |
| list consists of a comma-delimited list of protocol names,
| for instance: "h2,http/1.1". If it is not set, the server ALPN |
| is used. |
| |
| proto <name> forces the multiplexer's protocol to use for this connection. |
| It must be an HTTP mux protocol and it must be usable on the |
| backend side. The list of available protocols is reported in |
| haproxy -vv. |
| |
| linger cleanly close the connection instead of using a single RST. |
| |
| Just like with tcp-check health checks, it is possible to configure the
| connection used to perform the HTTP health check. This directive should also
| be used to describe a scenario involving several request/response exchanges,
| possibly on different ports or with different servers.
| |
| When no TCP port is configured on the server line and no server "port"
| directive is used, then the first step of the http-check sequence must be to
| specify the port with an "http-check connect" rule.
|
| If a 'connect' rule is used in an http-check ruleset, it is also mandatory to
| start the ruleset with a 'connect' rule. The purpose is to ensure admins know
| what they are doing.
| |
| When a connect must start the ruleset, it may still be preceded by set-var,
| unset-var or comment rules. |
| |
| Examples : |
| # check HTTP and HTTPS services on a server.
| # first open port 80 thanks to the server line port directive, then
| # http-check opens port 443, ciphered, and runs a request on it:
| option httpchk |
| |
| http-check connect |
| http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu |
| http-check expect status 200-399 |
| http-check connect port 443 ssl sni haproxy.1wt.eu |
| http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu |
| http-check expect status 200-399 |
| |
| server www 10.0.0.1 check port 80 |
| |
| See also : "option httpchk", "http-check send", "http-check expect" |
| |
| |
| http-check disable-on-404 |
| Enable a maintenance mode upon HTTP/404 response to health-checks |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When this option is set, a server which returns an HTTP code 404 will be |
| excluded from further load-balancing, but will still receive persistent |
| connections. This provides a very convenient method for Web administrators |
| to perform a graceful shutdown of their servers. It is also important to note |
| that a server which is detected as failed while it was in this mode will not |
| generate an alert, just a notice. If the server responds 2xx or 3xx again, it |
| will immediately be reinserted into the farm. The status on the stats page |
| reports "NOLB" for a server in this mode. It is important to note that this |
| option only works in conjunction with the "httpchk" option. If this option |
| is used with "http-check expect", then it has precedence over it so that 404 |
| responses will still be considered as soft-stop. Note also that a stopped |
| server will stay stopped even if it replies 404s. This option is only |
| evaluated for running servers. |
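|
|   For example, a farm where returning 404 on the check URI gracefully drains a
|   server (the URI and addresses are arbitrary) :
|
|       backend be_app
|           option httpchk GET /health
|           http-check disable-on-404
|           server app1 192.168.0.11:80 check
|           server app2 192.168.0.12:80 check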
| |
| See also : "option httpchk" and "http-check expect". |
| |
| |
| http-check expect [min-recv <int>] [comment <msg>] |
| [ok-status <st>] [error-status <st>] [tout-status <st>] |
| [on-success <fmt>] [on-error <fmt>] [status-code <expr>] |
| [!] <match> <pattern> |
| Make HTTP health checks consider response contents or specific status codes |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| min-recv is optional and can define the minimum amount of data required to |
| evaluate the current expect rule. If the number of received bytes |
| is under this limit, the check will wait for more data. This |
| option can be used to resolve some ambiguous matching rules or to |
| avoid executing costly regex matches on content known to be still |
| incomplete. If an exact string is used, the minimum between the |
| string length and this parameter is used. This parameter is |
| ignored if it is set to -1. If the expect rule does not match, |
| the check will wait for more data. If set to 0, the evaluation |
| result is always conclusive. |
| |
| ok-status <st> is optional and can be used to set the check status if |
| the expect rule is successfully evaluated and if it is |
| the last rule in the tcp-check ruleset. "L7OK", "L7OKC", |
| "L6OK" and "L4OK" are supported : |
| - L7OK : check passed on layer 7 |
| - L7OKC : check conditionally passed on layer 7, set |
| server to NOLB state. |
| - L6OK : check passed on layer 6 |
| - L4OK : check passed on layer 4 |
| By default "L7OK" is used. |
| |
| error-status <st> is optional and can be used to set the check status if |
| an error occurred during the expect rule evaluation. |
| "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are |
| supported : |
| - L7OKC : check conditionally passed on layer 7, set |
| server to NOLB state. |
| - L7RSP : layer 7 invalid response - protocol error |
| - L7STS : layer 7 response error, for example HTTP 5xx |
| - L6RSP : layer 6 invalid response - protocol error |
| - L4CON : layer 1-4 connection problem |
| By default "L7RSP" is used. |
| |
| tout-status <st> is optional and can be used to set the check status if |
| a timeout occurred during the expect rule evaluation. |
| "L7TOUT", "L6TOUT", and "L4TOUT" are supported : |
| - L7TOUT : layer 7 (HTTP/SMTP) timeout |
| - L6TOUT : layer 6 (SSL) timeout |
| - L4TOUT : layer 1-4 timeout |
| By default "L7TOUT" is used. |
| |
| on-success <fmt> is optional and can be used to customize the |
| informational message reported in logs if the expect |
| rule is successfully evaluated and if it is the last rule |
| in the tcp-check ruleset. <fmt> is a log-format string. |
| |
| on-error <fmt> is optional and can be used to customize the |
| informational message reported in logs if an error |
| occurred during the expect rule evaluation. <fmt> is a |
| log-format string. |
| |
| <match> is a keyword indicating how to look for a specific pattern in the |
| response. The keyword may be one of "status", "rstatus", "hdr", |
| "fhdr", "string", or "rstring". The keyword may be preceded by an |
| exclamation mark ("!") to negate the match. Spaces are allowed |
| between the exclamation mark and the keyword. See below for more |
| details on the supported keywords. |
| |
| <pattern> is the pattern to look for. It may be a string, a regular |
| expression or a more complex pattern with several arguments. If |
| the string pattern contains spaces, they must be escaped with the |
| usual backslash ('\'). |
| |
| By default, "option httpchk" considers that response statuses 2xx and 3xx |
| are valid, and that others are invalid. When "http-check expect" is used, |
| it defines what is considered valid or invalid. Only one "http-check" |
| statement is supported in a backend. If a server fails to respond or times |
| out, the check obviously fails. The available matches are : |
| |
| status <codes> : test the status codes found parsing <codes> string. It
| must be a comma-separated list of status codes or code
| ranges. A health check response will be considered as
| valid if the response's status code matches any status |
| code or is inside any range of the list. If the "status" |
| keyword is prefixed with "!", then the response will be |
| considered invalid if the status code matches. |
| |
| rstatus <regex> : test a regular expression for the HTTP status code. |
| A health check response will be considered valid if the |
| response's status code matches the expression. If the |
| "rstatus" keyword is prefixed with "!", then the response |
| will be considered invalid if the status code matches. |
| This is mostly used to check for multiple codes. |
| |
| hdr { name | name-lf } [ -m <meth> ] <name> |
| [ { value | value-lf } [ -m <meth> ] <value> ] :
| test the specified header pattern on the HTTP response |
| headers. The name pattern is mandatory but the value |
| pattern is optional. If not specified, only the header |
| presence is verified. <meth> is the matching method, |
| applied on the header name or the header value. Supported |
| matching methods are "str" (exact match), "beg" (prefix |
| match), "end" (suffix match), "sub" (substring match) or |
| "reg" (regex match). If not specified, exact matching |
| method is used. If the "name-lf" parameter is used, |
| <name> is evaluated as a log-format string. If "value-lf" |
| parameter is used, <value> is evaluated as a log-format |
| string. These parameters cannot be used with the regex |
| matching method. Finally, the header value is considered |
| as a comma-separated list. Note that matching is case
| insensitive on the header names.
| |
| fhdr { name | name-lf } [ -m <meth> ] <name> |
| [ { value | value-lf } [ -m <meth> ] <value> ] :
| test the specified full header pattern on the HTTP |
| response headers. It does exactly the same as the "hdr"
| keyword, except that the full header value is tested and
| commas are not considered as delimiters.
| |
| string <string> : test the exact string match in the HTTP response body. |
| A health check response will be considered valid if the |
| response's body contains this exact string. If the |
| "string" keyword is prefixed with "!", then the response |
| will be considered invalid if the body contains this |
| string. This can be used to look for a mandatory word at |
| the end of a dynamic page, or to detect a failure when a |
| specific error appears on the check page (e.g. a stack |
| trace). |
| |
| rstring <regex> : test a regular expression on the HTTP response body. |
| A health check response will be considered valid if the |
| response's body matches this expression. If the "rstring" |
| keyword is prefixed with "!", then the response will be |
| considered invalid if the body matches the expression. |
| This can be used to look for a mandatory word at the end |
| of a dynamic page, or to detect a failure when a specific |
| error appears on the check page (e.g. a stack trace). |
| |
| string-lf <fmt> : test a log-format string match in the HTTP response body. |
| A health check response will be considered valid if the |
| response's body contains the string resulting from the
| evaluation of <fmt>, which follows the log-format rules. |
| If prefixed with "!", then the response will be |
| considered invalid if the body contains the string. |
| |
| It is important to note that the responses will be limited to a certain size |
| defined by the global "tune.bufsize" option, which defaults to 16384 bytes. |
| Thus, too large responses may not contain the mandatory pattern when using |
| "string" or "rstring". If a large response is absolutely required, it is |
| possible to change the default max size by setting the global variable. |
| However, it is worth keeping in mind that parsing very large responses can |
| waste some CPU cycles, especially when regular expressions are used, and that |
| it is always better to focus the checks on smaller resources. |
| |
| In an http-check ruleset, the last expect rule may be implicit. If no expect |
| rule is specified after the last "http-check send", an implicit expect rule |
| is defined to match on 2xx or 3xx status codes. It means this rule is also |
| defined if there is no "http-check" rule at all, when only "option httpchk" |
| is set. |
| |
| Last, if "http-check expect" is combined with "http-check disable-on-404", |
| then this last one has precedence when the server responds with 404. |
| |
| Examples : |
| # only accept statuses 200, 201 and 300 to 310 as valid
| http-check expect status 200,201,300-310 |
| |
| # be sure a sessid cookie is set
| http-check expect hdr name "set-cookie" value -m beg "sessid="
| |
| # consider SQL errors as errors |
| http-check expect ! string SQL\ Error |
| |
| # consider status 5xx only as errors |
| http-check expect ! rstatus ^5 |
| |
| # check that we have a correct hexadecimal tag before /html |
| http-check expect rstring <!--tag:[0-9a-f]*--></html> |
| |
| See also : "option httpchk", "http-check connect", "http-check disable-on-404" |
| and "http-check send". |
| |
| |
| http-check send [meth <method>] [{ uri <uri> | uri-lf <fmt> }>] [ver <version>] |
| [hdr <name> <fmt>]* [{ body <string> | body-lf <fmt> }] |
| [comment <msg>] |
| Add a possible list of headers and/or a body to the request sent during HTTP |
| health checks. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| meth <method> is the optional HTTP method used with the requests. When not |
| set, the "OPTIONS" method is used, as it generally requires |
| low server processing and is easy to filter out from the |
| logs. Any method may be used, though it is not recommended |
| to invent non-standard ones. |
| |
| uri <uri> is optional and sets the URI referenced in the HTTP requests
| to the string <uri>. It defaults to "/" which is accessible |
| by default on almost any server, but may be changed to any |
| other URI. Query strings are permitted. |
| |
| uri-lf <fmt> is optional and sets the URI referenced in the HTTP requests
| using the log-format string <fmt>. It defaults to "/" which |
| is accessible by default on almost any server, but may be |
| changed to any other URI. Query strings are permitted. |
| |
| ver <version> is the optional HTTP version string. It defaults to |
| "HTTP/1.0" but some servers might behave incorrectly in HTTP |
| 1.0, so turning it to HTTP/1.1 may sometimes help. Note that |
| the Host field is mandatory in HTTP/1.1, use "hdr" argument |
| to add it. |
| |
| hdr <name> <fmt> adds the HTTP header field whose name is specified in |
| <name> and whose value is defined by <fmt>, which follows |
| the log-format rules.
| |
| body <string> adds the body defined by <string> to the request sent during
| HTTP health checks. If defined, the "Content-Length" header |
| is thus automatically added to the request. |
| |
| body-lf <fmt> adds the body defined by the log-format string <fmt> to the
| request sent during HTTP health checks. If defined, the |
| "Content-Length" header is thus automatically added to the |
| request. |
| |
| In addition to the request line defined by the "option httpchk" directive, |
| this one is the valid way to add some headers and optionally a body to the |
| request sent during HTTP health checks. If a body is defined, the associated
| "Content-Length" header is automatically added. Thus, neither this header nor
| the "Transfer-Encoding" header should be present in the request provided by
| "http-check send"; if they are, they will be ignored. The old trick of adding
| headers after the version string on the "option httpchk" line is now
| deprecated.
| |
| Also "http-check send" doesn't support HTTP keep-alive. Keep in mind that it |
| will automatically append a "Connection: close" header, unless a Connection |
| header has already been configured via a hdr entry.
| |
| Note that the Host header and the request authority, when both defined, are |
| automatically synchronized. It means that when the HTTP request is sent, if a
| Host header is inserted in the request, the request authority is updated
| accordingly. Thus, don't be surprised if the Host header value overwrites the
| configured request authority. |
| |
| Note also that, for now, no Host header is automatically added to HTTP/1.1 or
| above requests. You should add it explicitly.
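|
|   For example, an HTTP/1.1 check with an explicit Host header (the URI and
|   host name are arbitrary) :
|
|       option httpchk
|       http-check send meth GET uri /health ver HTTP/1.1 hdr Host example.com
|       http-check expect status 200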
| |
| See also : "option httpchk", "http-check send-state" and "http-check expect". |
| |
| |
| http-check send-state |
| Enable emission of a state header with HTTP health checks |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When this option is set, HAProxy will systematically send a special header |
| "X-Haproxy-Server-State" with a list of parameters indicating to each server |
| how they are seen by HAProxy. This can be used for instance when a server is |
| manipulated without access to HAProxy and the operator needs to know whether |
| HAProxy still sees it up or not, or if the server is the last one in a farm. |
| |
| The header is composed of fields delimited by semi-colons, the first of which |
| is a word ("UP", "DOWN", "NOLB"), possibly followed by a number of valid |
| checks on the total number before transition, just as appears in the stats |
| interface. The next fields are in the form "<variable>=<value>", indicating in
| no specific order some values available in the stats interface : |
| - a variable "address", containing the address of the backend server. |
| This corresponds to the <address> field in the server declaration. For |
| unix domain sockets, it will read "unix". |
| |
| - a variable "port", containing the port of the backend server. This |
| corresponds to the <port> field in the server declaration. For unix |
| domain sockets, it will read "unix". |
| |
| - a variable "name", containing the name of the backend followed by a slash |
| ("/") then the name of the server. This can be used when a server is |
| checked in multiple backends. |
| |
| - a variable "node" containing the name of the HAProxy node, as set in the |
| global "node" variable, otherwise the system's hostname if unspecified. |
| |
| - a variable "weight" indicating the weight of the server, a slash ("/") |
| and the total weight of the farm (just counting usable servers). This |
| helps to know if other servers are available to handle the load when this |
| one fails. |
| |
| - a variable "scur" indicating the current number of concurrent connections |
| on the server, followed by a slash ("/") then the total number of |
| connections on all servers of the same backend. |
| |
| - a variable "qcur" indicating the current number of requests in the |
| server's queue. |
| |
| Example of a header received by the application server : |
| >>> X-Haproxy-Server-State: UP 2/3; name=bck/srv2; node=lb1; weight=1/2; \ |
| scur=13/22; qcur=0 |
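|
|   For illustration, enabling the header on a backend's checks (the server line
|   is arbitrary) :
|
|       backend be_app
|           option httpchk
|           http-check send-state
|           server app1 192.168.0.11:80 check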
| |
| See also : "option httpchk", "http-check disable-on-404" and |
| "http-check send". |
| |
| |
| http-check set-var(<var-name>[,<cond>...]) <expr> |
| http-check set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| This operation sets the content of a variable. The variable is declared inline. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <var-name> The name of the variable starts with an indication about its |
| scope. The scopes allowed for http-check are: |
| "proc" : the variable is shared with the whole process. |
| "sess" : the variable is shared with the tcp-check session. |
| "check": the variable is declared for the lifetime of the tcp-check. |
| This prefix is followed by a name. The separator is a '.'. |
| The name may only contain characters 'a-z', 'A-Z', '0-9', '.', |
| and '-'. |
| |
| <cond> A set of conditions that must all be true for the variable to |
| actually be set (such as "ifnotempty", "ifgt" ...). See the |
| set-var converter's description for a full list of possible |
| conditions. |
| |
| <expr> Is a sample-fetch expression potentially followed by converters. |
| |
| <fmt> This is the value expressed using log-format rules (see Custom |
| Log Format in section 8.2.6). |
| |
| Examples : |
| http-check set-var(check.port) int(1234) |
| http-check set-var-fmt(check.port) "name=%H" |
| |
| |
| http-check unset-var(<var-name>) |
| Free a reference to a variable within its scope. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <var-name> The name of the variable starts with an indication about its |
| scope. The scopes allowed for http-check are: |
| "proc" : the variable is shared with the whole process. |
| "sess" : the variable is shared with the tcp-check session. |
| "check": the variable is declared for the lifetime of the tcp-check. |
| This prefix is followed by a name. The separator is a '.'. |
| The name may only contain characters 'a-z', 'A-Z', '0-9', '.', |
| and '-'. |
| |
| Examples : |
| http-check unset-var(check.port) |
| |
| |
| http-error status <code> [content-type <type>] |
| [ { default-errorfiles | errorfile <file> | errorfiles <name> | |
| file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] |
| [ hdr <name> <fmt> ]* |
| Defines a custom error message to use instead of errors generated by HAProxy. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| status <code> is the HTTP status code. It must be specified. |
| Currently, HAProxy is capable of generating codes |
| 200, 400, 401, 403, 404, 405, 407, 408, 410, 413, 425, |
| 429, 500, 501, 502, 503, and 504. |
| |
| content-type <type> is the response content type, for instance |
| "text/plain". This parameter is ignored and should be |
| omitted when an errorfile is configured or when the |
| payload is empty. Otherwise, it must be defined. |
| |
| default-errorfiles Reset the previously defined error message for current |
| proxy for the status <code>. If used on a backend, the |
| frontend error message is used, if defined. If used on |
| a frontend, the default error message is used. |
| |
| errorfile <file> designates a file containing the full HTTP response. |
| It is recommended to follow the common practice of |
| appending ".http" to the filename so that people do |
| not confuse the response with HTML error pages, and to |
| use absolute paths, since files are read before any |
| chroot is performed. |
| |
| errorfiles <name> designates the http-errors section to use to import |
| the error message with the status code <code>. If no |
| such message is found, the proxy's error messages are |
| considered. |
| |
| file <file> specifies the file to use as response payload. If the |
| file is not empty, its content-type must be set as |
| argument to "content-type", otherwise, any |
| "content-type" argument is ignored. <file> is |
| considered as a raw string. |
| |
| string <str> specifies the raw string to use as response payload. |
| The content-type must always be set as argument to |
| "content-type". |
| |
| lf-file <file> specifies the file to use as response payload. If the |
| file is not empty, its content-type must be set as |
| argument to "content-type", otherwise, any |
| "content-type" argument is ignored. <file> is |
| evaluated as a log-format string. |
| |
| lf-string <str> specifies the log-format string to use as response |
| payload. The content-type must always be set as |
| argument to "content-type". |
| |
| hdr <name> <fmt> adds to the response the HTTP header field whose name |
| is specified in <name> and whose value is defined by |
| <fmt>, which follows the log-format rules. |
| This parameter is ignored if an errorfile is used. |
| |
| This directive may be used instead of "errorfile", to define a custom error |
| message. Like the "errorfile" directive, it is used for errors detected and |
| returned by HAProxy. If an errorfile is defined, it is parsed when HAProxy |
| starts and must be valid according to the HTTP standards. The generated |
| response must not exceed the configured buffer size (BUFFSIZE), otherwise an |
| internal error will be returned. Finally, if you intend to use some |
| http-after-response rules to rewrite these errors, the reserved buffer space |
| should be available (see "tune.maxrewrite"). |
| |
| The files are read at the same time as the configuration and kept in memory. |
| For this reason, the errors continue to be returned even when the process is |
| chrooted, and no file change is considered while the process is running. |
| |
| Note: 400/408/500 errors emitted in the early stages of request parsing are |
| handled by the multiplexer at a lower level. No custom formatting is |
| supported at this level. Thus only static error messages, defined with |
| "errorfile" directive, are supported. However, this limitation only |
| exists during the request headers parsing or between two transactions. |
| |
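| For illustration, a custom 503 message could be defined as follows (the |
| message text and header values are arbitrary placeholders) : |
| |
| http-error status 503 content-type text/plain \ |
| string "Service temporarily unavailable, please retry later" \ |
| hdr Retry-After 60 hdr Cache-Control no-cache |
| |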
| See also : "errorfile", "errorfiles", "errorloc", "errorloc302", |
| "errorloc303" and section 3.8 about http-errors. |
| |
| |
| http-request <action> [options...] [ { if | unless } <condition> ] |
| Access control for Layer 7 requests |
| |
| May be used in sections: defaults | frontend | listen | backend |
| yes(!) | yes | yes | yes |
| |
| The http-request statement defines a set of rules which apply to layer 7 |
| processing. The rules are evaluated in their declaration order when they are |
| met in a frontend, listen or backend section. Any rule may optionally be |
| followed by an ACL-based condition, in which case it will only be evaluated |
| if the condition is true. |
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - add-acl(<file-name>) <key fmt> |
| - add-header <name> <fmt> |
| - allow |
| - auth [realm <realm>] |
| - cache-use <name> |
| - capture <sample> [ len <length> | id <id> ] |
| - del-acl(<file-name>) <key fmt> |
| - del-header <name> [ -m <meth> ] |
| - del-map(<file-name>) <key fmt> |
| - deny [ { status | deny_status } <code>] ... |
| - disable-l7-retry |
| - do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr> |
| - early-hint <name> <fmt> |
| - normalize-uri <normalizer> |
| - redirect <rule> |
| - reject |
| - replace-header <name> <match-regex> <replace-fmt> |
| - replace-path <match-regex> <replace-fmt> |
| - replace-pathq <match-regex> <replace-fmt> |
| - replace-uri <match-regex> <replace-fmt> |
| - replace-value <name> <match-regex> <replace-fmt> |
| - return [status <code>] [content-type <type>] ... |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - set-bandwidth-limit <name> [limit {<expr> | <size>}] [period {<expr> | <time>}] |
| - set-dst <expr> |
| - set-dst-port <expr> |
| - set-header <name> <fmt> |
| - set-log-level <level> |
| - set-map(<file-name>) <key fmt> <value fmt> |
| - set-mark <mark> |
| - set-method <fmt> |
| - set-nice <nice> |
| - set-path <fmt> |
| - set-pathq <fmt> |
| - set-priority-class <expr> |
| - set-priority-offset <expr> |
| - set-query <fmt> |
| - set-src <expr> |
| - set-src-port <expr> |
| - set-timeout { server | tunnel } { <timeout> | <expr> } |
| - set-tos <tos> |
| - set-uri <fmt> |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - send-spoe-group <engine-name> <group-name> |
| - silent-drop [ rst-ttl <ttl> ] |
| - strict-mode { on | off } |
| - tarpit [ { status | deny_status } <code>] ... |
| - track-sc0 <key> [table <table>] |
| - track-sc1 <key> [table <table>] |
| - track-sc2 <key> [table <table>] |
| - unset-var(<var-name>) |
| - use-service <service-name> |
| - wait-for-body time <time> [ at-least <bytes> ] |
| - wait-for-handshake |
| |
| The supported actions are described below. |
| |
| There is no limit to the number of http-request statements per instance. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Example: |
| acl nagios src 192.168.129.3 |
| acl local_net src 192.168.0.0/16 |
| acl auth_ok http_auth(L1) |
| |
| http-request allow if nagios |
| http-request allow if local_net auth_ok |
| http-request auth realm Gimme if local_net auth_ok |
| http-request deny |
| |
| Example: |
| acl key req.hdr(X-Add-Acl-Key) -m found |
| acl add path /addacl |
| acl del path /delacl |
| |
| acl myhost hdr(Host) -f myhost.lst |
| |
| http-request add-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key add |
| http-request del-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key del |
| |
| Example: |
| acl value req.hdr(X-Value) -m found |
| acl setmap path /setmap |
| acl delmap path /delmap |
| |
| use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found } |
| |
| http-request set-map(map.lst) %[src] %[req.hdr(X-Value)] if setmap value |
| http-request del-map(map.lst) %[src] if delmap |
| |
| See also : "stats http-request", section 3.4 about userlists and section 7 |
| about ACL usage. |
| |
| http-request add-acl(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to add a new entry into an ACL. The ACL must be loaded from a |
| file (even a dummy empty file). The file name of the ACL to be updated is |
| passed between parentheses. It takes one argument: <key fmt>, which follows |
| log-format rules, to collect content of the new entry. It performs a lookup |
| in the ACL before insertion, to avoid duplicated (or more) values. This |
| lookup is done by a linear search and can be expensive with large lists! |
| It is the equivalent of the "add acl" command from the stats socket, but can |
| be triggered by an HTTP request. |
| |
| http-request add-header <name> <fmt> [ { if | unless } <condition> ] |
| |
| This appends an HTTP header field whose name is specified in <name> and |
| whose value is defined by <fmt> which follows the log-format rules (see |
| Custom Log Format in section 8.2.6). This is particularly useful to pass |
| connection-specific information to the server (e.g. the client's SSL |
| certificate), or to combine several headers into one. This rule is not |
| final, so it is possible to add other similar rules. Note that header |
| addition is performed immediately, so one rule might reuse the resulting |
| header from a previous rule. |
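| |
| For illustration (header names and values are arbitrary), one might pass |
| the client address and the connection scheme to the servers : |
| |
| http-request add-header X-Forwarded-Proto https if { ssl_fc } |
| http-request add-header X-Client-IP %[src] |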
| |
| http-request allow [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and lets the request pass the check. |
| No further "http-request" rules are evaluated for the current section. |
| |
| http-request auth [realm <realm>] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately responds with an |
| HTTP 401 or 407 error code to invite the user to present a valid user name |
| and password. No further "http-request" rules are evaluated. An optional |
| "realm" parameter is supported, it sets the authentication realm that is |
| returned with the response (typically the application's name). |
| |
| The corresponding proxy's error message is used. It may be customized using |
| an "errorfile" or an "http-error" directive. For 401 responses, all |
| occurrences of the WWW-Authenticate header are removed and replaced by a new |
| one with a basic authentication challenge for realm "<realm>". For 407 |
| responses, the same is done on the Proxy-Authenticate header. If the error |
| message must not be altered, consider using the "http-request return" rule |
| instead. |
| |
| Example: |
| acl auth_ok http_auth_group(L1) G1 |
| http-request auth unless auth_ok |
| |
| http-request cache-use <name> [ { if | unless } <condition> ] |
| |
| See section 6.2 about cache setup. |
| |
| http-request capture <sample> [ len <length> | id <id> ] |
| [ { if | unless } <condition> ] |
| |
| This captures sample expression <sample> from the request buffer, and |
| converts it to a string of at most <len> characters. The resulting string is |
| stored into the next request "capture" slot, so it will possibly appear next |
| to some captured HTTP headers. It will then automatically appear in the logs, |
| and it will be possible to extract it using sample fetch rules to feed it |
| into headers or anything. The length should be limited given that this size |
| will be allocated for each capture during the whole session life. |
| Please check section 7.3 (Fetching samples) and "capture request header" for |
| more information. |
| |
| If the keyword "id" is used instead of "len", the action tries to store the |
| captured string in a previously declared capture slot. This is useful to run |
| captures in backends. The slot id can be declared by a previous directive |
| "http-request capture" or with the "declare capture" keyword. |
| |
| When using this action in a backend, double check that the relevant |
| frontend(s) have the required capture slots, otherwise this rule will be |
| ignored at run time. This can't be detected at configuration parsing time |
| due to HAProxy's ability to dynamically resolve backend names at runtime. |
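| |
| For illustration, a frontend might capture a request header into a |
| pre-declared slot and a cookie into a new slot (header and cookie names |
| are placeholders) : |
| |
| frontend fe_web |
| declare capture request len 64 |
| # store a request header into the pre-declared slot 0 |
| http-request capture req.hdr(X-Request-Id) id 0 |
| # or allocate a new slot on the fly with "len" |
| http-request capture req.cook(SESSID) len 32 |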
| |
| http-request del-acl(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to delete an entry from an ACL. The ACL must be loaded from a |
| file (even a dummy empty file). The file name of the ACL to be updated is |
| passed between parentheses. It takes one argument: <key fmt>, which follows |
| log-format rules, to collect content of the entry to delete. |
| It is the equivalent of the "del acl" command from the stats socket, but can |
| be triggered by an HTTP request. |
| |
| http-request del-header <name> [ -m <meth> ] [ { if | unless } <condition> ] |
| |
| This removes all HTTP header fields whose name is specified in <name>. <meth> |
| is the matching method, applied on the header name. Supported matching methods |
| are "str" (exact match), "beg" (prefix match), "end" (suffix match), "sub" |
| (substring match) and "reg" (regex match). If not specified, exact matching |
| method is used. |
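| |
| For illustration (the header names are placeholders) : |
| |
| # drop any client-supplied forwarding header |
| http-request del-header X-Forwarded-For |
| # drop all headers whose name starts with "x-private-" |
| http-request del-header x-private- -m beg |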
| |
| http-request del-map(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to delete an entry from a MAP. The MAP must be loaded from a |
| file (even a dummy empty file). The file name of the MAP to be updated is |
| passed between parentheses. It takes one argument: <key fmt>, which follows |
| log-format rules, to collect content of the entry to delete. |
| It is the equivalent of the "del map" command from the stats socket, but |
| can be triggered by an HTTP request. |
| |
| http-request deny [deny_status <status>] [ { if | unless } <condition> ] |
| http-request deny [ { status | deny_status } <code>] [content-type <type>] |
| [ { default-errorfiles | errorfile <file> | errorfiles <name> | |
| file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] |
| [ hdr <name> <fmt> ]* |
| [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately rejects the request. |
| By default an HTTP 403 error is returned. But the response may be |
| customized using the same syntax as "http-request return" rules. Thus, see |
| "http-request return" for details. For compatibility purposes, when no |
| argument is defined, or only "deny_status", the argument |
| "default-errorfiles" is implied. It means "http-request deny |
| [deny_status <status>]" is an alias of |
| "http-request deny [status <status>] default-errorfiles". |
| No further "http-request" rules are evaluated. |
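| |
| For illustration (the blocklist path and the rate-limited path are |
| placeholders) : |
| |
| http-request deny if { src -f /etc/haproxy/blocklist.lst } |
| http-request deny status 429 content-type text/plain \ |
| string "Too many requests" hdr Retry-After 60 \ |
| if { path_beg /api/burst } |
| |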
| See also "http-request return". |
| |
| http-request disable-l7-retry [ { if | unless } <condition> ] |
| |
| This disables any attempt to retry the request if it fails for any reason |
| other than a connection failure. This can be useful for example to make |
| sure POST requests aren't retried on failure. |
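| |
| For illustration, assuming "retry-on" is configured on the backend, one |
| could restrict retries to safe methods only : |
| |
| http-request disable-l7-retry if !METH_GET |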
| |
| http-request do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr> |
| [ { if | unless } <condition> ] |
| |
| This action performs a DNS resolution of the output of <expr> and stores |
| the result in the variable <var>. It uses the DNS resolvers section |
| pointed by <resolvers>. |
| It is possible to choose a resolution preference using the optional |
| arguments 'ipv4' or 'ipv6'. |
| When performing the DNS resolution, the client side connection is paused |
| until the resolution completes. |
| If an IP address can be found, it is stored into <var>. If any kind of |
| error occurs, then <var> is not set. |
| One can use this action to discover a server IP address at run time, based |
| on information found in the request (e.g. a Host header). |
| If this action is used to find the server's IP address (using the |
| "set-dst" action), then the server IP address in the backend must be set |
| to 0.0.0.0. The do-resolve action takes a host-only parameter; any port |
| must be removed from the string. |
| |
| Example: |
| resolvers mydns |
| nameserver local 127.0.0.53:53 |
| nameserver google 8.8.8.8:53 |
| timeout retry 1s |
| hold valid 10s |
| hold nx 3s |
| hold other 3s |
| hold obsolete 0s |
| accepted_payload_size 8192 |
| |
| frontend fe |
| bind 10.42.0.1:80 |
| http-request do-resolve(txn.myip,mydns,ipv4) hdr(Host),host_only |
| http-request capture var(txn.myip) len 40 |
| |
| # return 503 when the variable is not set, |
| # which means a DNS resolution error |
| use_backend b_503 unless { var(txn.myip) -m found } |
| |
| default_backend be |
| |
| backend b_503 |
| # dummy backend used to return 503. |
| # one can use the errorfile directive to send a nice |
| # 503 error page to end users |
| |
| backend be |
| # rule to prevent HAProxy from reconnecting to services |
| # on the local network (forged DNS name used to scan the network) |
| http-request deny if { var(txn.myip) -m ip 127.0.0.0/8 10.0.0.0/8 } |
| http-request set-dst var(txn.myip) |
| server clear 0.0.0.0:0 |
| |
| NOTE: Don't forget to set the "protection" rules to ensure HAProxy won't |
| be used to scan the network or, worse, won't loop over itself... |
| |
| http-request early-hint <name> <fmt> [ { if | unless } <condition> ] |
| |
| This is used to build an HTTP 103 Early Hints response prior to any other one. |
| This appends an HTTP header field to this response whose name is specified in |
| <name> and whose value is defined by <fmt> which follows the log-format rules |
| (see Custom Log Format in section 8.2.6). This is particularly useful to pass |
| to the client some Link headers to preload resources required to render the |
| HTML documents. |
| |
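| For illustration (the resource paths are placeholders) : |
| |
| http-request early-hint Link "</style.css>; rel=preload; as=style" |
| http-request early-hint Link "</script.js>; rel=preload; as=script" |
| |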
| See RFC 8297 for more information. |
| |
| http-request normalize-uri <normalizer> [ { if | unless } <condition> ] |
| http-request normalize-uri fragment-encode [ { if | unless } <condition> ] |
| http-request normalize-uri fragment-strip [ { if | unless } <condition> ] |
| http-request normalize-uri path-merge-slashes [ { if | unless } <condition> ] |
| http-request normalize-uri path-strip-dot [ { if | unless } <condition> ] |
| http-request normalize-uri path-strip-dotdot [ full ] [ { if | unless } <condition> ] |
| http-request normalize-uri percent-decode-unreserved [ strict ] [ { if | unless } <condition> ] |
| http-request normalize-uri percent-to-uppercase [ strict ] [ { if | unless } <condition> ] |
| http-request normalize-uri query-sort-by-name [ { if | unless } <condition> ] |
| |
| Performs normalization of the request's URI. |
| |
| URI normalization in HAProxy 2.4 is currently available as an experimental |
| technical preview. As such, it requires the global directive |
| 'expose-experimental-directives' to be set first in order to invoke it. Be |
| prepared for the behavior of normalizers to change in order to fix possible |
| issues, possibly breaking proper request processing in your infrastructure. |
| |
| Each normalizer handles a single type of normalization to allow for a |
| fine-grained selection of the level of normalization that is appropriate for |
| the supported backend. |
| |
| As an example the "path-strip-dotdot" normalizer might be useful for a static |
| fileserver that directly maps the requested URI to the path within the local |
| filesystem. However it might break routing of an API that expects a specific |
| number of segments in the path. |
| |
| It is important to note that some normalizers might result in unsafe |
| transformations for broken URIs. It might also be possible that a combination |
| of normalizers that are safe by themselves results in unsafe transformations |
| when improperly combined. |
| |
| As an example the "percent-decode-unreserved" normalizer might result in |
| unexpected results when a broken URI includes bare percent characters. One |
| such broken URI is "/%%36%36" which would be decoded to "/%66" which in |
| turn is equivalent to "/f". By specifying the "strict" option requests to |
| such a broken URI would safely be rejected. |
| |
| The following normalizers are available: |
| |
| - fragment-encode: Encodes "#" as "%23". |
| |
| The "fragment-strip" normalizer should be preferred, unless it is known |
| that broken clients do not correctly encode '#' within the path component. |
| |
| Example: |
| - /#foo -> /%23foo |
| |
| - fragment-strip: Removes the URI's "fragment" component. |
| |
| According to RFC 3986#3.5 the "fragment" component of a URI should not |
| be sent, but handled by the User Agent after retrieving a resource. |
| |
| This normalizer should be applied first to ensure that the fragment is |
| not interpreted as part of the request's path component. |
| |
| Example: |
| - /#foo -> / |
| |
| - path-strip-dot: Removes "/./" segments within the "path" component |
| (RFC 3986#6.2.2.3). |
| |
| Segments including percent encoded dots ("%2E") will not be detected. Use |
| the "percent-decode-unreserved" normalizer first if this is undesired. |
| |
| Example: |
| - /. -> / |
| - /./bar/ -> /bar/ |
| - /a/./a -> /a/a |
| - /.well-known/ -> /.well-known/ (no change) |
| |
| - path-strip-dotdot: Normalizes "/../" segments within the "path" component |
| (RFC 3986#6.2.2.3). |
| |
| This merges segments that attempt to access the parent directory with |
| their preceding segment. |
| |
| Empty segments do not receive special treatment. Use the |
| "path-merge-slashes" normalizer first if this is undesired. |
| |
| Segments including percent encoded dots ("%2E") will not be detected. Use |
| the "percent-decode-unreserved" normalizer first if this is undesired. |
| |
| Example: |
| - /foo/../ -> / |
| - /foo/../bar/ -> /bar/ |
| - /foo/bar/../ -> /foo/ |
| - /../bar/ -> /../bar/ |
| - /bar/../../ -> /../ |
| - /foo//../ -> /foo/ |
| - /foo/%2E%2E/ -> /foo/%2E%2E/ |
| |
| If the "full" option is specified then "../" at the beginning will be |
| removed as well: |
| |
| Example: |
| - /../bar/ -> /bar/ |
| - /bar/../../ -> / |
| |
| - path-merge-slashes: Merges adjacent slashes within the "path" component |
| into a single slash. |
| |
| Example: |
| - // -> / |
| - /foo//bar -> /foo/bar |
| |
| - percent-decode-unreserved: Decodes unreserved percent encoded characters to |
| their representation as a regular character (RFC 3986#6.2.2.2). |
| |
| The set of unreserved characters includes all letters, all digits, "-", |
| ".", "_", and "~". |
| |
| Example: |
| - /%61dmin -> /admin |
| - /foo%3Fbar=baz -> /foo%3Fbar=baz (no change) |
| - /%%36%36 -> /%66 (unsafe) |
| - /%ZZ -> /%ZZ |
| |
| If the "strict" option is specified then invalid sequences will result |
| in an HTTP 400 Bad Request being returned. |
| |
| Example: |
| - /%%36%36 -> HTTP 400 |
| - /%ZZ -> HTTP 400 |
| |
| - percent-to-uppercase: Uppercases letters within percent-encoded sequences |
| (RFC 3986#6.2.2.1). |
| |
| Example: |
| - /%6f -> /%6F |
| - /%zz -> /%zz |
| |
| If the "strict" option is specified then invalid sequences will result |
| in an HTTP 400 Bad Request being returned. |
| |
| Example: |
| - /%zz -> HTTP 400 |
| |
| - query-sort-by-name: Sorts the query string parameters by parameter name. |
| Parameters are assumed to be delimited by '&'. Shorter names sort before |
| longer names and identical parameter names maintain their relative order. |
| |
| Example: |
| - /?c=3&a=1&b=2 -> /?a=1&b=2&c=3 |
| - /?aaa=3&a=1&aa=2 -> /?a=1&aa=2&aaa=3 |
| - /?a=3&b=4&a=1&b=5&a=2 -> /?a=3&a=1&a=2&b=4&b=5 |
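| |
| For illustration, several normalizers may be chained, applying the less |
| risky ones first (this assumes "expose-experimental-directives" is set in |
| the global section) : |
| |
| http-request normalize-uri fragment-strip |
| http-request normalize-uri percent-decode-unreserved strict |
| http-request normalize-uri path-merge-slashes |
| http-request normalize-uri path-strip-dotdot full |
| http-request normalize-uri query-sort-by-name |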
| |
| http-request redirect <rule> [ { if | unless } <condition> ] |
| |
| This performs an HTTP redirection based on a redirect rule. This is exactly |
| the same as the "redirect" statement except that it inserts a redirect rule |
| which can be processed in the middle of other "http-request" rules and that |
| these rules use the "log-format" strings. See the "redirect" keyword for the |
| rule's syntax. |
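| |
| For illustration (the backend name "be_app" is a placeholder) : |
| |
| http-request redirect scheme https code 301 unless { ssl_fc } |
| http-request redirect location /maintenance.html code 302 \ |
| if { nbsrv(be_app) eq 0 } |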
| |
| http-request reject [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately closes the connection |
| without sending any response. It acts similarly to the |
| "tcp-request content reject" rules. It can be useful to force an immediate |
| connection closure on HTTP/2 connections. |
| |
| http-request replace-header <name> <match-regex> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This matches the value of all occurrences of header field <name> against |
| <match-regex>. Matching is performed case-sensitively. Matching values are |
| completely replaced by <replace-fmt>. Format characters are allowed in |
| <replace-fmt> and work like <fmt> arguments in "http-request add-header". |
| Standard back-references using the backslash ('\') followed by a number are |
| supported. |
| |
| This action acts on whole header lines, regardless of the number of values |
| they may contain. Thus it is well-suited to process headers naturally |
| containing commas in their value, such as If-Modified-Since. Headers that |
| contain a comma-separated list of values, such as Accept, should be processed |
| using "http-request replace-value". |
| |
| Example: |
| http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;ip=%bi;\2 |
| |
| # applied to: |
| Cookie: foo=foobar; expires=Tue, 14-Jun-2016 01:40:45 GMT; |
| |
| # outputs: |
| Cookie: foo=foobar;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT; |
| |
| # assuming the backend IP is 192.168.1.20 |
| |
| http-request replace-header User-Agent curl foo |
| |
| # applied to: |
| User-Agent: curl/7.47.0 |
| |
| # outputs: |
| User-Agent: foo |
| |
| http-request replace-path <match-regex> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "replace-header" except that it works on the request's path |
| component instead of a header. The path component starts at the first '/' |
| after an optional scheme+authority and ends before the question mark. Thus, |
| the replacement does not modify the scheme, the authority and the |
| query-string. |
| |
| It is worth noting that regular expressions may be more expensive to evaluate |
| than certain ACLs, so rare replacements may benefit from a condition to avoid |
| performing the evaluation at all if it does not match. |
| |
| Example: |
| # prefix /foo : turn /bar?q=1 into /foo/bar?q=1 : |
| http-request replace-path (.*) /foo\1 |
| |
| # strip /foo : turn /foo/bar?q=1 into /bar?q=1 |
| http-request replace-path /foo/(.*) /\1 |
| # or more efficient if only some requests match : |
| http-request replace-path /foo/(.*) /\1 if { url_beg /foo/ } |
| |
| http-request replace-pathq <match-regex> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This does the same as "http-request replace-path" except that the path |
| contains the query-string if any is present. Thus, the path and the |
| query-string are replaced. |
| |
| Example: |
| # suffix /foo : turn /bar?q=1 into /bar/foo?q=1 : |
| http-request replace-pathq ([^?]*)(\?(.*))? \1/foo\2 |
| |
| http-request replace-uri <match-regex> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "replace-header" except that it works on the request's URI part |
| instead of a header. The URI part may contain an optional scheme, authority or |
| query string. These are considered to be part of the value that is matched |
| against. |
| |
| It is worth noting that regular expressions may be more expensive to evaluate |
| than certain ACLs, so rare replacements may benefit from a condition to avoid |
| performing the evaluation at all if it does not match. |
| |
| IMPORTANT NOTE: historically in HTTP/1.x, the vast majority of requests sent |
| by browsers use the "origin form", which differs from the "absolute form" in |
| that they do not contain a scheme nor authority in the URI portion. Mostly |
| only requests sent to proxies, those forged by hand and some emitted by |
| certain applications use the absolute form. As such, "replace-uri" usually |
| works fine most of the time in HTTP/1.x with rules starting with a "/". But |
| with HTTP/2, clients are encouraged to send absolute URIs only, which look |
| like the ones HTTP/1 clients use to talk to proxies. Such partial replace-uri |
| rules may then fail in HTTP/2 when they work in HTTP/1. Either the rules need |
| to be adapted to optionally match a scheme and authority, or replace-path |
| should be used. |
| |
| Example: |
| # rewrite all "http" absolute requests to "https": |
| http-request replace-uri ^http://(.*) https://\1 |
| |
| # prefix /foo : turn /bar?q=1 into /foo/bar?q=1 : |
| http-request replace-uri ([^/:]*://[^/]*)?(.*) \1/foo\2 |
| |
| http-request replace-value <name> <match-regex> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "replace-header" except that it matches the regex against |
| every comma-delimited value of the header field <name> instead of the |
| entire header. This is suited for all headers which are allowed to carry |
| more than one value. An example could be the Accept header. |
| |
| Example: |
| http-request replace-value X-Forwarded-For ^192\.168\.(.*)$ 172.16.\1 |
| |
| # applied to: |
| X-Forwarded-For: 192.168.10.1, 192.168.13.24, 10.0.0.37 |
| |
| # outputs: |
| X-Forwarded-For: 172.16.10.1, 172.16.13.24, 10.0.0.37 |
| |
| http-request return [status <code>] [content-type <type>] |
| [ { default-errorfiles | errorfile <file> | errorfiles <name> | |
| file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] |
| [ hdr <name> <fmt> ]* |
| [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately returns a response. The |
| default status code used for the response is 200. It can be optionally |
| specified as an argument to "status". The response content-type may also be |
| specified as an argument to "content-type". Finally the response itself may |
| be defined. It can be a full HTTP response specifying the errorfile to use, |
| or the response payload specifying the file or the string to use. These rules |
| are followed to create the response : |
| |
| * If neither the errorfile nor the payload to use is defined, a dummy |
| response is returned. Only the "status" argument is considered. It can be |
| any code in the range [200, 599]. The "content-type" argument, if any, is |
| ignored. |
| |
| * If "default-errorfiles" argument is set, the proxy's errorfiles are |
| considered. If the "status" argument is defined, it must be one of the |
| status codes handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, |
| 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if |
| any, is ignored. |
| |
| * If a specific errorfile is defined, with an "errorfile" argument, the |
| corresponding file, containing a full HTTP response, is returned. Only the |
| "status" argument is considered. It must be one of the status codes handled |
| by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, |
| 502, 503, and 504). The "content-type" argument, if any, is ignored. |
| |
| * If an http-errors section is defined, with an "errorfiles" argument, the |
| corresponding file in the specified http-errors section, containing a full |
| HTTP response, is returned. Only the "status" argument is considered. It |
| must be one of the status codes handled by HAProxy (200, 400, 403, 404, 405, |
| 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" |
| argument, if any, is ignored. |
| |
| * If a "file" or a "lf-file" argument is specified, the file's content is |
| used as the response payload. If the file is not empty, its content-type |
| must be set as argument to "content-type". Otherwise, any "content-type" |
| argument is ignored. With a "lf-file" argument, the file's content is |
| evaluated as a log-format string. With a "file" argument, it is considered |
| as a raw content. |
| |
| * If a "string" or "lf-string" argument is specified, the defined string is |
| used as the response payload. The content-type must always be set as |
| argument to "content-type". With a "lf-string" argument, the string is |
| evaluated as a log-format string. With a "string" argument, it is |
| considered as a raw string. |
| |
| When the response is not based on an errorfile, it is possible to append HTTP |
| header fields to the response using "hdr" arguments. Otherwise, all "hdr" |
| arguments are ignored. For each one, the header name is specified in <name> |
| and its value is defined by <fmt> which follows the log-format rules. |
| |
| Note that the generated response must be smaller than a buffer. Also, to |
| avoid any warning, when an errorfile or a raw file is loaded, the buffer |
| space reserved for headers rewriting should also be left free. |
| |
| No further "http-request" rules are evaluated. |
| |
| Example: |
| http-request return errorfile /etc/haproxy/errorfiles/200.http \ |
| if { path /ping } |
| |
| http-request return content-type image/x-icon file /var/www/favicon.ico \ |
| if { path /favicon.ico } |
| |
| http-request return status 403 content-type text/plain \ |
| lf-string "Access denied. IP %[src] is blacklisted." \ |
| if { src -f /etc/haproxy/blacklist.lst } |
| |
| http-request sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter at the index <idx> of the |
| array associated to the sticky counter designated by <sc-id> by the value of |
| either integer <int> or the integer evaluation of expression <expr>. Integers |
| and expressions are limited to unsigned 32-bit values. If an error occurs, |
| this action silently fails and the actions evaluation continues. <idx> is an |
| integer between 0 and 99 and <sc-id> is an integer between 0 and 2. It also |
| silently fails if there is no GPC stored at this index. The entry in the |
| table is refreshed even if the value is zero. The 'gpc_rate' is automatically |
| adjusted to reflect the average growth rate of the gpc value. |
| |
| This action applies only to the 'gpc' and 'gpc_rate' array data_types (and |
| not to the legacy 'gpc0', 'gpc1', 'gpc0_rate' nor 'gpc1_rate' data_types). |
| There is no equivalent function for legacy data types, but if the value is |
| always 1, please see 'sc-inc-gpc()', 'sc-inc-gpc0()' and 'sc-inc-gpc1()'. |
| There is no way to decrement the value either, but it is possible to store |
| exact values in a General Purpose Tag using 'sc-set-gpt()' instead. |
| |
| The main use of this action is to count scores or total volumes (e.g. |
| estimated danger per source IP reported by the server or a WAF, total |
| uploaded bytes, etc). |
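| |
| For illustration, a sketch of per-source scoring (table and path names are |
| placeholders; the stick-table must store the 'gpc' data type) : |
| |
| backend st_scores |
| stick-table type ip size 1m expire 1h store gpc(1),gpc_rate(1,10s) |
| |
| frontend fe_web |
| http-request track-sc0 src table st_scores |
| # expensive API calls cost more than regular ones |
| http-request sc-add-gpc(0,0) 10 if { path_beg /api/search } |
| http-request sc-add-gpc(0,0) 1 |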
| |
| http-request sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter at the index <idx> |
| of the array associated to the sticky counter designated by <sc-id>. |
| If an error occurs, this action silently fails and the actions evaluation |
| continues. <idx> is an integer between 0 and 99 and <sc-id> is an integer |
| between 0 and 2. It also silently fails if there is no GPC stored |
| at this index. |
| This action applies only to the 'gpc' and 'gpc_rate' array data_types (and |
| not to the legacy 'gpc0', 'gpc1', 'gpc0_rate' nor 'gpc1_rate' data_types). |
| |
| http-request sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| http-request sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| This action increments the GPC0 or GPC1 counter according to the sticky |
| counter designated by <sc-id>. If an error occurs, this action silently fails |
| and the actions evaluation continues. |
| |
| http-request sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action sets the 32-bit unsigned GPT at the index <idx> of the array |
| associated to the sticky counter designated by <sc-id> to the value of |
| <int>/<expr>. The expected result is a boolean. |
| If an error occurs, this action silently fails and the actions evaluation |
| continues. <idx> is an integer between 0 and 99 and <sc-id> is an integer |
| between 0 and 2. It also silently fails if there is no GPT stored |
| at this index. |
| This action applies only to the 'gpt' array data_type (and not to the |
| legacy 'gpt0' data-type). |
| |
| http-request sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action sets the 32-bit unsigned GPT0 tag according to the sticky counter |
| designated by <sc-id> and the value of <int>/<expr>. The expected result is a |
| boolean. If an error occurs, this action silently fails and the actions |
| evaluation continues. |
| |
| http-request send-spoe-group <engine-name> <group-name> |
| [ { if | unless } <condition> ] |
| |
| This action is used to trigger sending of a group of SPOE messages. To do so, |
| the SPOE engine used to send messages must be defined, as well as the SPOE |
| group to send. Of course, the SPOE engine must refer to an existing SPOE |
| filter. If no engine name is provided on the SPOE filter line, the SPOE |
| agent name must be used. |
| |
| Arguments: |
| <engine-name> The SPOE engine name. |
| |
| <group-name> The SPOE group name as specified in the engine |
| configuration. |
| |
| http-request set-bandwidth-limit <name> [limit { <expr> | <size> }] |
| [period { <expr> | <time> }] [ { if | unless } <condition> ] |
| |
| This action is used to enable the bandwidth limitation filter <name>, either |
| on the upload or download direction depending on the filter type. Custom |
| limit and period may be defined, if and only if <name> references a |
| per-stream bandwidth limitation filter. When a set-bandwidth-limit rule is |
| executed, it first resets all settings of the filter to their defaults prior |
| to enabling it. As a consequence, if several "set-bandwidth-limit" actions |
| are executed for the same filter, only the last one is considered. Several |
| bandwidth limitation filters can be enabled on the same stream. |
| |
| Note that this action cannot be used in a defaults section because bandwidth |
| limitation filters cannot be defined in defaults sections. In addition, only |
| the HTTP payload transfer is limited. The HTTP headers are not considered. |
| |
| Arguments: |
| <expr> Is a standard HAProxy expression formed by a sample-fetch followed |
| by some converters. The result is converted to an integer. It is |
| interpreted as a size in bytes for the "limit" parameter and as a |
| duration in milliseconds for the "period" parameter. |
| |
| <size> Is a number. It follows the HAProxy size format and is expressed in |
| bytes. |
| |
| <time> Is a number. It follows the HAProxy time format and is expressed in |
| milliseconds. |
| |
| Example: |
| http-request set-bandwidth-limit global-limit |
| http-request set-bandwidth-limit my-limit limit 1m period 10s |
| |
| See section 9.7 about bandwidth limitation filter setup. |
| |
| http-request set-dst <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the destination IP address to the value of the |
| specified expression. Useful when a proxy in front of HAProxy rewrites the |
| destination IP, but provides the correct IP in an HTTP header; or you want |
| to mask the IP for |
| privacy. If you want to connect to the new address/port, use '0.0.0.0:0' as a |
| server address in the backend. |
| |
| Arguments: |
| <expr> Is a standard HAProxy expression formed by a sample-fetch followed |
| by some converters. |
| |
| Example: |
| http-request set-dst hdr(x-dst) |
| http-request set-dst dst,ipmask(24) |
| |
| When possible, set-dst preserves the original destination port as long as the |
| address family allows it, otherwise the destination port is set to 0. |
| |
| http-request set-dst-port <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the destination port to the value of the specified |
| expression. If you want to connect to the new address/port, use '0.0.0.0:0' |
| as a server address in the backend. |
| |
| Arguments: |
| <expr> Is a standard HAProxy expression formed by a sample-fetch |
| followed by some converters. |
| |
| Example: |
| http-request set-dst-port hdr(x-port) |
| http-request set-dst-port int(4000) |
| |
| When possible, set-dst-port preserves the original destination address as |
| long as the address family supports a port, otherwise it forces the |
| destination address to IPv4 "0.0.0.0" before rewriting the port. |
| |
| http-request set-header <name> <fmt> [ { if | unless } <condition> ] |
| |
| This does the same as "http-request add-header" except that the header name |
| is first removed if it existed. This is useful when passing security |
| information to the server, where the header must not be manipulated by |
| external users. Note that the new value is computed before the removal so it |
| is possible to concatenate a value to an existing header. |
| |
| Example: |
| http-request set-header X-Haproxy-Current-Date %T |
| http-request set-header X-SSL %[ssl_fc] |
| http-request set-header X-SSL-Session_ID %[ssl_fc_session_id,hex] |
| http-request set-header X-SSL-Client-Verify %[ssl_c_verify] |
| http-request set-header X-SSL-Client-DN %{+Q}[ssl_c_s_dn] |
| http-request set-header X-SSL-Client-CN %{+Q}[ssl_c_s_dn(cn)] |
| http-request set-header X-SSL-Issuer %{+Q}[ssl_c_i_dn] |
| http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore] |
| http-request set-header X-SSL-Client-NotAfter %{+Q}[ssl_c_notafter] |
| |
| http-request set-log-level <level> [ { if | unless } <condition> ] |
| |
| This is used to change the log level of the current request when a certain |
| condition is met. Valid levels are the 8 syslog levels (see the "log" |
| keyword) plus the special level "silent" which disables logging for this |
| request. This rule is not final so the last matching rule wins. This rule |
| can be useful to disable health checks coming from another equipment. |
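| |
| For illustration (the source network and path are placeholders) : |
| |
| # do not log monitoring probes coming from the internal network |
| http-request set-log-level silent if { src 10.0.0.0/8 } { path /health } |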
| |
| http-request set-map(<file-name>) <key fmt> <value fmt> |
| [ { if | unless } <condition> ] |
| |
| This is used to add a new entry into a MAP. The MAP must be loaded from a |
| file (even a dummy empty file). The file name of the MAP to be updated is |
| passed between parentheses. It takes 2 arguments: <key fmt>, which follows |
| log-format rules, used to collect MAP key, and <value fmt>, which follows |
| log-format rules, used to collect content for the new entry. |
| It performs a lookup in the MAP before insertion, to avoid duplicated (or |
| more) values. This lookup is done by a linear search and can be expensive |
| with large lists! It is the equivalent of the "set map" command from the |
| stats socket, but can be triggered by an HTTP request. |
| |
| http-request set-mark <mark> [ { if | unless } <condition> ] |
| |
| This is used to set the Netfilter/IPFW MARK on all packets sent to the client |
| to the value passed in <mark> on platforms which support it. This value is an |
| unsigned 32-bit value which can be matched by netfilter/ipfw and by the |
| routing table, or when monitoring the packets through DTrace. It can be |
| expressed either in decimal or hexadecimal format (prefixed by "0x"). |
| This can be useful to force certain packets to take a different route (for |
| example a cheaper network path for bulk downloads). This works on Linux |
| kernels 2.6.32 and above and requires admin privileges, as well as on |
| FreeBSD and OpenBSD. |
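| |
| For illustration (the mark value and path are arbitrary) : |
| |
| # mark bulk download traffic so a router can move it to a cheaper path |
| http-request set-mark 0x2 if { path_beg /downloads/ } |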
| |
| http-request set-method <fmt> [ { if | unless } <condition> ] |
| |
| This rewrites the request method with the result of the evaluation of format |
| string <fmt>. There should be very few valid reasons for having to do so as |
| this is more likely to break something than to fix it. |
| |
| http-request set-nice <nice> [ { if | unless } <condition> ] |
| |
| This sets the "nice" factor of the current request being processed. It only |
| has effect against the other requests being processed at the same time. |
| The default value is 0, unless altered by the "nice" setting on the "bind" |
| line. The accepted range is -1024..1024. The higher the value, the nicer |
| the request will be. Lower values will make the request more important than |
| other ones. This can be useful to improve the speed of some requests, or |
| lower the priority of non-important requests. Using this setting without |
| prior experimentation can cause some major slowdown. |
| |
| http-request set-path <fmt> [ { if | unless } <condition> ] |
| |
| This rewrites the request path with the result of the evaluation of format |
| string <fmt>. The query string, if any, is left intact. If a scheme and |
| authority are found before the path, they are left intact as well. If the |
| request doesn't have a path ("*"), it is replaced with the format. |
| This can be used to prepend a directory component in front of a path for |
| example. See also "http-request set-query" and "http-request set-uri". |
| |
| Example : |
| # prepend the host name before the path |
| http-request set-path /%[hdr(host)]%[path] |
| |
| http-request set-pathq <fmt> [ { if | unless } <condition> ] |
| |
| This does the same as "http-request set-path" except that the query-string is |
| also rewritten. It may be used to remove the query-string, including the |
| question mark (it is not possible using "http-request set-query"). |
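| |
| For illustration, the query-string can be removed by rewriting the path |
| and query with the path alone : |
| |
| http-request set-pathq %[path] |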
| |
| http-request set-priority-class <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the queue priority class of the current request. |
| The value must be a sample expression which converts to an integer in the |
| range -2047..2047. Results outside this range will be truncated. |
| The priority class determines the order in which queued requests are |
| processed. Lower values have higher priority. |
| |
| http-request set-priority-offset <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the queue priority timestamp offset of the current |
| request. The value must be a sample expression which converts to an integer |
| in the range -524287..524287. Results outside this range will be truncated. |
| When a request is queued, it is ordered first by the priority class, then by |
| the current timestamp adjusted by the given offset in milliseconds. Lower |
| values have higher priority. |
| Note that the resulting timestamp is only tracked with enough precision for |
| 524,287ms (8m44s287ms). If the request is queued long enough that the |
| adjusted timestamp exceeds this value, it will be misidentified as highest |
| priority. Thus it is important to set "timeout queue" to a value that, when |
| combined with the offset, does not exceed this limit. |
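| |
| For illustration (paths and the cookie name are placeholders) : |
| |
| # serve interactive pages before bulk exports when the backend is full |
| http-request set-priority-class int(-5) if { path_beg /app/ } |
| http-request set-priority-class int(5) if { path_beg /export/ } |
| # within a class, bump requests carrying a VIP cookie |
| http-request set-priority-offset int(-1000) if { req.cook(vip) -m found } |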
| |
| http-request set-query <fmt> [ { if | unless } <condition> ] |
| |
| This rewrites the request's query string which appears after the first |
| question mark ("?") with the result of the evaluation of format string <fmt>. |
| The part prior to the question mark is left intact. If the request doesn't |
| contain a question mark and the new value is not empty, then one is added at |
| the end of the URI, followed by the new value. If a question mark was |
| present, it will never be removed even if the value is empty. This can be |
| used to add or remove parameters from the query string. |
| |
| See also "http-request set-path" and "http-request set-uri". |
| |
| Example: |
| # replace "%3D" with "=" in the query string |
| http-request set-query %[query,regsub(%3D,=,g)] |
| |
| http-request set-src <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the source IP address to the value of the specified |
| expression. Useful when a proxy in front of HAProxy rewrites the source IP, |
| but provides the correct IP in an HTTP header; or you want to mask the |
| source IP for |
| privacy. All subsequent calls to "src" fetch will return this value |
| (see example). |
| |
| Arguments : |
| <expr> Is a standard HAProxy expression formed by a sample-fetch followed |
| by some converters. |
| |
| See also "option forwardfor". |
| |
| Example: |
| http-request set-src hdr(x-forwarded-for) |
| http-request set-src src,ipmask(24) |
| |
| # After the masking this will track connections |
| # based on the IP address with the last byte zeroed out. |
| http-request track-sc0 src |
| |
| When possible, set-src preserves the original source port as long as the |
| address family allows it, otherwise the source port is set to 0. |
| |
| http-request set-src-port <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the source port to the value of the specified |
| expression. |
| |
| Arguments: |
| <expr> Is a standard HAProxy expression formed by a sample-fetch followed |
| by some converters. |
| |
| Example: |
| http-request set-src-port hdr(x-port) |
| http-request set-src-port int(4000) |
| |
| When possible, set-src-port preserves the original source address as long as |
| the address family supports a port, otherwise it forces the source address to |
| IPv4 "0.0.0.0" before rewriting the port. |
| |
| http-request set-timeout { server | tunnel } { <timeout> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action overrides the specified "server" or "tunnel" timeout for the |
| current stream only. The timeout can be specified in milliseconds or with |
| any other unit if the number is suffixed by the unit as explained at the |
| top of this document. It is also possible to write an expression which must |
| return a number interpreted as a timeout in milliseconds. |
| |
| Note that the server/tunnel timeouts are only relevant on the backend side |
| and thus this rule is only available for the proxies with backend |
| capabilities. Also the timeout value must be non-null to obtain the expected |
| results. |
| |
| Example: |
| http-request set-timeout tunnel 5s |
| http-request set-timeout server req.hdr(host),map_int(host.lst) |
| |
| http-request set-tos <tos> [ { if | unless } <condition> ] |
| |
| This is used to set the TOS or DSCP field value of packets sent to the client |
| to the value passed in <tos> on platforms which support this. This value |
| represents the whole 8 bits of the IP TOS field, and can be expressed |
| either in decimal or hexadecimal format (prefixed by "0x"). Note that only |
| the 6 higher bits are used in DSCP or TOS, and the two lower bits are |
| always 0. This can |
| be used to adjust some routing behavior on border routers based on some |
| information from the request. |
| |
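| For illustration (the TOS value and path are arbitrary; 0x20 corresponds |
| to DSCP CS1, a low-priority class) : |
| |
| http-request set-tos 0x20 if { path_beg /downloads/ } |
| |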
| See RFC 2474, 2597, 3260 and 4594 for more information. |
| |
| http-request set-uri <fmt> [ { if | unless } <condition> ] |
| |
| This rewrites the request URI with the result of the evaluation of format |
| string <fmt>. The scheme, authority, path and query string are all replaced |
| at once. This can be used to rewrite hosts in front of proxies, or to perform |
| complex modifications to the URI such as moving parts between the path and |
| the query string. If an absolute URI is set, it will be sent as is to |
| HTTP/1.1 servers. If it is not the desired behavior, the host, the path |
| and/or the query string should be set separately. |
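| |
| For illustration (the host name is a placeholder) : |
| |
| # force a fixed host, keeping the original path and query string; the |
| # absolute URI will be passed as-is to HTTP/1.1 servers |
| http-request set-uri http://www.example.com%[pathq] |
| |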
| See also "http-request set-path" and "http-request set-query". |
| |
| http-request set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| http-request set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. |
| |
| Arguments: |
| <var-name> The name of the variable starts with an indication about its |
| scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction |
| (request and response) |
| "req" : the variable is shared only during request |
| processing |
| "res" : the variable is shared only during response |
| processing |
| This prefix is followed by a name. The separator is a '.'. |
| The name may only contain characters 'a-z', 'A-Z', '0-9' |
| and '_'. |
| |
| <cond> A set of conditions that must all be true for the variable to |
| actually be set (such as "ifnotempty", "ifgt" ...). See the |
| set-var converter's description for a full list of possible |
| conditions. |
| |
| <expr> Is a standard HAProxy expression formed by a sample-fetch |
| followed by some converters. |
| |
| <fmt> This is the value expressed using log-format rules (see Custom |
| Log Format in section 8.2.6). |
| |
| Example: |
| http-request set-var(req.my_var) req.fhdr(user-agent),lower |
| http-request set-var-fmt(txn.from) %[src]:%[src_port] |
| |
| http-request silent-drop [ rst-ttl <ttl> ] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and removes the client-facing |
| connection in a configurable way: When called without the rst-ttl argument, |
| we try to prevent sending any FIN or RST packet back to the client by |
| using TCP_REPAIR. If this fails (mainly because of missing privileges), |
| we fall back to sending a RST packet with a TTL of 1. |
| |
| The effect is that the client still sees an established connection while |
| there is none on HAProxy, saving resources. However, stateful equipment |
| placed between the HAProxy and the client (firewalls, proxies, |
| load balancers) will also keep the established connection in their |
| session tables. |
| |
| The optional rst-ttl changes this behaviour: TCP_REPAIR is not used, |
| and a RST packet with a configurable TTL is sent. When set to a |
| reasonable value, the RST packet travels through your own equipment, |
| deleting the connection in your middle-boxes, but does not arrive at |
| the client. Future packets from the client will then be dropped |
| already by your middle-boxes. These "local RST"s protect your resources, |
| but not the client's. Do not use it unless you fully understand how it works. |
| |
| http-request strict-mode { on | off } [ { if | unless } <condition> ] |
| |
| This enables or disables the strict rewriting mode for following rules. It |
| does not affect rules declared before it and it is only applicable on rules |
| performing a rewrite on the requests. When the strict mode is enabled, any |
| rewrite failure triggers an internal error. Otherwise, such errors are |
| silently ignored. The purpose of the strict rewriting mode is to make some |
| rewrites optional while others must be performed to continue the request |
| processing. |
| |
| By default, the strict rewriting mode is enabled. Its value is also reset |
| when a ruleset evaluation ends. So, for instance, if you change the mode on |
| the frontend, the default mode is restored when HAProxy starts the backend |
| rules evaluation. |
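| |
| For illustration (the map file is a placeholder), optional rewrites can be |
| wrapped so that their failure does not abort the request : |
| |
| http-request strict-mode off |
| http-request set-header X-Geo %[src,map_ip(/etc/haproxy/geo.map,unknown)] |
| http-request strict-mode on |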
| |
| http-request tarpit [deny_status <status>] [ { if | unless } <condition> ] |
| http-request tarpit [ { status | deny_status } <code>] [content-type <type>] |
| [ { default-errorfiles | errorfile <file> | errorfiles <name> | |
| file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] |
| [ hdr <name> <fmt> ]* |
| [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately blocks the request |
| without responding for a delay specified by "timeout tarpit" or |
| "timeout connect" if the former is not set. After that delay, if the client |
| is still connected, a response is returned so that the client does not |
| suspect it has been tarpitted. Logs will report the flags "PT". The goal of |
| the tarpit rule is to slow down robots during an attack when they're limited |
| on the number of concurrent requests. It can be very efficient against very |
| dumb robots, and will significantly reduce the load on firewalls compared to |
| a "deny" rule. But when facing "correctly" developed robots, it can make |
| things worse by forcing HAProxy and the front firewall to support an insane |
| number of concurrent connections. By default an HTTP error 500 is returned. |
| But the response may be customized using the same syntax as |
| "http-request return" rules. Thus, see "http-request return" for details. |
| For compatibility purposes, when no argument is defined, or only |
| "deny_status", |
| the argument "default-errorfiles" is implied. It means |
| "http-request tarpit [deny_status <status>]" is an alias of |
| "http-request tarpit [status <status>] default-errorfiles". |
| No further "http-request" rules are evaluated. |
| See also "http-request return" and "http-request silent-drop". |
| |
| http-request track-sc0 <key> [table <table>] [ { if | unless } <condition> ] |
| http-request track-sc1 <key> [table <table>] [ { if | unless } <condition> ] |
| http-request track-sc2 <key> [table <table>] [ { if | unless } <condition> ] |
| |
| This enables tracking of sticky counters from current request. These rules do |
| not stop evaluation and do not change default action. The number of counters |
| that may be simultaneously tracked by the same connection is set by the |
| global "tune.stick-counters" setting, which defaults to MAX_SESS_STKCTR if |
| set at build time (it is reported in haproxy -vv) and which defaults to 3, |
| so the track-sc number is between 0 and (tune.stick-counters-1). The first |
| "track-sc0" rule executed enables tracking of the counters of the specified |
| table as the first set. The first "track-sc1" rule executed enables tracking |
| of the counters of the specified table as the second set. The first |
| "track-sc2" rule executed enables tracking of the counters of the specified |
| table as the third set. It is a recommended practice to use the first set of |
| counters for the per-frontend counters and the second set for the per-backend |
| ones. But this is just a guideline, all may be used everywhere. |
| |
| Arguments : |
| <key> is mandatory, and is a sample expression rule as described in |
| section 7.3. It describes what elements of the incoming request or |
| connection will be analyzed, extracted, combined, and used to |
| select the table entry whose counters will be updated. |
| |
| <table> is an optional table to be used instead of the default one, which |
| is the stick-table declared in the current proxy. All the counters |
| for the matches and updates for the key will then be performed in |
| that table until the session ends. |
| |
| Once a "track-sc*" rule is executed, the key is looked up in the table and if |
| it is not found, an entry is allocated for it. Then a pointer to that entry |
| is kept during all the session's life, and this entry's counters are updated |
| as often as possible, every time the session's counters are updated, and also |
| systematically when the session ends. Counters are only updated for events |
| that happen after the tracking has been started. As an exception, connection |
| counters and request counters are systematically updated so that they reflect |
| useful information. |
| |
| If the entry tracks concurrent connection counters, one connection is counted |
| for as long as the entry is tracked, and the entry will not expire during |
| that time. Tracking counters also provides a performance advantage over just |
| checking the keys, because only one table lookup is performed for all ACL |
| checks that make use of it. |
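| |
| The sketch below is only illustrative; the table size, expiry and request |
| rate threshold are arbitrary. |
| |
| Example: |
| frontend fe_web |
| bind :80 |
| stick-table type ip size 100k expire 10m store http_req_rate(10s) |
| # first set of counters tracks the client address in the frontend |
| http-request track-sc0 src |
| http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 } |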
| |
| http-request unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. See above for details about <var-name>. |
| |
| Example: |
| http-request unset-var(req.my_var) |
| |
| http-request use-service <service-name> [ { if | unless } <condition> ] |
| |
| This directive executes the configured HTTP service to reply to the request |
| and stops the evaluation of the rules. An HTTP service may choose to reply by |
| sending any valid HTTP response or it may immediately close the connection |
| without sending any response. Besides the native services (for instance the |
| Prometheus exporter), it is possible to write your own services in Lua. No |
| further "http-request" rules are evaluated. |
| |
| Arguments : |
| <service-name> is mandatory. It is the service to call. |
| |
| Example: |
| http-request use-service prometheus-exporter if { path /metrics } |
| |
| http-request wait-for-body time <time> [ at-least <bytes> ] |
| [ { if | unless } <condition> ] |
| |
| This will delay the processing of the request or response until one of the |
| following conditions occurs: |
| - The full request body is received, in which case processing proceeds |
| normally. |
| - <bytes> bytes have been received, when the "at-least" argument is given and |
| <bytes> is non-zero, in which case processing proceeds normally. |
| - The request buffer is full, in which case processing proceeds normally. The |
| size of this buffer is determined by the "tune.bufsize" option. |
| - The request has been waiting for more than <time> milliseconds. In this |
| case HAProxy will respond with a 408 "Request Timeout" error to the client |
| and stop processing the request. Note that if any of the other conditions |
| happens first, this timeout will not occur even if the full body has |
| not yet been received. |
| |
| This action may be used as a replacement for "option http-buffer-request". |
| |
| Arguments : |
| |
| <time> is mandatory. It is the maximum time to wait for the body. It |
| follows the HAProxy time format and is expressed in milliseconds. |
| |
| <bytes> is optional. It is the minimum payload size to receive before the |
| wait stops. It follows the HAProxy size format and is expressed in |
| bytes. A value of 0 (the default) means no limit. |
| |
| Example: |
| http-request wait-for-body time 1s at-least 1k if METH_POST |
| |
| See also : "option http-buffer-request" |
| |
| http-request wait-for-handshake [ { if | unless } <condition> ] |
| |
| This will delay the processing of the request until the SSL handshake has |
| completed. This is mostly useful to delay processing of early data until we |
| are sure it is valid. |
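| |
| The following is an illustrative sketch; the certificate path is an arbitrary |
| placeholder. |
| |
| Example: |
| frontend fe_tls |
| bind :443 ssl crt /etc/haproxy/site.pem allow-0rtt |
| # do not process POSTs received as early data before the handshake |
| # completes |
| http-request wait-for-handshake if METH_POST |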
| |
| |
| http-response <action> <options...> [ { if | unless } <condition> ] |
| Access control for Layer 7 responses |
| |
| May be used in sections: defaults | frontend | listen | backend |
| yes(!) | yes | yes | yes |
| |
| The http-response statement defines a set of rules which apply to layer 7 |
| processing. The rules are evaluated in their declaration order when they are |
| met in a frontend, listen or backend section. Any rule may optionally be |
| followed by an ACL-based condition, in which case it will only be evaluated |
| if the condition is true. Since these rules apply on responses, the backend |
| rules are applied first, followed by the frontend's rules. |
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - add-acl(<file-name>) <key fmt> |
| - add-header <name> <fmt> |
| - allow |
| - cache-store <name> |
| - capture <sample> id <id> |
| - del-acl(<file-name>) <key fmt> |
| - del-header <name> [ -m <meth> ] |
| - del-map(<file-name>) <key fmt> |
| - deny [ { status | deny_status } <code>] ... |
| - redirect <rule> |
| - replace-header <name> <regex-match> <replace-fmt> |
| - replace-value <name> <regex-match> <replace-fmt> |
| - return [status <code>] [content-type <type>] ... |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - send-spoe-group <engine-name> <group-name> |
| - set-bandwidth-limit <name> [limit {<expr> | <size>}] [period {<expr> | <time>}] |
| - set-header <name> <fmt> |
| - set-log-level <level> |
| - set-map(<file-name>) <key fmt> <value fmt> |
| - set-mark <mark> |
| - set-nice <nice> |
| - set-status <status> [reason <str>] |
| - set-tos <tos> |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - silent-drop [ rst-ttl <ttl> ] |
| - strict-mode { on | off } |
| - track-sc0 <key> [table <table>] |
| - track-sc1 <key> [table <table>] |
| - track-sc2 <key> [table <table>] |
| - unset-var(<var-name>) |
| - wait-for-body time <time> [ at-least <bytes> ] |
| |
| The supported actions are described below. |
| |
| There is no limit to the number of http-response statements per instance. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Example: |
| acl key_acl res.hdr(X-Acl-Key) -m found |
| |
| acl myhost hdr(Host) -f myhost.lst |
| |
| http-response add-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl |
| http-response del-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl |
| |
| Example: |
| acl value res.hdr(X-Value) -m found |
| |
| use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found } |
| |
| http-response set-map(map.lst) %[src] %[res.hdr(X-Value)] if value |
| http-response del-map(map.lst) %[src] if ! value |
| |
| See also : "http-request", section 3.4 about userlists and section 7 about |
| ACL usage. |
| |
| http-response add-acl(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to add a new entry into an ACL. Please refer to "http-request |
| add-acl" for a complete description. |
| |
| http-response add-header <name> <fmt> [ { if | unless } <condition> ] |
| |
| This appends an HTTP header field whose name is specified in <name> and whose |
| value is defined by <fmt>. Please refer to "http-request add-header" for a |
| complete description. |
| |
| http-response allow [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and lets the response pass the check. |
| No further "http-response" rules are evaluated for the current section. |
| |
| http-response cache-store <name> [ { if | unless } <condition> ] |
| |
| See section 6.2 about cache setup. |
| |
| http-response capture <sample> id <id> [ { if | unless } <condition> ] |
| |
| This captures sample expression <sample> from the response buffer, and |
| converts it to a string. The resulting string is stored into the next request |
| "capture" slot, so it will possibly appear next to some captured HTTP |
| headers. It will then automatically appear in the logs, and it will be |
| possible to extract it using sample fetch rules to feed it into headers or |
| anything. Please check section 7.3 (Fetching samples) and |
| "capture response header" for more information. |
| |
| The keyword "id" is the id of the capture slot which is used for storing the |
| string. The capture slot must be defined in an associated frontend. |
| This is useful to run captures in backends. The slot id can be declared by a |
| previous directive "http-response capture" or with the "declare capture" |
| keyword. |
| |
| When using this action in a backend, double check that the relevant |
| frontend(s) have the required capture slots, otherwise this rule will be |
| ignored at run time. This can't be detected at configuration parsing time |
| due to HAProxy's ability to dynamically resolve backend names at runtime. |
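| |
| The sketch below is only an illustration; the slot length, header name and |
| addresses are arbitrary placeholders. |
| |
| Example: |
| frontend fe_web |
| bind :80 |
| declare capture response len 64 |
| default_backend be_app |
| |
| backend be_app |
| # store the application version header into response capture slot 0 |
| http-response capture res.hdr(X-App-Version) id 0 |
| server app1 192.0.2.10:8080 |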
| |
| http-response del-acl(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to delete an entry from an ACL. Please refer to "http-request |
| del-acl" for a complete description. |
| |
| http-response del-header <name> [ -m <meth> ] [ { if | unless } <condition> ] |
| |
| This removes all HTTP header fields whose name is specified in <name>. Please |
| refer to "http-request del-header" for a complete description. |
| |
| http-response del-map(<file-name>) <key fmt> [ { if | unless } <condition> ] |
| |
| This is used to delete an entry from a MAP. Please refer to "http-request |
| del-map" for a complete description. |
| |
| http-response deny [deny_status <status>] [ { if | unless } <condition> ] |
| http-response deny [ { status | deny_status } <code>] [content-type <type>] |
| [ { default-errorfiles | errorfile <file> | errorfiles <name> | |
| file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] |
| [ hdr <name> <fmt> ]* |
| [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately rejects the response. |
| By default an HTTP 502 error is returned. But the response may be customized |
| using the same syntax as the "http-response return" rules. Thus, see |
| "http-response return" for details. For compatibility purposes, when no |
| argument is defined, or only "deny_status", the argument "default-errorfiles" |
| is implied. It means "http-response deny [deny_status <status>]" is an alias |
| of "http-response deny [status <status>] default-errorfiles". |
| No further "http-response" rules are evaluated. |
| See also "http-response return". |
| |
| http-response redirect <rule> [ { if | unless } <condition> ] |
| |
| This performs an HTTP redirection based on a redirect rule. |
| This supports a format string similarly to "http-request redirect" rules, |
| with the exception that only the "location" type of redirect is possible on |
| the response. See the "redirect" keyword for the rule's syntax. When a |
| redirect rule is applied during a response, connections to the server are |
| closed so that no data can be forwarded from the server to the client. |
| |
| http-response replace-header <name> <regex-match> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "http-request replace-header" except that it works on the |
| server's response instead of the client's request. |
| |
| Example: |
| http-response replace-header Set-Cookie (C=[^;]*);(.*) \1;ip=%bi;\2 |
| |
| # applied to: |
| Set-Cookie: C=1; expires=Tue, 14-Jun-2016 01:40:45 GMT |
| |
| # outputs: |
| Set-Cookie: C=1;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT |
| |
| # assuming the backend IP is 192.168.1.20. |
| |
| http-response replace-value <name> <regex-match> <replace-fmt> |
| [ { if | unless } <condition> ] |
| |
| This works like "http-request replace-value" except that it works on the |
| server's response instead of the client's request. |
| |
| Example: |
| http-response replace-value Cache-control ^public$ private |
| |
| # applied to: |
| Cache-Control: max-age=3600, public |
| |
| # outputs: |
| Cache-Control: max-age=3600, private |
| |
| http-response return [status <code>] [content-type <type>] |
| [ { default-errorfiles | errorfile <file> | errorfiles <name> | |
| file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] |
| [ hdr <name> <value> ]* |
| [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and immediately returns a |
| response. Please refer to "http-request return" for a complete |
| description. No further "http-response" rules are evaluated. |
| |
| http-response sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-add-gpc" for |
| a complete description. |
| |
| http-response sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| http-response sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| http-response sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| These actions increment the General Purpose Counters according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-inc-gpc", |
| "http-request sc-inc-gpc0" and "http-request sc-inc-gpc1" for a complete |
| description. |
| |
| http-response sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| http-response sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| These actions set the 32-bit unsigned General Purpose Tags according to the |
| sticky counter designated by <sc-id>. Please refer to "http-request |
| sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. |
| |
| http-response send-spoe-group <engine-name> <group-name> |
| [ { if | unless } <condition> ] |
| |
| This action is used to trigger sending of a group of SPOE messages. Please |
| refer to "http-request send-spoe-group" for a complete description. |
| |
| http-response set-bandwidth-limit <name> [limit { <expr> | <size> }] |
| [period { <expr> | <time> }] [ { if | unless } <condition> ] |
| |
| This action is used to enable the bandwidth limitation filter <name>, either |
| on the upload or download direction depending on the filter type. Please |
| refer to "http-request set-bandwidth-limit" for a complete description. |
| |
| http-response set-header <name> <fmt> [ { if | unless } <condition> ] |
| |
| This does the same as "http-response add-header" except that the header name |
| is first removed if it existed. This is useful when passing security |
| information to the client, where the header must not be manipulated by |
| external users. |
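| |
| The sketch below is only illustrative; the header values merely show common |
| hardening headers and are not a recommendation. |
| |
| Example: |
| http-response set-header Strict-Transport-Security max-age=16000000 |
| http-response set-header X-Frame-Options DENY |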
| |
| http-response set-log-level <level> [ { if | unless } <condition> ] |
| |
| This is used to change the log level of the current response. Please refer to |
| "http-request set-log-level" for a complete description. |
| |
| http-response set-map(<file-name>) <key fmt> <value fmt> |
| |
| This is used to add a new entry into a MAP. Please refer to "http-request |
| set-map" for a complete description. |
| |
| http-response set-mark <mark> [ { if | unless } <condition> ] |
| |
| This action is used to set the Netfilter/IPFW MARK in all packets sent to the |
| client to the value passed in <mark> on platforms which support it. Please |
| refer to "http-request set-mark" for a complete description. |
| |
| http-response set-nice <nice> [ { if | unless } <condition> ] |
| |
| This sets the "nice" factor of the current request being processed. Please |
| refer to "http-request set-nice" for a complete description. |
| |
| http-response set-status <status> [reason <str>] |
| [ { if | unless } <condition> ] |
| |
| This replaces the response status code with <status> which must be an integer |
| between 100 and 999. Optionally, a custom reason text can be provided defined |
| by <str>, or the default reason for the specified code will be used as a |
| fallback. |
| |
| Example: |
| # return "431 Request Header Fields Too Large" |
| http-response set-status 431 |
| # return "503 Slow Down", custom reason |
| http-response set-status 503 reason "Slow Down" |
| |
| http-response set-tos <tos> [ { if | unless } <condition> ] |
| |
| This is used to set the TOS or DSCP field value of packets sent to the client |
| to the value passed in <tos> on platforms which support this. Please refer to |
| "http-request set-tos" for a complete description. |
| |
| http-response set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| http-response set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. Please refer to "http-request set-var" and "http-request set-var-fmt" |
| for a complete description. |
| |
| http-response silent-drop [ rst-ttl <ttl> ] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and makes the client-facing connection |
| suddenly disappear using a system-dependent way that tries to prevent the |
| client from being notified. Please refer to "http-request silent-drop" for a |
| complete description. |
| |
| http-response strict-mode { on | off } [ { if | unless } <condition> ] |
| |
| This enables or disables the strict rewriting mode for following |
| rules. Please refer to "http-request strict-mode" for a complete description. |
| |
| http-response track-sc0 <key> [table <table>] [ { if | unless } <condition> ] |
| http-response track-sc1 <key> [table <table>] [ { if | unless } <condition> ] |
| http-response track-sc2 <key> [table <table>] [ { if | unless } <condition> ] |
| |
| This enables tracking of sticky counters from current connection. Please |
| refer to "http-request track-sc0", "http-request track-sc1" and "http-request |
| track-sc2" for a complete description. |
| |
| http-response unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. See "http-request set-var" for details |
| about <var-name>. |
| |
| http-response wait-for-body time <time> [ at-least <bytes> ] |
| [ { if | unless } <condition> ] |
| |
| This will delay the processing of the response waiting for the payload for at |
| most <time> milliseconds. Please refer to "http-request wait-for-body" for a |
| complete description. |
| |
| |
| http-reuse { never | safe | aggressive | always } |
| Declare how idle HTTP connections may be shared between requests |
| |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| By default, a connection established between HAProxy and the backend server |
| which is considered safe for reuse is moved back to the server's idle |
| connections pool so that any other request can make use of it. This is the |
| "safe" strategy below. |
| |
| The argument indicates the desired connection reuse strategy : |
| |
| - "never" : idle connections are never shared between sessions. This mode |
| may be enforced to cancel a different strategy inherited from |
| a defaults section or for troubleshooting. For example, if an |
| old bogus application considers that multiple requests over |
| the same connection come from the same client and it is not |
| possible to fix the application, it may be desirable to |
| disable connection sharing in a single backend. An example of |
| such an application could be an old HAProxy using cookie |
| insertion in tunnel mode and not checking any request past the |
| first one. |
| |
| - "safe" : this is the default and the recommended strategy. The first |
| request of a session is always sent over its own connection, |
| and only subsequent requests may be dispatched over other |
| existing connections. This ensures that in case the server |
| closes the connection when the request is being sent, the |
| browser can decide to silently retry it. Since it is exactly |
| equivalent to regular keep-alive, there should be no side |
| effects. There is also a special handling for the connections |
| using protocols subject to Head-of-line blocking (backend with |
| h2 or fcgi). In this case, when at least one stream is |
| processed, the used connection is reserved to handle streams |
| of the same session. When no more streams are processed, the |
| connection is released and can be reused. |
| |
| - "aggressive" : this mode may be useful in webservices environments where |
| all servers are not necessarily known and where it would be |
| desirable to deliver most first requests over existing |
| connections. In this case, first requests are only delivered |
| over existing connections that have been reused at least once, |
| proving that the server correctly supports connection reuse. |
| It should only be used when it is certain that the client can |
| retry a failed request once in a while and where the benefit |
| of aggressive connection reuse significantly outweighs the |
| downsides of rare connection failures. |
| |
| - "always" : this mode is only recommended when the path to the server is |
| known for never breaking existing connections quickly after |
| releasing them. It allows the first request of a session to be |
| sent to an existing connection. This can provide a significant |
| performance increase over the "safe" strategy when the backend |
| is a cache farm, since such components tend to show a |
| consistent behavior and will benefit from the connection |
| sharing. It is recommended that the "http-keep-alive" timeout |
| remains low in this mode so that no dead connections remain |
| usable. In most cases, this will lead to the same performance |
| gains as "aggressive" but with more risks. It should only be |
| used when it improves the situation over "aggressive". |
| |
| When HTTP connection sharing is enabled, great care is taken to respect the |
| connection properties and compatibility. Indeed, some properties are specific |
| and it is not possible to reuse connections blindly. These are the SSL SNI, |
| the source and destination addresses and the PROXY protocol block. A |
| connection is reused only if it shares the same set of properties with the |
| request. |
| |
| Also note that connections with certain bogus authentication schemes (relying |
| on the connection) like NTLM are marked private if possible and never shared. |
| This won't be the case however when using a protocol with multiplexing |
| abilities and a reuse mode more aggressive than the default "safe" strategy, |
| as in this case nothing prevents the connection from being already shared. |
| |
| A connection pool is involved and configurable with "pool-max-conn". |
| |
| Note: connection reuse improves the accuracy of the "server maxconn" setting, |
| because almost no new connection will be established while idle connections |
| remain available. This is particularly true with the "always" strategy. |
| |
| The rules to decide to keep an idle connection opened or to close it after |
| processing are also governed by the "tune.pool-low-fd-ratio" (default: 20%) |
| and "tune.pool-high-fd-ratio" (default: 25%). These correspond to the |
| percentage of total file descriptors spent in idle connections above which |
| haproxy will respectively refrain from keeping a connection opened after a |
| response, and actively kill idle connections. Some setups using a very high |
| ratio of idle connections, either because of too low a global "maxconn", or |
| due to a lot of HTTP/2 or HTTP/3 traffic on the frontend (few connections) |
| but HTTP/1 connections on the backend, may observe a lower reuse rate because |
| too few connections are kept open. It may be desirable in this case to adjust |
| such thresholds or simply to increase the global "maxconn" value. |
| |
| Similarly, when thread groups are explicitly enabled, it is important to |
| understand that idle connections are only usable between threads from a same |
| group. As such it may happen that unfair load between groups leads to more |
| idle connections being needed, causing a lower reuse rate. The same solution |
| may then be applied (increase global "maxconn" or increase pool ratios). |
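| |
| The following illustrative sketch assumes a cache farm backend; the names, |
| addresses and timeout are arbitrary. |
| |
| Example: |
| backend be_cache_farm |
| http-reuse always |
| # keep the keep-alive timeout low so that no dead idle connection |
| # remains usable |
| timeout http-keep-alive 10s |
| server c1 192.0.2.21:8080 maxconn 200 |
| server c2 192.0.2.22:8080 maxconn 200 |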
| |
| See also : "option http-keep-alive", "server maxconn", "thread-groups", |
| "tune.pool-high-fd-ratio", "tune.pool-low-fd-ratio" |
| |
| |
| http-send-name-header [<header>] |
| Add the server name to a request. Use the header string given by <header> |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <header> The header string to use to send the server name |
| |
| The "http-send-name-header" statement causes the header field named <header> |
| to be set to the name of the target server at the moment the request is about |
| to be sent on the wire. Any existing occurrences of this header are removed. |
| Upon retries and redispatches, the header field is updated to always reflect |
| the server being attempted to connect to. Given that this header is modified |
| very late in the connection setup, it may have unexpected effects on already |
| modified headers. For example using it with a transport-level header such as |
| connection, content-length, transfer-encoding and so on will likely result in |
| invalid requests being sent to the server. Additionally it has been reported |
| that this directive is currently being used as a way to overwrite the Host |
| header field in outgoing requests; while this trick has been known to work |
| as a side effect of the feature for some time, it is not officially supported |
| and might possibly not work anymore in a future version depending on the |
| technical difficulties this feature induces. A long-term solution instead |
| consists in fixing the application which required this trick so that it binds |
| to the correct host name. |
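| |
| The sketch below is only an illustration; the header and server names are |
| arbitrary placeholders. |
| |
| Example: |
| backend be_app |
| # let the application log which server actually handled the request |
| http-send-name-header X-Target-Server |
| server app1 192.0.2.10:8080 check |
| server app2 192.0.2.11:8080 check |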
| |
| See also : "server" |
| |
| id <value> |
| Set a persistent ID to a proxy. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| Arguments : |
| <value> is the integer ID to assign; it must be unique and positive. |
| |
| Set a persistent ID for the proxy. This ID must be unique and positive. |
| An unused ID will automatically be assigned if unset. The first assigned |
| value will be 1. This ID is currently only returned in statistics. |
| |
| |
| ignore-persist { if | unless } <condition> |
| Declare a condition to ignore persistence |
| May be used in sections: defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| By default, when cookie persistence is enabled, every request containing |
| the cookie is unconditionally persistent (assuming the target server is up |
| and running). |
| |
| The "ignore-persist" statement allows one to declare various ACL-based |
| conditions which, when met, will cause a request to ignore persistence. |
| This is sometimes useful to load balance requests for static files, which |
| often don't require persistence. This can also be used to fully disable |
| persistence for a specific User-Agent (for example, some web crawler bots). |
| |
| The persistence is ignored when an "if" condition is met, or unless an |
| "unless" condition is met. |
| |
| Example: |
| acl url_static path_beg /static /images /img /css |
| acl url_static path_end .gif .png .jpg .css .js |
| ignore-persist if url_static |
| |
| See also : "force-persist", "cookie", and section 7 about ACL usage. |
| |
| load-server-state-from-file { global | local | none } |
| Allow seamless reload of HAProxy |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| This directive points HAProxy to a file where the server state from the |
| previous running process has been saved. That way, when starting up, before |
| handling traffic, the new process can apply old states to servers exactly as |
| if no reload occurred. The purpose of the "load-server-state-from-file" |
| directive is to tell HAProxy which file to use. For now, the arguments either |
| prevent any state from being loaded or load states from a file containing |
| all backends and servers. The state file can be generated by running the |
| command "show servers state" over the stats socket and redirecting its |
| output to a file. |
| |
| The format of the file is versioned and is very specific. To understand it, |
| please read the documentation of the "show servers state" command (chapter |
| 9.3 of Management Guide). |
| |
| Arguments: |
| global load the content of the file pointed by the global directive |
| named "server-state-file". |
| |
| local load the content of the file pointed by the directive |
| "server-state-file-name" if set. If not set, then the backend |
| name is used as a file name. |
| |
| none don't load any state for this backend |
| |
| Notes: |
| - server's IP address is preserved across reloads by default, but the |
| order can be changed thanks to the server's "init-addr" setting. This |
| means that an IP address change performed on the CLI at run time will |
| be preserved, and that any change to the local resolver (e.g. /etc/hosts) |
| will possibly not have any effect if the state file is in use. |
| |
| - server's weight is applied from the previous running process unless it |
| has changed between the previous and new configuration files. |
| |
| Example: Minimal configuration |
| |
| global |
| stats socket /tmp/socket |
| server-state-file /tmp/server_state |
| |
| defaults |
| load-server-state-from-file global |
| |
| backend bk |
| server s1 127.0.0.1:22 check weight 11 |
| server s2 127.0.0.1:22 check weight 12 |
| |
| |
| Then one can run : |
| |
| socat /tmp/socket - <<< "show servers state" > /tmp/server_state |
| |
| Content of the file /tmp/server_state would be like this: |
| |
| 1 |
| # <field names skipped for the doc example> |
| 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0 |
| 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0 |
| |
| Example: Minimal configuration |
| |
| global |
| stats socket /tmp/socket |
| server-state-base /etc/haproxy/states |
| |
| defaults |
| load-server-state-from-file local |
| |
| backend bk |
| server s1 127.0.0.1:22 check weight 11 |
| server s2 127.0.0.1:22 check weight 12 |
| |
| |
| Then one can run : |
| |
| socat /tmp/socket - <<< "show servers state bk" > /etc/haproxy/states/bk |
| |
| Content of the file /etc/haproxy/states/bk would be like this: |
| |
| 1 |
| # <field names skipped for the doc example> |
| 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0 |
| 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0 |
| |
| See also: "server-state-file", "server-state-file-name", and |
| "show servers state" |
| |
| |
| log global |
| log <address> [len <length>] [format <format>] [sample <ranges>:<sample_size>] |
| <facility> [<level> [<minlevel>]] |
| no log |
| Enable per-instance logging of events and traffic. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Prefix : |
| no should be used when the logger list must be flushed. For example, |
| if you don't want to inherit from the default logger list. This |
| prefix does not allow arguments. |
| |
| Arguments : |
| global should be used when the instance's logging parameters are the |
| same as the global ones. This is the most common usage. "global" |
| replaces <address>, <facility> and <level> with those of the log |
| entries found in the "global" section. Only one "log global" |
| statement may be used per instance, and this form takes no other |
| parameter. |
| |
| <address> indicates where to send the logs. It takes the same format as |
| for the "global" section's logs, and can be one of : |
| |
| - An IPv4 address optionally followed by a colon (':') and a UDP |
| port. If no port is specified, 514 is used by default (the |
| standard syslog port). |
| |
| - An IPv6 address followed by a colon (':') and optionally a UDP |
| port. If no port is specified, 514 is used by default (the |
| standard syslog port). |
| |
| - A filesystem path to a UNIX domain socket, keeping in mind |
| considerations for chroot (be sure the path is accessible |
| inside the chroot) and uid/gid (be sure the path is |
| appropriately writable). |
| |
| - A file descriptor number in the form "fd@<number>", which may |
| point to a pipe, terminal, or socket. In this case unbuffered |
| logs are used and one writev() call per log is performed. This |
| is a bit expensive but acceptable for most workloads. Messages |
| sent this way will not be truncated but may be dropped, in |
| which case the DroppedLogs counter will be incremented. The |
| writev() call is atomic even on pipes for messages up to |
| PIPE_BUF size, which POSIX recommends to be at least 512 and |
| which is 4096 bytes on most modern operating systems. Any |
| larger message may be interleaved with messages from other |
| processes. Exceptionally for debugging purposes the file |
| descriptor may also be directed to a file, but doing so will |
| significantly slow HAProxy down as non-blocking calls will be |
| ignored. Also there will be no way to purge nor rotate this |
| file without restarting the process. Note that the configured |
| syslog format is preserved, so the output is suitable for use |
| with a TCP syslog server. See also the "short" and "raw" |
| formats below. |
| |
| - "stdout" / "stderr", which are respectively aliases for "fd@1" |
| and "fd@2", see above. |
| |
| - A ring buffer in the form "ring@<name>", which will correspond |
| to an in-memory ring buffer accessible over the CLI using the |
| "show events" command, which will also list existing rings and |
| their sizes. Such buffers are lost on reload or restart but |
| when used as a complement this can help troubleshooting by |
| having the logs instantly available. |
| |
| - An explicit stream address prefix such as "tcp@","tcp6@", |
| "tcp4@" or "uxst@" will allocate an implicit ring buffer with |
| a stream forward server targeting the given address. |
| |
| You may want to reference some environment variables in the |
| address parameter, see section 2.3 about environment variables. |
| |
| <length> is an optional maximum line length. Log lines larger than this |
| value will be truncated before being sent. The reason is that |
| syslog servers act differently on log line length. All servers |
| support the default value of 1024, but some servers simply drop |
| larger lines while others do log them. If a server supports long |
| lines, it may make sense to set this value here in order to avoid |
| truncating long lines. Similarly, if a server drops long lines, |
| it is preferable to truncate them before sending them. Accepted |
| values are 80 to 65535 inclusive. The default value of 1024 is |
| generally fine for all standard usages. Some specific cases of |
| long captures or JSON-formatted logs may require larger values. |
| |
| <ranges> A list of comma-separated ranges to identify the logs to sample. |
| This is used to balance the load of the logs to send to the log |
| server. The limits of the ranges cannot be null. They are numbered |
| from 1. The size or period (in number of logs) of the sample must |
| be set with <sample_size> parameter. |
| |
| <sample_size> |
| The size of the sample in number of logs to consider when balancing |
| their logging loads. It is used to balance the load of the logs to |
| send to the syslog server. This size must be greater or equal to the |
| maximum of the high limits of the ranges. |
| (see also <ranges> parameter). |
| |
| <format> is the log format used when generating syslog messages. It may be |
| one of the following : |
| |
| local Analogous to the rfc3164 syslog message format, except that |
| the hostname field is stripped. This is the default. |
| Note: option "log-send-hostname" switches the default to |
| rfc3164. |
| |
| rfc3164 The RFC3164 syslog message format. |
| (https://tools.ietf.org/html/rfc3164) |
| |
| rfc5424 The RFC5424 syslog message format. |
| (https://tools.ietf.org/html/rfc5424) |
| |
| priority A message containing only a level plus syslog facility between |
| angle brackets such as '<63>', followed by the text. The PID, |
| date, time, process name and system name are omitted. This is |
| designed to be used with a local log server. |
| |
| short A message containing only a level between angle brackets such as |
| '<3>', followed by the text. The PID, date, time, process name |
| and system name are omitted. This is designed to be used with a |
| local log server. This format is compatible with what the |
| systemd logger consumes. |
| |
| timed A message containing only a level between angle brackets such as |
| '<3>', followed by ISO date and by the text. The PID, process |
| name and system name are omitted. This is designed to be |
| used with a local log server. |
| |
| iso A message containing only the ISO date, followed by the text. |
| The PID, process name and system name are omitted. This is |
| designed to be used with a local log server. |
| |
| raw A message containing only the text. The level, PID, date, time, |
| process name and system name are omitted. This is designed to |
| be used in containers or during development, where the severity |
| only depends on the file descriptor used (stdout/stderr). |
| |
| <facility> must be one of the 24 standard syslog facilities : |
| |
| kern user mail daemon auth syslog lpr news |
| uucp cron auth2 ftp ntp audit alert cron2 |
| local0 local1 local2 local3 local4 local5 local6 local7 |
| |
| Note that the facility is ignored for the "short" and "raw" |
| formats, but still required as a positional field. It is |
| recommended to use "daemon" in this case to make it clear that |
| it's only supposed to be used locally. |
| |
| <level> is optional and can be specified to filter outgoing messages. By |
| default, all messages are sent. If a level is specified, only |
| messages with a severity at least as important as this level |
| will be sent. An optional minimum level can be specified. If it |
| is set, logs emitted with a more severe level than this one will |
| be capped to this level. This is used to avoid sending "emerg" |
| messages on all terminals on some default syslog configurations. |
| Eight levels are known : |
| |
| emerg alert crit err warning notice info debug |
| |
| It is important to keep in mind that it is the frontend which decides what to |
| log from a connection, and that in case of content switching, the log entries |
| from the backend will be ignored. Connections are logged at level "info". |
| |
| However, backend log declarations define how and where server status changes |
| will be logged. Level "notice" will be used to indicate a server going up, |
| "warning" will be used for termination signals and definitive service |
| termination, and "alert" will be used when a server goes down. |
| |
| Note : According to RFC3164, messages are truncated to 1024 bytes before |
| being emitted. |
| |
| Example : |
| log global |
| log stdout format short daemon # send log to systemd |
| log stdout format raw daemon # send everything to stdout |
| log stderr format raw daemon notice # send important events to stderr |
| log 127.0.0.1:514 local0 notice # only send important events |
| log tcp@127.0.0.1:514 local0 notice notice # same but limit output |
| # level and send in tcp |
| log "${LOCAL_SYSLOG}:514" local0 notice # send to local server |
| |
| |
| log-format <string> |
| Specifies the log format string to use for traffic logs |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | no |
| |
| This directive specifies the log format string that will be used for all logs |
| resulting from traffic passing through the frontend using this line. If the |
| directive is used in a defaults section, all subsequent frontends will use |
| the same log format. Please see section 8.2.6 which covers the custom log |
| format string in depth. |
| |
| A specific log-format used only in case of connection error can also be |
| defined, see the "error-log-format" option. |
| |
| "log-format" directive overrides previous "option tcplog", "log-format", |
| "option httplog" and "option httpslog" directives. |
| |
| log-format-sd <string> |
| Specifies the RFC5424 structured-data log format string |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | no |
| |
| This directive specifies the RFC5424 structured-data log format string that |
| will be used for all logs resulting from traffic passing through the frontend |
| using this line. If the directive is used in a defaults section, all |
| subsequent frontends will use the same log format. Please see section 8.2.6 |
| which covers the log format string in depth. |
| |
| See https://tools.ietf.org/html/rfc5424#section-6.3 for more information |
| about the RFC5424 structured-data part. |
| |
| Note : This log format string will be used only for loggers that have set |
| log format to "rfc5424". |
| |
| Example : |
| log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"] |
| |
| |
| log-tag <string> |
| Specifies the log tag to use for all outgoing logs |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| Sets the tag field in the syslog header to this string. It defaults to the |
| log-tag set in the global section, otherwise the program name as launched |
| from the command line, which usually is "haproxy". Sometimes it can be useful |
| to differentiate between multiple processes running on the same host, or to |
| differentiate customer instances running in the same process. In the backend, |
| logs about servers up/down will use this tag. As a hint, it can be convenient |
| to set a log-tag related to a hosted customer in a defaults section then put |
| all the frontends and backends for that customer, then start another customer |
| in a new defaults section. See also the global "log-tag" directive. |
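| |
| The sketch below is only an illustration; the customer and section names are |
| arbitrary placeholders. |
| |
| Example: |
| defaults customer-a |
| log global |
| log-tag customer-a |
| |
| frontend fe_customer_a from customer-a |
| bind :8080 |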
| |
| max-keep-alive-queue <value> |
| Set the maximum server queue size for maintaining keep-alive connections |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| HTTP keep-alive tries to reuse the same server connection whenever possible, |
| but sometimes it can be counter-productive, for example if a server has a lot |
| of connections while other ones are idle. This is especially true for static |
| servers. |
| |
| The purpose of this setting is to set a threshold on the number of queued |
| connections at which HAProxy stops trying to reuse the same server and prefers |
| to find another one. The default value, -1, means there is no limit. A value |
| of zero means that keep-alive requests will never be queued. For very close |
| servers which can be reached with a low latency and which are not sensitive |
| to breaking keep-alive, a low value is recommended (e.g. local static servers |
| can use a value of 10 or less). For remote servers suffering from a high |
| latency, higher values might be needed to cover for the latency and/or the |
| cost of picking a different server. |
| |
| Note that this has no impact on requests which are maintained to the same |
| server following a 401 response. They will still go to the same server |
| even if they have to be queued. |
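| |
| Purely as an illustration, the threshold and addresses below are arbitrary. |
| |
| Example: |
| backend be_static |
| # local static servers: rather than queueing a keep-alive request |
| # behind more than 10 pending connections, pick another server |
| max-keep-alive-queue 10 |
| server st1 192.0.2.31:80 maxconn 100 |
| server st2 192.0.2.32:80 maxconn 100 |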
| |
| See also : "option http-server-close", "option prefer-last-server", server |
| "maxconn" and cookie persistence. |
| |
| max-session-srv-conns <nb> |
| Set the maximum number of outgoing connections we can keep idling for a given |
| client session. The default is 5 (it precisely equals MAX_SRV_LIST which is |
| defined at build time). |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| |
| maxconn <conns> |
| Fix the maximum number of concurrent connections on a frontend |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <conns> is the maximum number of concurrent connections the frontend will |
| accept to serve. Excess connections will be queued by the system |
| in the socket's listen queue and will be served once a connection |
| closes. |
| |
| If the system supports it, it can be useful on big sites to raise this limit |
| very high so that HAProxy manages connection queues, instead of leaving the |
| clients with unanswered connection attempts. This value should not exceed the |
| global maxconn. Also, keep in mind that a connection contains two buffers |
| of tune.bufsize (16kB by default) each, as well as some other data resulting |
| in about 33 kB of RAM being consumed per established connection. That means |
| that a medium system equipped with 1GB of RAM can withstand around |
| 20000-25000 concurrent connections if properly tuned. |
| |
| Also, when <conns> is set to large values, it is possible that the servers |
| are not sized to accept such loads, and for this reason it is generally wise |
| to assign them some reasonable connection limits. |
| |
| When this value is set to zero, which is the default, the global "maxconn" |
| value is used. |
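| |
| The sketch below is only illustrative; the figure simply follows the |
| sizing hint above (about 33 kB per connection, i.e. roughly 660 MB of RAM |
| for 20000 connections). |
| |
| Example: |
| frontend fe_web |
| bind :80 |
| maxconn 20000 |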
| |
| See also : "server", global section's "maxconn", "fullconn" |
| |
| |
| mode { tcp|http } |
| Set the running mode or protocol of the instance |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| tcp The instance will work in pure TCP mode. A full-duplex connection |
| will be established between clients and servers, and no layer 7 |
| examination will be performed. This is the default mode. It |
| should be used for SSL, SSH, SMTP, ... |
| |
| http The instance will work in HTTP mode. The client request will be |
| analyzed in depth before connecting to any server. Any request |
| which is not RFC-compliant will be rejected. Layer 7 filtering, |
| processing and switching will be possible. This is the mode which |
| brings HAProxy most of its value. |
| |
| When doing content switching, it is mandatory that the frontend and the |
| backend are in the same mode (generally HTTP), otherwise the configuration |
| will be refused. |
| |
| Example : |
| defaults http_instances |
| mode http |
| |
| |
| monitor fail { if | unless } <condition> |
| Add a condition to report a failure to a monitor HTTP request. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments : |
| if <cond> the monitor request will fail if the condition is satisfied, |
| and will succeed otherwise. The condition should describe a |
| combined test which must induce a failure if all conditions |
| are met, for instance a low number of servers both in a |
| backend and its backup. |
| |
| unless <cond> the monitor request will succeed only if the condition is |
| satisfied, and will fail otherwise. Such a condition may be |
| based on a test on the presence of a minimum number of active |
| servers in a list of backends. |
| |
| This statement adds a condition which can force the response to a monitor |
| request to report a failure. By default, when an external component queries |
| the URI dedicated to monitoring, a 200 response is returned. When one of the |
| conditions above is met, HAProxy will return 503 instead of 200. This is |
| very useful to report a site failure to an external component which may base |
| routing advertisements between multiple sites on the availability reported by |
| HAProxy. In this case, one would rely on an ACL involving the "nbsrv" |
| criterion. Note that "monitor fail" only works in HTTP mode. Both status |
| messages may be tweaked using "errorfile" or "errorloc" if needed. |
| |
| Example: |
| frontend www |
| mode http |
| acl site_dead nbsrv(dynamic) lt 2 |
| acl site_dead nbsrv(static) lt 2 |
| monitor-uri /site_alive |
| monitor fail if site_dead |
| |
| See also : "monitor-uri", "errorfile", "errorloc" |
| |
| |
| monitor-uri <uri> |
| Intercept a URI used by external components' monitor requests |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <uri> is the exact URI which we want to intercept to return HAProxy's |
| health status instead of forwarding the request. |
| |
| When an HTTP request referencing <uri> will be received on a frontend, |
| HAProxy will not forward it nor log it, but instead will return either |
| "HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure |
| conditions defined with "monitor fail". This is normally enough for any |
| front-end HTTP probe to detect that the service is UP and running without |
| forwarding the request to a backend server. Note that the HTTP method, the |
| version and all headers are ignored, but the request must at least be valid |
| at the HTTP level. This keyword may only be used with an HTTP-mode frontend. |
| |
| Monitor requests are processed very early, just after the request is parsed |
| and even before any "http-request". The only rulesets applied before are the |
| tcp-request ones. They cannot be logged either, and it is the intended |
| purpose. Only one URI may be configured for monitoring; when multiple |
| "monitor-uri" statements are present, the last one will define the URI to |
| be used. They are only used to report HAProxy's health to an upper component, |
| nothing more. However, it is possible to add any number of conditions using |
| "monitor fail" and ACLs so that the result can be adjusted to whatever check |
| can be imagined (most often the number of available servers in a backend). |
| |
| Note: if <uri> starts with a slash ('/'), the matching is performed against the |
| request's path instead of the request's uri. It is a workaround to let |
| the HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients |
| are encouraged to send absolute URIs only. |
| |
| Example : |
| # Use /haproxy_test to report HAProxy's status |
| frontend www |
| mode http |
| monitor-uri /haproxy_test |
| |
| See also : "monitor fail" |
| |
| |
| option abortonclose |
| no option abortonclose |
| Enable or disable early dropping of aborted requests pending in queues. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| In presence of very high loads, the servers will take some time to respond. |
| The per-instance connection queue will inflate, and the response time will |
| increase in proportion to the size of the queue times the average per-session |
| response time. When clients wait for more than a few seconds, they will |
| often hit the "STOP" button on their browser, leaving a useless request in |
| the queue and slowing down other users, as well as the servers, because the |
| request will eventually be served, then aborted at the first error |
| encountered while delivering the response. |
| |
| As there is no way to distinguish between a full STOP and a simple output |
| close on the client side, HTTP agents should be conservative and consider |
| that the client might only have closed its output channel while waiting for |
| the response. However, this introduces risks of congestion when lots of users |
| do the same, and is completely useless nowadays because probably no client at |
| all will close the session while waiting for the response. Some HTTP agents |
| support this behavior (Squid, Apache, HAProxy), and others do not (TUX, most |
| hardware-based load balancers). So the probability for a closed input channel |
| to represent a user hitting the "STOP" button is close to 100%, and the risk |
| of being the single component to break rare but valid traffic is extremely |
| low, which adds to the temptation to abort a session early while it is still |
| not served, rather than polluting the servers. |
| |
| In HAProxy, the user can choose the desired behavior using the option |
| "abortonclose". By default (without the option) the behavior is HTTP |
| compliant and aborted requests will be served. But when the option is |
| specified, a session with an incoming channel closed will be aborted while |
| it is still possible, either pending in the queue for a connection slot, or |
| during the connection establishment if the server has not yet acknowledged |
| the connection request. This considerably reduces the queue size and the load |
| on saturated servers when users are tempted to click on STOP, which in turn |
| reduces the response time for other users. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
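| |
| The following sketch is only an illustration; the backend layout and limits |
| are arbitrary. |
| |
| Example: |
| backend be_app |
| # drop queued requests whose client already gave up (hit "STOP") |
| option abortonclose |
| timeout queue 30s |
| server app1 192.0.2.10:8080 maxconn 50 maxqueue 100 |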
| |
| See also : "timeout queue" and server's "maxconn" and "maxqueue" parameters |
| |
| |
| option accept-invalid-http-request |
| no option accept-invalid-http-request |
| Enable or disable relaxing of HTTP request parsing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| By default, HAProxy complies with RFC7230 in terms of message parsing. This |
| means that invalid characters in header names are not permitted and cause an |
| error to be returned to the client. This is the desired behavior as such |
| forbidden characters are essentially used to build attacks exploiting server |
| weaknesses, and bypass security filtering. Sometimes, a buggy browser or |
| server will emit invalid header names for whatever reason (configuration, |
| implementation) and the issue will not be immediately fixed. In such a case, |
| it is possible to relax HAProxy's header name parser to accept any character |
| even if that does not make sense, by specifying this option. Similarly, the |
| list of characters allowed to appear in a URI is well defined by RFC3986, and |
| chars 0-31, 32 (space), 34 ('"'), 60 ('<'), 62 ('>'), 92 ('\'), 94 ('^'), 96 |
| ('`'), 123 ('{'), 124 ('|'), 125 ('}'), 127 (delete) and anything above are |
| not allowed at all. HAProxy always blocks a number of them (0..32, 127). The |
| remaining ones are blocked by default unless this option is enabled. This |
| option also relaxes the test on the HTTP version, it allows HTTP/0.9 requests |
| to pass through (no version specified), as well as different protocol names |
| (e.g. RTSP), and multiple digits for both the major and the minor version. |
| Finally, this option also allows incoming URLs to contain fragment references |
| ('#' after the path). |
| |
| This option should never be enabled by default as it hides application bugs |
| and open security breaches. It should only be deployed after a problem has |
| been confirmed. |
| |
| When this option is enabled, erroneous header names will still be accepted in |
| requests, but the complete request will be captured in order to permit later |
| analysis using the "show errors" request on the UNIX stats socket. Similarly, |
| requests containing invalid chars in the URI part will be logged. Doing this |
| also helps confirming that the issue has been solved. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
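| |
| A purely illustrative sketch (names and addresses are placeholders), to be |
| deployed only after a problem has been confirmed : |
| |
| Example : |
| # hypothetical frontend temporarily relaxing the parser for buggy clients |
| frontend legacy_clients |
| bind :8080 |
| mode http |
| option accept-invalid-http-request |
| default_backend legacy_app |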
| |
| See also : "option accept-invalid-http-response" and "show errors" on the |
| stats socket. |
| |
| |
| option accept-invalid-http-response |
| no option accept-invalid-http-response |
| Enable or disable relaxing of HTTP response parsing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| By default, HAProxy complies with RFC7230 in terms of message parsing. This |
| means that invalid characters in header names are not permitted and cause an |
| error to be returned to the client. This is the desired behavior as such |
| forbidden characters are essentially used to build attacks exploiting server |
| weaknesses, and bypass security filtering. Sometimes, a buggy browser or |
| server will emit invalid header names for whatever reason (configuration, |
| implementation) and the issue will not be immediately fixed. In such a case, |
| it is possible to relax HAProxy's header name parser to accept any character |
| even if that does not make sense, by specifying this option. This option also |
| relaxes the test on the HTTP version format, it allows multiple digits for |
| both the major and the minor version. |
| |
| This option should never be enabled by default as it hides application bugs |
| and open security breaches. It should only be deployed after a problem has |
| been confirmed. |
| |
| When this option is enabled, erroneous header names will still be accepted in |
| responses, but the complete response will be captured in order to permit |
| later analysis using the "show errors" request on the UNIX stats socket. |
| Doing this also helps confirming that the issue has been solved. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option accept-invalid-http-request" and "show errors" on the |
| stats socket. |
| |
| |
| option allbackups |
| no option allbackups |
| Use either all backup servers at a time or only the first one |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| By default, the first operational backup server gets all traffic when normal |
| servers are all down. Sometimes, it may be preferred to use multiple backups |
| at once, because one will not be enough. When "option allbackups" is enabled, |
| the load balancing will be performed among all backup servers when all normal |
| ones are unavailable. The same load balancing algorithm will be used and the |
| servers' weights will be respected. Thus, there will not be any priority |
| order between the backup servers anymore. |
| |
| This option is mostly used with static server farms dedicated to return a |
| "sorry" page when an application is completely offline. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
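| |
| For illustration, a hypothetical "sorry" farm spreading the load over all |
| backup servers at once (names and addresses are placeholders) : |
| |
| Example : |
| # hypothetical backend: both backup servers are used when app1 is down |
| backend app |
| mode http |
| option allbackups |
| server app1 192.168.1.10:80 check |
| server sorry1 192.168.1.20:80 check backup |
| server sorry2 192.168.1.21:80 check backup |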
| |
| |
| option checkcache |
| no option checkcache |
| Analyze all server responses and block responses with cacheable cookies |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| Some high-level frameworks set application cookies everywhere and do not |
| always leave enough control to the developer to manage how the responses should |
| be cached. When a session cookie is returned on a cacheable object, there is a |
| high risk of session crossing or stealing between users traversing the same |
| caches. In some situations, it is better to block the response than to let |
| some sensitive session information go in the wild. |
| |
| The option "checkcache" enables deep inspection of all server responses for |
| strict compliance with HTTP specification in terms of cacheability. It |
| carefully checks "Cache-control", "Pragma" and "Set-cookie" headers in server |
| response to check if there's a risk of caching a cookie on a client-side |
| proxy. When this option is enabled, the only responses which can be delivered |
| to the client are : |
| - all those without "Set-Cookie" header; |
| - all those with a return code other than 200, 203, 204, 206, 300, 301, |
| 404, 405, 410, 414, 501, provided that the server has not set a |
| "Cache-control: public" header field; |
| - all those that result from a request using a method other than GET, HEAD, |
| OPTIONS, TRACE, provided that the server has not set a 'Cache-Control: |
| public' header field; |
| - those with a 'Pragma: no-cache' header |
| - those with a 'Cache-control: private' header |
| - those with a 'Cache-control: no-store' header |
| - those with a 'Cache-control: max-age=0' header |
| - those with a 'Cache-control: s-maxage=0' header |
| - those with a 'Cache-control: no-cache' header |
| - those with a 'Cache-control: no-cache="set-cookie"' header |
| - those with a 'Cache-control: no-cache="set-cookie,' header |
| (allowing other fields after set-cookie) |
| |
| If a response doesn't respect these requirements, then it will be blocked |
| just as if it was from an "http-response deny" rule, with an "HTTP 502 bad |
| gateway". The session state shows "PH--" meaning that the proxy blocked the |
| response during headers processing. Additionally, an alert will be sent in |
| the logs so that admins are informed that there's something to be fixed. |
| |
| Due to the high impact on the application, the application should be tested |
| in depth with the option enabled before going to production. It is also a |
| good practice to always activate it during tests, even if it is not used in |
| production, as it will report potentially dangerous application behaviors. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
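| |
| A minimal illustrative sketch, with placeholder names and addresses : |
| |
| Example : |
| # hypothetical backend: block cacheable responses carrying a cookie |
| backend app |
| mode http |
| option checkcache |
| server app1 192.168.1.10:80 check |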
| |
| |
| option clitcpka |
| no option clitcpka |
| Enable or disable the sending of TCP keepalive packets on the client side |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| When there is a firewall or any session-aware component between a client and |
| a server, and when the protocol involves very long sessions with long idle |
| periods (e.g. remote desktops), there is a risk that one of the intermediate |
| components decides to expire a session which has remained idle for too long. |
| |
| Enabling socket-level TCP keep-alives makes the system regularly send packets |
| to the other end of the connection, leaving it active. The delay between |
| keep-alive probes is controlled by the system only and depends both on the |
| operating system and its tuning parameters. |
| |
| It is important to understand that keep-alive packets are neither emitted nor |
| received at the application level. It is only the network stacks which see |
| them. For this reason, even if one side of the proxy already uses keep-alives |
| to maintain its connection alive, those keep-alive packets will not be |
| forwarded to the other side of the proxy. |
| |
| Please note that this has nothing to do with HTTP keep-alive. |
| |
| Using option "clitcpka" enables the emission of TCP keep-alive probes on the |
| client side of a connection, which should help when session expirations are |
| noticed between HAProxy and a client. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
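| |
| For illustration, a hypothetical long-lived TCP service (e.g. remote |
| desktop), with placeholder names and addresses : |
| |
| Example : |
| # hypothetical listener: keep idle sessions alive across stateful devices |
| listen rdp |
| mode tcp |
| bind :3389 |
| option clitcpka |
| option srvtcpka |
| timeout tunnel 1h |
| server rdp1 192.168.1.30:3389 check |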
| |
| See also : "option srvtcpka", "option tcpka" |
| |
| |
| option contstats |
| Enable continuous traffic statistics updates |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| By default, counters used for statistics calculation are incremented |
| only when a session finishes. It works quite well when serving small |
| objects, but with big ones (for example large images or archives) or |
| with A/V streaming, a graph generated from HAProxy counters looks like |
| a hedgehog. With this option enabled, counters get incremented frequently |
| along the session, typically every 5 seconds, which is often enough to |
| produce clean graphs. Recounting touches a hotpath directly so it is not |
| enabled by default, as it can cause a lot of wakeups for very large session |
| counts and cause a small performance drop. |
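| |
| A minimal illustrative sketch with placeholder names, for a proxy carrying |
| long transfers or streams : |
| |
| Example : |
| # hypothetical frontend: smoother statistics for long downloads |
| frontend downloads |
| bind :80 |
| mode http |
| option contstats |
| default_backend static |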
| |
| option disable-h2-upgrade |
| no option disable-h2-upgrade |
| Enable or disable the implicit HTTP/2 upgrade from an HTTP/1.x client |
| connection. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| By default, HAProxy is able to implicitly upgrade an HTTP/1.x client |
| connection to an HTTP/2 connection if the first request it receives from a |
| given HTTP connection matches the HTTP/2 connection preface (i.e. the string |
| "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"). This way, it is possible to support |
| HTTP/1.x and HTTP/2 clients on non-SSL connections. This option must be |
| used to disable the implicit upgrade. Note that this implicit upgrade is |
| only supported for HTTP proxies, and thus so is this option. Note also that |
| it is possible to force HTTP/2 on clear connections by specifying "proto h2" |
| on the bind line. Finally, this option is applied on all bind lines. To |
| disable implicit HTTP/2 upgrades for a specific bind line, it is possible to |
| use "proto h1". |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
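| |
| An illustrative sketch with placeholder names, refusing implicit upgrades |
| on a whole frontend : |
| |
| Example : |
| # hypothetical frontend: clients stay in HTTP/1.x on clear listeners |
| frontend clear_http |
| bind :8080 |
| mode http |
| option disable-h2-upgrade |
| default_backend app |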
| |
| option dontlog-normal |
| no option dontlog-normal |
| Enable or disable logging of normal, successful connections |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| There are large sites dealing with several thousand connections per second |
| and for which logging is a major pain. Some of them are even forced to turn |
| logs off and cannot debug production issues. Setting this option ensures that |
| normal connections, those which experience no error, no timeout, no retry nor |
| redispatch, will not be logged. This leaves disk space for anomalies. In HTTP |
| mode, the response status code is checked and return codes 5xx will still be |
| logged. |
| |
| It is strongly discouraged to use this option as most of the time, the key to |
| complex issues is in the normal logs which will not be logged here. If you |
| need to separate logs, see the "log-separate-errors" option instead. |
| |
| See also : "log", "dontlognull", "log-separate-errors" and section 8 about |
| logging. |
| |
| |
| option dontlognull |
| no option dontlognull |
| Enable or disable logging of null connections |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| In certain environments, there are components which will regularly connect to |
| various systems to ensure that they are still alive. It can be the case from |
| another load balancer as well as from monitoring systems. By default, even a |
| simple port probe or scan will produce a log. If those connections pollute |
| the logs too much, it is possible to enable option "dontlognull" to indicate |
| that a connection on which no data has been transferred will not be logged, |
| which typically corresponds to those probes. Note that errors will still be |
| returned to the client and accounted for in the stats. If this is not what is |
| desired, option http-ignore-probes can be used instead. |
| |
| It is generally recommended not to use this option in uncontrolled |
| environments (e.g. internet), otherwise scans and other malicious activities |
| would not be logged. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
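| |
| For illustration, a hypothetical internal frontend regularly probed by |
| another load balancer (names and addresses are placeholders) : |
| |
| Example : |
| # hypothetical frontend: do not log empty probe connections |
| frontend internal_www |
| bind :80 |
| mode http |
| option httplog |
| option dontlognull |
| default_backend app |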
| |
| See also : "log", "http-ignore-probes", "monitor-uri", and |
| section 8 about logging. |
| |
| option forwarded [ proto ] |
| [ host | host-expr <host_expr> ] |
| [ by | by-expr <by_expr> ] [ by_port | by_port-expr <by_port_expr>] |
| [ for | for-expr <for_expr> ] [ for_port | for_port-expr <for_port_expr>] |
| no option forwarded |
| Enable insertion of the rfc 7239 forwarded header in requests sent to servers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <host_expr> optional argument to specify a custom sample expression |
| whose result will be used as the 'host' parameter value |
| |
| <by_expr> optional argument to specify a custom sample expression |
| whose result will be used as the 'by' parameter nodename value |
| |
| <for_expr> optional argument to specify a custom sample expression |
| whose result will be used as the 'for' parameter nodename value |
| |
| <by_port_expr> optional argument to specify a custom sample expression |
| whose result will be used as the 'by' parameter nodeport value |
| |
| <for_port_expr> optional argument to specify a custom sample expression |
| whose result will be used as the 'for' parameter nodeport value |
| |
| |
| Since HAProxy works in reverse-proxy mode, servers lose some request |
| context (request origin: client ip address, protocol used...). |
| |
| A common way to address this limitation is to use the well known |
| x-forwarded-for and x-forwarded-* friends to expose some of this context to |
| the underlying servers/applications. |
| While this used to work and is widely deployed, it is not officially |
| supported by the IETF and can be the root of some interoperability as well |
| as security issues. |
| |
| To solve this, a new HTTP extension has been described by the IETF: |
| forwarded header (RFC7239). |
| More information here: https://www.rfc-editor.org/rfc/rfc7239.html |
| |
| The use of this single header allows conveying numerous details |
| within the same header, and most importantly, fixes the proxy chaining |
| issue (the RFC allows multiple chained proxies to append their own |
| values to an already existing header). |
| |
| This option may be specified in defaults, listen or backend section, but it |
| will be ignored for frontend sections. |
| |
| Setting option forwarded without arguments results in using default implicit |
| behavior. |
| Default behavior enables proto parameter and injects original client ip. |
| |
| The equivalent explicit/manual configuration would be: |
| option forwarded proto for |
| |
| The keyword 'by' is used to enable the 'by' parameter ("nodename") in the |
| forwarded header. It allows embedding request proxy information. |
| The 'by' value will be set to the proxy ip (destination address). |
| If not available (i.e. UNIX listener), 'by' will be set to |
| "unknown". |
| |
| The keyword 'by-expr' is used to enable the 'by' parameter ("nodename") in |
| the forwarded header. It allows embedding request proxy information. |
| The 'by' value will be set to the result of the sample expression |
| <by_expr>, if valid, otherwise it will be set to "unknown". |
| |
| The keyword 'for' is used to enable the 'for' parameter ("nodename") in the |
| forwarded header. It allows embedding request client information. |
| The 'for' value will be set to the client ip (source address). |
| If not available (i.e. UNIX listener), 'for' will be set to |
| "unknown". |
| |
| The keyword 'for-expr' is used to enable the 'for' parameter ("nodename") in |
| the forwarded header. It allows embedding request client information. |
| The 'for' value will be set to the result of the sample expression |
| <for_expr>, if valid, otherwise it will be set to "unknown". |
| |
| The keyword 'by_port' is used to provide "nodeport" info to |
| 'by' parameter. 'by_port' requires 'by' or 'by-expr' to be set or |
| it will be ignored. |
| "nodeport" will be set to proxy (destination) port if available, |
| otherwise it will be ignored. |
| |
| The keyword 'by_port-expr' is used to provide "nodeport" info to |
| 'by' parameter. 'by_port-expr' requires 'by' or 'by-expr' to be set or |
| it will be ignored. |
| "nodeport" will be set to the result of the sample expression |
| <by_port_expr>, if valid, otherwise it will be ignored. |
| |
| The keyword 'for_port' is used to provide "nodeport" info to |
| 'for' parameter. 'for_port' requires 'for' or 'for-expr' to be set or |
| it will be ignored. |
| "nodeport" will be set to client (source) port if available, |
| otherwise it will be ignored. |
| |
| The keyword 'for_port-expr' is used to provide "nodeport" info to |
| 'for' parameter. 'for_port-expr' requires 'for' or 'for-expr' to be set or |
| it will be ignored. |
| "nodeport" will be set to the result of the sample expression |
| <for_port_expr>, if valid, otherwise it will be ignored. |
| |
| Examples : |
| # Those servers want the ip address and protocol of the client request |
| # Resulting header would look like this: |
| # forwarded: proto=http;for=127.0.0.1 |
| backend www_default |
| mode http |
| option forwarded |
| #equivalent to: option forwarded proto for |
| |
| # Those servers want the requested host and hashed client ip address |
| # as well as client source port (you should use seed for xxh32 if ensuring |
| # ip privacy is a concern) |
| # Resulting header would look like this: |
| # forwarded: host="haproxy.org";for="_000000007F2F367E:60138" |
| backend www_host |
| mode http |
| option forwarded host for-expr src,xxh32,hex for_port |
| |
| # Those servers want custom data in host, for and by parameters |
| # Resulting header would look like this: |
| # forwarded: host="host.com";by=_haproxy;for="[::1]:10" |
| backend www_custom |
| mode http |
| option forwarded host-expr str(host.com) by-expr str(_haproxy) for for_port-expr int(10) |
| |
| # Those servers want random 'for' obfuscated identifiers for request |
| # tracing purposes while protecting sensitive IP information |
| # Resulting header would look like this: |
| # forwarded: for=_000000002B1F4D63 |
| backend www_for_hide |
| mode http |
| option forwarded for-expr rand,hex |
| |
| See also : "option forwardfor", "option originalto" |
| |
| option forwardfor [ except <network> ] [ header <name> ] [ if-none ] |
| Enable insertion of the X-Forwarded-For header to requests sent to servers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <network> is an optional argument used to disable this option for sources |
| matching <network> |
| <name> an optional argument to specify a different "X-Forwarded-For" |
| header name. |
| |
| Since HAProxy works in reverse-proxy mode, the servers see its IP address as |
| their client address. This is sometimes annoying when the client's IP address |
| is expected in server logs. To solve this problem, the well-known HTTP header |
| "X-Forwarded-For" may be added by HAProxy to all requests sent to the server. |
| This header contains a value representing the client's IP address. Since this |
| header is always appended at the end of the existing header list, the server |
| must be configured to always use the last occurrence of this header only. See |
| the server's manual to find how to enable use of this standard header. Note |
| that only the last occurrence of the header must be used, since it is really |
| possible that the client has already brought one. |
| |
| The keyword "header" may be used to supply a different header name to replace |
| the default "X-Forwarded-For". This can be useful where you might already |
| have a "X-Forwarded-For" header from a different application (e.g. stunnel), |
| and you need to preserve it. It is also useful if your backend server does |
| not use the "X-Forwarded-For" header and requires a different one (e.g. Zeus |
| Web Servers require "X-Cluster-Client-IP"). |
| |
| Sometimes, the same HAProxy instance may be shared between a direct client |
| access and a reverse-proxy access (for instance when an SSL reverse-proxy is |
| used to decrypt HTTPS traffic). It is possible to disable the addition of the |
| header for a known source address or network by adding the "except" keyword |
| followed by the network address. In this case, any source IP matching the |
| network will not cause an addition of this header. Most common uses are with |
| private networks or 127.0.0.1. IPv4 and IPv6 are both supported. |
| |
| Alternatively, the keyword "if-none" states that the header will only be |
| added if it is not present. This should only be used in perfectly trusted |
| environment, as this might cause a security issue if headers reaching HAProxy |
| are under the control of the end-user. |
| |
| This option may be specified either in the frontend or in the backend. If at |
| least one of them uses it, the header will be added. Note that the backend's |
| setting of the header subargument takes precedence over the frontend's if |
| both are defined. In the case of the "if-none" argument, if at least one of |
| the frontend or the backend does not specify it, it wants the addition to be |
| mandatory, so it wins. |
| |
| Example : |
| # Public HTTP address also used by stunnel on the same machine |
| frontend www |
| mode http |
| option forwardfor except 127.0.0.1 # stunnel already adds the header |
| |
| # Those servers want the IP Address in X-Client |
| backend www |
| mode http |
| option forwardfor header X-Client |
| |
| See also : "option httpclose", "option http-server-close", |
| "option http-keep-alive" |
| |
| |
| option h1-case-adjust-bogus-client |
| no option h1-case-adjust-bogus-client |
| Enable or disable the case adjustment of HTTP/1 headers sent to bogus clients |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| There is no standard case for header names because, as stated in RFC7230, |
| they are case-insensitive. So applications must handle them in a case- |
| insensitive manner. But some bogus applications violate the standards and |
| erroneously rely on the cases most commonly used by browsers. This problem |
| becomes critical with HTTP/2 because all header names must be exchanged in |
| lower case, and HAProxy follows the same convention. All header names are |
| sent in lower case to clients and servers, regardless of the HTTP version. |
| |
| When HAProxy receives an HTTP/1 response, its header names are converted to |
| lower case and manipulated and sent this way to the clients. If a client is |
| known to violate the HTTP standards and to fail to process a response coming |
| from HAProxy, it is possible to transform the lower case header names to a |
| different format when the response is formatted and sent to the client, by |
| enabling this option and specifying the list of headers to be reformatted |
| using the global directives "h1-case-adjust" or "h1-case-adjust-file". This |
| must only be a temporary workaround for the time it takes the client to be |
| fixed, because clients which require such workarounds might be vulnerable to |
| content smuggling attacks and must absolutely be fixed. |
| |
| Please note that this option will not affect standards-compliant clients. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also: "option h1-case-adjust-bogus-server", "h1-case-adjust", |
| "h1-case-adjust-file". |
| |
| |
| option h1-case-adjust-bogus-server |
| no option h1-case-adjust-bogus-server |
| Enable or disable the case adjustment of HTTP/1 headers sent to bogus servers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| There is no standard case for header names because, as stated in RFC7230, |
| they are case-insensitive. So applications must handle them in a case- |
| insensitive manner. But some bogus applications violate the standards and |
| erroneously rely on the cases most commonly used by browsers. This problem |
| becomes critical with HTTP/2 because all header names must be exchanged in |
| lower case, and HAProxy follows the same convention. All header names are |
| sent in lower case to clients and servers, regardless of the HTTP version. |
| |
| When HAProxy receives an HTTP/1 request, its header names are converted to |
| lower case and manipulated and sent this way to the servers. If a server is |
| known to violate the HTTP standards and to fail to process a request coming |
| from HAProxy, it is possible to transform the lower case header names to a |
| different format when the request is formatted and sent to the server, by |
| enabling this option and specifying the list of headers to be reformatted |
| using the global directives "h1-case-adjust" or "h1-case-adjust-file". This |
| must only be a temporary workaround for the time it takes the server to be |
| fixed, because servers which require such workarounds might be vulnerable to |
| content smuggling attacks and must absolutely be fixed. |
| |
| Please note that this option will not affect standards-compliant servers. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
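| |
| An illustrative sketch with placeholder names, restoring a specific header |
| case for a legacy server (the "h1-case-adjust" mapping is only an example) : |
| |
| Example : |
| global |
| # hypothetical mapping: send "Content-Length" instead of "content-length" |
| h1-case-adjust content-length Content-Length |
| |
| backend legacy_app |
| mode http |
| option h1-case-adjust-bogus-server |
| server app1 192.168.1.40:80 check |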
| |
| See also: "option h1-case-adjust-bogus-client", "h1-case-adjust", |
| "h1-case-adjust-file". |
| |
| |
| option http-buffer-request |
| no option http-buffer-request |
| Enable or disable waiting for whole HTTP request body before proceeding |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| It is sometimes desirable to wait for the body of an HTTP request before |
| taking a decision. This is what is being done by "balance url_param" for |
| example. The first use case is to buffer requests from slow clients before |
| connecting to the server. Another use case consists in taking the routing |
| decision based on the request body's contents. This option placed in a |
| frontend or backend forces the HTTP processing to wait until either the whole |
| body is received or the request buffer is full. It can have undesired side |
| effects with some applications abusing HTTP by expecting unbuffered |
| transmissions between the frontend and the backend, so this should definitely |
| not be used by default. |
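| |
| For illustration, a hypothetical backend routing on a POST parameter, which |
| requires the body to be buffered first (names and addresses are |
| placeholders) : |
| |
| Example : |
| # hypothetical backend: wait for the body, then balance on "userid" |
| backend upload_api |
| mode http |
| option http-buffer-request |
| balance url_param userid check_post |
| server s1 192.168.1.11:80 check |
| server s2 192.168.1.12:80 check |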
| |
| See also : "option http-no-delay", "timeout http-request", |
| "http-request wait-for-body" |
| |
| |
| option http-ignore-probes |
| no option http-ignore-probes |
| Enable or disable logging of null connections and request timeouts |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| Recently some browsers started to implement a "pre-connect" feature |
| consisting in speculatively connecting to some recently visited web sites |
| just in case the user would like to visit them. This results in many |
| connections being established to web sites, which end up in 408 Request |
| Timeout if the timeout strikes first, or 400 Bad Request when the browser |
| decides to close them first. These ones pollute the log and feed the error |
| counters. There was already "option dontlognull" but it's insufficient in |
| this case. Instead, this option does the following things : |
| - prevent any 400/408 message from being sent to the client if nothing |
| was received over a connection before it was closed; |
| - prevent any log from being emitted in this situation; |
| - prevent any error counter from being incremented |
| |
| That way the empty connection is silently ignored. Note that it is better |
| not to use this unless it is clear that it is needed, because it will hide |
| real problems. The most common reason for not receiving a request and seeing |
| a 408 is due to an MTU inconsistency between the client and an intermediary |
| element such as a VPN, which blocks too large packets. These issues are |
| generally seen with POST requests as well as GET with large cookies. The logs |
| are often the only way to detect them. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "log", "dontlognull", "errorfile", and section 8 about logging. |
| |
| |
| option http-keep-alive |
| no option http-keep-alive |
| Enable or disable HTTP keep-alive from client to server for HTTP/1.x |
| connections |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| By default HAProxy operates in keep-alive mode with regards to persistent |
| HTTP/1.x connections: for each connection it processes each request and |
| response, and leaves the connection idle on both sides. This mode may be |
| changed by several options such as "option http-server-close" or "option |
| httpclose". This option allows to set back the keep-alive mode, which can be |
| useful when another mode was used in a defaults section. |
| |
| Setting "option http-keep-alive" enables HTTP keep-alive mode on the client- |
| and server- sides. This provides the lowest latency on the client side (slow |
| network) and the fastest session reuse on the server side at the expense |
| of maintaining idle connections to the servers. In general, it is possible |
| with this option to achieve approximately twice the request rate that the |
| "http-server-close" option achieves on small objects. There are mainly two |
| situations where this option may be useful : |
| |
| - when the server is non-HTTP compliant and authenticates the connection |
| instead of requests (e.g. NTLM authentication) |
| |
| - when the cost of establishing the connection to the server is significant |
| compared to the cost of retrieving the associated object from the server. |
| |
| This last case can happen when the server is a fast static server or cache. |
| |
| At the moment, logs will not indicate whether requests came from the same |
| session or not. The accept date reported in the logs corresponds to the end |
| of the previous request, and the request time corresponds to the time spent |
| waiting for a new request. The keep-alive request time is still bound to the |
| timeout defined by "timeout http-keep-alive" or "timeout http-request" if |
| not set. |
| |
| This option disables and replaces any previous "option httpclose" or "option |
| http-server-close". |
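| |
| An illustrative sketch with placeholder names, restoring keep-alive for one |
| backend while the defaults section uses "option httpclose" : |
| |
| Example : |
| defaults |
| mode http |
| option httpclose |
| |
| # hypothetical backend authenticating the connection (e.g. NTLM) |
| backend ntlm_app |
| option http-keep-alive |
| option prefer-last-server |
| server app1 192.168.1.50:80 check |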
| |
| See also : "option httpclose", "option http-server-close", |
| "option prefer-last-server" and "option http-pretend-keepalive". |
| |
| |
| option http-no-delay |
| no option http-no-delay |
| Instruct the system to favor low interactive delays over performance in HTTP |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| In HTTP, each payload is unidirectional and has no notion of interactivity. |
| Any agent is expected to queue data somewhat for a reasonably low delay. |
| There are some very rare server-to-server applications that abuse the HTTP |
| protocol and expect the payload phase to be highly interactive, with many |
| interleaved data chunks in both directions within a single request. This is |
| absolutely not supported by the HTTP specification and will not work across |
| most proxies or servers. When such applications attempt to do this through |
| HAProxy, it works but they will experience high delays due to the network |
| optimizations which favor performance by instructing the system to wait for |
| enough data to be available in order to only send full packets. Typical |
| delays are around 200 ms per round trip. Note that this only happens with |
| abnormal uses. Normal uses such as CONNECT requests or WebSockets are not |
| affected. |
| |
| When "option http-no-delay" is present in either the frontend or the backend |
| used by a connection, all such optimizations will be disabled in order to |
| make the exchanges as fast as possible. Of course this offers no guarantee on |
| the functionality, as it may break at any other place. But if it works via |
| HAProxy, it will work as fast as possible. This option should never be used |
| by default, and should never be used at all unless such a buggy application |
| is discovered. The impact of using this option is an increase of bandwidth |
| usage and CPU usage, which may significantly lower performance in high |
| latency environments. |
| |
| See also : "option http-buffer-request" |
| |
| |
| option http-pretend-keepalive |
| no option http-pretend-keepalive |
| Define whether HAProxy will announce keepalive for HTTP/1.x connection to the |
| server or not |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When running with "option http-server-close" or "option httpclose", HAProxy |
| adds a "Connection: close" header to the HTTP/1.x request forwarded to the |
| server. Unfortunately, when some servers see this header, they automatically |
| refrain from using the chunked encoding for responses of unknown length, |
| while this is totally unrelated. The effect is that a client or a cache could |
| receive an incomplete response without being aware of it, and consider the |
| response complete. |
| |
| By setting "option http-pretend-keepalive", HAProxy will make the server |
| believe it will keep the connection alive. The server will then not fall back |
| to the abnormal undesired behavior described above. When HAProxy gets the |
| whole response, it will close the connection with the server just as it |
| would do with the |
| "option httpclose". That way the client gets a normal response and the |
| connection is correctly closed on the server side. |
| |
| It is recommended not to enable this option by default, because most servers |
| will more efficiently close the connection themselves after the last packet, |
| and release their buffers slightly earlier. Also, the added packet on the |
| network could slightly reduce the overall peak performance. However it is |
| worth noting that when this option is enabled, HAProxy will have slightly |
| less work to do. So if HAProxy is the bottleneck on the whole architecture, |
| enabling this option might save a few CPU cycles. |
| |
| This option may be set in backend and listen sections. If it is used in a |
| frontend section, it will be ignored and a warning will be reported during |
| startup. It is a backend related option, so there is no real reason to set |
| it on a frontend. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
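| |
| A minimal illustrative sketch with placeholder names, combined with server |
| side connection closing : |
| |
| Example : |
| # hypothetical backend: close server connections without announcing it |
| backend app |
| mode http |
| option http-server-close |
| option http-pretend-keepalive |
| server app1 192.168.1.60:80 check |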
| |
| See also : "option httpclose", "option http-server-close", and |
| "option http-keep-alive" |
| |
| option http-restrict-req-hdr-names { preserve | delete | reject } |
| Set HAProxy policy about HTTP request header names containing characters |
| outside the "[a-zA-Z0-9-]" charset |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| preserve disable the filtering. It is the default mode for HTTP proxies |
| with no FastCGI application configured. |
| |
| delete remove request headers with a name containing a character |
| outside the "[a-zA-Z0-9-]" charset. It is the default mode for |
| HTTP backends with a configured FastCGI application. |
| |
| reject reject the request with a 403-Forbidden response if it contains a |
| header name with a character outside the "[a-zA-Z0-9-]" charset. |
| |
| This option may be used to restrict the request header names to alphanumeric |
| and hyphen characters ([A-Za-z0-9-]). This may be mandatory to interoperate |
| with non-HTTP compliant servers that fail to handle some characters in header |
| names. It may also be mandatory for FastCGI applications because all |
| non-alphanumeric characters in header names are replaced by an underscore |
| ('_'). Thus, it is easily possible to mix up header names and bypass some |
| rules. For instance, "X-Forwarded-For" and "X_Forwarded-For" headers are both |
| converted to "HTTP_X_FORWARDED_FOR" in FastCGI. |
| |
| Note this option is evaluated per proxy and after the http-request rules |
| evaluation. |
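| |
| An illustrative sketch with placeholder names, rejecting requests carrying |
| unusual header names before they reach the application : |
| |
| Example : |
| # hypothetical backend: header names outside [a-zA-Z0-9-] are refused |
| backend fcgi_app |
| mode http |
| option http-restrict-req-hdr-names reject |
| server php1 192.168.1.70:9000 check |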
| |
| option http-server-close |
| no option http-server-close |
| Enable or disable HTTP/1.x connection closing on the server side |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| By default HAProxy operates in keep-alive mode with regards to persistent |
| HTTP/1.x connections: for each connection it processes each request and |
| response, and leaves the connection idle on both sides. This mode may be |
| changed by several options such as "option http-server-close" or "option |
| httpclose". Setting "option http-server-close" enables HTTP connection-close |
| mode on the server side while keeping the ability to support HTTP keep-alive |
| and pipelining on the client side. This provides the lowest latency on the |
| client side (slow network) and the fastest session reuse on the server side |
| to save server resources, similarly to "option httpclose". It also permits |
| non-keepalive capable servers to be served in keep-alive mode to the clients |
| if they conform to the requirements of RFC7230. Please note that some servers |
| do not always conform to those requirements when they see "Connection: close" |
| in the request. The effect will be that keep-alive will never be used. A |
| workaround consists in enabling "option http-pretend-keepalive". |
| |
| At the moment, logs will not indicate whether requests came from the same |
| session or not. The accept date reported in the logs corresponds to the end |
| of the previous request, and the request time corresponds to the time spent |
| waiting for a new request. The keep-alive request time is still bound to the |
| timeout defined by "timeout http-keep-alive" or "timeout http-request" if |
| not set. |
| |
| This option may be set both in a frontend and in a backend. It is enabled if |
| at least one of the frontend or backend holding a connection has it enabled. |
| It disables and replaces any previous "option httpclose" or "option |
| http-keep-alive". Please check section 4 ("Proxies") to see how this option |
| combines with others when frontend and backend options differ. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option httpclose", "option http-pretend-keepalive" and |
| "option http-keep-alive". |
| |
| option http-use-proxy-header |
| no option http-use-proxy-header |
| Make use of non-standard Proxy-Connection header instead of Connection |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| While RFC7230 explicitly states that HTTP/1.1 agents must use the |
| Connection header to indicate their wish of persistent or non-persistent |
| connections, both browsers and proxies ignore this header for proxied |
| connections and make use of the undocumented, non-standard Proxy-Connection |
| header instead. The issue begins when trying to put a load balancer between |
| browsers and such proxies, because there will be a difference between what |
| HAProxy understands and what the client and the proxy agree on. |
| |
| By setting this option in a frontend, HAProxy can automatically switch to use |
| that non-standard header if it sees proxied requests. A proxied request is |
| defined here as one where the URI begins with neither a '/' nor a '*'. This |
| is incompatible with the HTTP tunnel mode. Note that this option can only be |
| specified in a frontend and will affect the request along its whole life. |
| |
| Also, when this option is set, a request which requires authentication will |
| automatically switch to use proxy authentication headers if it is itself a |
| proxied request. That makes it possible to check or enforce authentication in |
| front of an existing proxy. |
| |
| This option should normally never be used, except in front of a proxy. |
| |
| See also : "option httpclose", and "option http-server-close". |
| |
| option httpchk |
| option httpchk <uri> |
| option httpchk <method> <uri> |
| option httpchk <method> <uri> <version> |
| Enables HTTP protocol to check on the servers health |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <method> is the optional HTTP method used with the requests. When not set, |
| the "OPTIONS" method is used, as it generally requires low server |
| processing and is easy to filter out from the logs. Any method |
| may be used, though it is not recommended to invent non-standard |
| ones. |
| |
| <uri> is the URI referenced in the HTTP requests. It defaults to " / " |
| which is accessible by default on almost any server, but may be |
| changed to any other URI. Query strings are permitted. |
| |
| <version> is the optional HTTP version string. It defaults to "HTTP/1.0" |
| but some servers might behave incorrectly in HTTP 1.0, so turning |
| it to HTTP/1.1 may sometimes help. Note that the Host field is |
| mandatory in HTTP/1.1, use "http-check send" directive to add it. |
| |
| By default, server health checks only consist in trying to establish a TCP |
| connection. When "option httpchk" is specified, a complete HTTP request is |
| sent once the TCP connection is established, and responses 2xx and 3xx are |
| considered valid, while all other ones indicate a server failure, including |
| the lack of any response. |
| |
| Combined with "http-check" directives, it is possible to customize the |
| request sent during the HTTP health checks or the matching rules on the |
| response. It is also possible to configure a send/expect sequence, just like |
| with the directive "tcp-check" for TCP health checks. |
| |
| The server configuration is used by default to open connections to perform |
| HTTP health checks. But it is also possible to overwrite server parameters |
| using "http-check connect" rules. |
| |
| "httpchk" option does not necessarily require an HTTP backend, it also works |
| with plain TCP backends. This is particularly useful to check simple scripts |
| bound to some dedicated ports using the inetd daemon. However, it will always |
| internally rely on an HTX multiplexer. This means that the request |
| formatting and the response parsing will be strict. |
| |
| Examples : |
| # Relay HTTPS traffic to Apache instance and check service availability |
| # using HTTP request "OPTIONS * HTTP/1.1" on port 80. |
| backend https_relay |
| mode tcp |
| option httpchk OPTIONS * HTTP/1.1 |
| http-check send hdr Host www |
| server apache1 192.168.1.1:443 check port 80 |
| |
| See also : "option ssl-hello-chk", "option smtpchk", "option mysql-check", |
| "option pgsql-check", "http-check" and the "check", "port" and |
| "inter" server options. |
| |
| |
| option httpclose |
| no option httpclose |
| Enable or disable HTTP/1.x connection closing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| By default HAProxy operates in keep-alive mode with regards to persistent |
| HTTP/1.x connections: for each connection it processes each request and |
| response, and leaves the connection idle on both sides. This mode may be |
| changed by several options such as "option http-server-close" or "option |
| httpclose". |
| |
| If "option httpclose" is set, HAProxy will close the client or the server |
| connection, depending where the option is set. The frontend is considered for |
| client connections while the backend is considered for server ones. If the |
| option is set on a listener, it is applied both on client and server |
| connections. It will check if a "Connection: close" header is already set in |
| each direction, and will add one if missing. |
| |
| This option may also be combined with "option http-pretend-keepalive", which |
| will disable sending of the "Connection: close" request header, but will |
| still cause the connection to be closed once the whole response is received. |
| |
| It disables and replaces any previous "option http-server-close" or "option |
| http-keep-alive". |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option http-server-close". |
| |
| |
| option httplog [ clf ] |
| Enable logging of HTTP request, session state and timers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| clf if the "clf" argument is added, then the output format will be |
| the CLF format instead of HAProxy's default HTTP format. You can |
| use this when you need to feed HAProxy's logs through a specific |
| log analyzer which only supports the CLF format and which is not |
| extensible. |
| |
| By default, the log output format is very poor, as it only contains the |
| source and destination addresses, and the instance name. By specifying |
| "option httplog", each log line turns into a much richer format including, |
| but not limited to, the HTTP request, the connection timers, the session |
| status, the connections numbers, the captured headers and cookies, the |
| frontend, backend and server name, and of course the source address and |
| ports. |
| |
| Specifying only "option httplog" will automatically clear the 'clf' mode |
| if it was set by default. |
| |
| "option httplog" overrides any previous "log-format" directive. |
| |
| See also : section 8 about logging. |
| |
| option httpslog |
| Enable logging of HTTPS request, session state and timers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| |
| By default, the log output format is very poor, as it only contains the |
| source and destination addresses, and the instance name. By specifying |
| "option httpslog", each log line turns into a much richer format including, |
| but not limited to, the HTTP request, the connection timers, the session |
| status, the connections numbers, the captured headers and cookies, the |
| frontend, backend and server name, the SSL certificate verification and SSL |
| handshake statuses, and of course the source address and ports. |
| |
| "option httpslog" overrides any previous "log-format" directive. |
| |
| See also : section 8 about logging. |
| |
| |
| option independent-streams |
| no option independent-streams |
| Enable or disable independent timeout processing for both directions |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| By default, when data is sent over a socket, both the write timeout and the |
| read timeout for that socket are refreshed, because we consider that there is |
| activity on that socket, and we have no other means of guessing if we should |
| receive data or not. |
| |
| While this default behavior is desirable for almost all applications, there |
| exists a situation where it is desirable to disable it, and only refresh the |
| read timeout if there is incoming data. This happens on sessions with large |
| timeouts and low amounts of exchanged data such as telnet sessions. If the |
| server suddenly disappears, the output data accumulates in the system's |
| socket buffers, both timeouts are correctly refreshed, and there is no way |
| to know the server does not receive them, so we don't timeout. However, when |
| the underlying protocol always echoes sent data, it would be enough by itself |
| to detect the issue using the read timeout. Note that this problem does not |
| happen with more verbose protocols because data won't accumulate long in the |
| socket buffers. |
| |
| When this option is set on the frontend, it will disable read timeout updates |
| on data sent to the client. There is probably little use for this case. When |
| the option is set on the backend, it will disable read timeout updates on |
| data sent to the server. Doing so will typically break large HTTP posts from |
| slow lines, so use it with caution. |
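| |
| For illustration, a hypothetical telnet-like gateway where only incoming |
| data should refresh the read timeout (names and addresses are placeholders): |
| |
| Example : |
| # hypothetical listener: long idle sessions, very little data exchanged |
| listen telnet_gw |
| mode tcp |
| bind :2323 |
| option independent-streams |
| timeout client 1h |
| timeout server 1h |
| server t1 192.168.1.80:23 |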
| |
| See also : "timeout client", "timeout server" and "timeout tunnel" |
| |
| |
| option ldap-check |
| Use LDAPv3 health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| It is possible to test that the server correctly talks LDAPv3 instead of just |
| testing that it accepts the TCP connection. When this option is set, an |
| LDAPv3 anonymous simple bind message is sent to the server, and the response |
| is analyzed to find an LDAPv3 bind response message. |
| |
| The server is considered valid only when the LDAP response contains success |
| resultCode (http://tools.ietf.org/html/rfc4511#section-4.1.9). |
| |
| Logging of bind requests is server dependent; see your documentation for how |
| to configure it. |
| |
| Example : |
| option ldap-check |
| |
| See also : "option httpchk" |
| |
| |
| option external-check |
| Use external processes for server health checks |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| It is possible to test the health of a server using an external command. |
| This is achieved by running the executable set using "external-check |
| command". |
| |
| Requires the "external-check" global to be set. |
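| |
| A hypothetical sketch; the command path and names are placeholders : |
| |
| Example : |
| global |
| external-check |
| |
| backend app |
| option external-check |
| external-check command /usr/local/bin/check_app.sh |
| server app1 192.168.1.90:80 check |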
| |
| See also : "external-check", "external-check command", "external-check path" |
| |
| |
| option idle-close-on-response |
| no option idle-close-on-response |
| Avoid closing idle frontend connections if a soft stop is in progress |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| By default, idle connections will be closed during a soft stop. In some |
| environments, a client talking to the proxy may have prepared some idle |
| connections in order to send requests later. If there is no proper retry on |
| write errors, this can result in errors while haproxy is reloading. Even |
| though a proper implementation should retry on connection/write errors, this |
| option was introduced to support backwards compatibility with haproxy prior |
| to version 2.4. Indeed before v2.4, haproxy used to wait for a last request |
| and response to add a "connection: close" header before closing, thus |
| notifying the client that the connection would not be reusable. |
| |
| In a real life example, this behavior was seen in AWS using the ALB in front |
| of a haproxy. The end result was ALB sending 502 during haproxy reloads. |
| |
| Users are warned that using this option may increase the number of old |
| processes if connections remain idle for too long. Adjusting the client |
| timeouts and/or the "hard-stop-after" parameter accordingly might be |
| needed in case of frequent reloads. |
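| |
| An illustrative sketch with placeholder values, keeping idle client |
| connections open during soft stops while bounding the reload duration : |
| |
| Example : |
| global |
| hard-stop-after 1m |
| |
| defaults |
| mode http |
| option idle-close-on-response |
| timeout client 30s |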
| |
| See also: "timeout client", "timeout client-fin", "timeout http-request", |
| "hard-stop-after" |
| |
| |
| option log-health-checks |
| no option log-health-checks |
| Enable or disable logging of health checks status updates |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| By default, failed health checks are logged if the server is UP and |
| successful health checks are logged if the server is DOWN, so the amount of |
| additional information is limited. |
| |
| When this option is enabled, any change of the health check status or to |
| the server's health will be logged, so that it becomes possible to know |
| that a server was failing occasional checks before crashing, or exactly when |
| it failed to respond with a valid HTTP status, then when the port started to |
| reject connections, then when the server stopped responding at all. |
| |
| Note that status changes not caused by health checks (e.g. enable/disable on |
| the CLI) are intentionally not logged by this option. |
| |
| See also: "option httpchk", "option ldap-check", "option mysql-check", |
| "option pgsql-check", "option redis-check", "option smtpchk", |
| "option tcp-check", "log" and section 8 about logging. |
| |
| |
| option log-separate-errors |
| no option log-separate-errors |
| Change log level for non-completely successful connections |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| Sometimes looking for errors in logs is not easy. This option makes HAProxy |
| raise the level of logs containing potentially interesting information such |
| as errors, timeouts, retries, redispatches, or HTTP status codes 5xx. The |
| level changes from "info" to "err". This makes it possible to log them |
| separately to a different file with most syslog daemons. Be careful not to |
| remove them from the original file, otherwise you would lose ordering which |
| provides very important information. |
| |
| Using this option, large sites dealing with several thousand connections per |
| second may log normal traffic to a rotating buffer and only archive smaller |
| error logs. |
| |
| See also : "log", "dontlognull", "dontlog-normal" and section 8 about |
| logging. |
| |
| |
| option logasap |
| no option logasap |
| Enable or disable early logging. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| By default, logs are emitted when all the log format variables and sample |
| fetches used in the definition of the log-format string return a value, or |
| when the session is terminated. This allows the built-in log-format strings |
| to account for the transfer time, or the number of bytes in log messages. |
| |
| When handling long lived connections such as large file transfers or RDP, |
| it may take a while for the request or connection to appear in the logs. |
| Using "option logasap", the log message is created as soon as the server |
| connection is established in mode tcp, or as soon as the server sends the |
| complete headers in mode http. Missing information in the logs will be the |
| total number of bytes which will only indicate the amount of data transferred |
| before the message was created and the total time which will not take the |
| remainder of the connection life or transfer time into account. For the case |
| of HTTP, it is good practice to capture the Content-Length response header |
| so that the logs at least indicate how many bytes are expected to be |
| transferred. |
| |
| Examples : |
| listen http_proxy 0.0.0.0:80 |
| mode http |
| option httplog |
| option logasap |
| log 192.168.2.200 local3 |
| |
| >>> Feb 6 12:14:14 localhost \ |
| haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \ |
| static/srv1 9/10/7/14/+30 200 +243 - - ---- 3/1/1/1/0 1/0 \ |
| "GET /image.iso HTTP/1.0" |
| |
| See also : "option httplog", "capture response header", and section 8 about |
| logging. |
| |
| |
| option mysql-check [ user <username> [ { post-41 | pre-41 } ] ] |
| Use MySQL health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <username> This is the username which will be used when connecting to MySQL |
| server. |
| post-41 Send post v4.1 client compatible checks (the default) |
| pre-41 Send pre v4.1 client compatible checks |
| |
| If you specify a username, the check consists of sending two MySQL packets, |
| one Client Authentication packet, and one QUIT packet, to correctly close the |
| MySQL session. We then parse the MySQL Handshake Initialization packet and/or |
| Error packet. It is a basic but useful test which does not produce error nor |
| aborted connect on the server. However, it requires an unlocked authorised |
| user without a password. To create a basic limited user in MySQL with |
| optional resource limits: |
| |
| CREATE USER '<username>'@'<ip_of_haproxy|network_of_haproxy/netmask>' |
| /*!50701 WITH MAX_QUERIES_PER_HOUR 1 MAX_UPDATES_PER_HOUR 0 */ |
| /*M!100201 MAX_STATEMENT_TIME 0.0001 */; |
| |
| If you don't specify a username (this is deprecated and not recommended), the |
| check only consists in parsing the MySQL Handshake Initialization packet or |
| Error packet; nothing is sent in this mode. It was reported that this can |
| cause lockouts if the check is too frequent and/or if there is not enough |
| traffic. In this case, you need to check the MySQL "max_connect_errors" |
| value: if a connection is established successfully within fewer than MySQL |
| "max_connect_errors" attempts after a previous connection was interrupted, |
| the error count for the host is cleared to zero. If HAProxy's server gets |
| blocked, the "FLUSH HOSTS" statement is the only way to unblock it. |
| |
| Remember that this does not check database presence nor database consistency. |
| To do this, you can use an external check with xinetd for example. |
| |
| The check requires MySQL >= 3.22; for older versions, please use a TCP check. |
| |
| Most often, an incoming MySQL server needs to see the client's IP address for |
| various purposes, including IP privilege matching and connection logging. |
| When possible, it is often wise to masquerade the client's IP address when |
| connecting to the server using the "usesrc" argument of the "source" keyword, |
| which requires the transparent proxy feature to be compiled in, and the MySQL |
| server to route the client via the machine hosting HAProxy. |
| |
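| For illustration only, a minimal sketch follows; the "haproxy_check" user, |
| server names and addresses are placeholders to adapt to the actual setup : |
| |
| Example : |
| listen mysql |
| bind :3306 |
| mode tcp |
| option mysql-check user haproxy_check post-41 |
| balance roundrobin |
| server db1 10.0.0.11:3306 check |
| server db2 10.0.0.12:3306 check |
| |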
| See also: "option httpchk" |
| |
| |
| option nolinger |
| no option nolinger |
| Enable or disable immediate session resource cleaning after close |
| May be used in sections: defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| When clients or servers abort connections in a dirty way (e.g. they are |
| physically disconnected), the session timeout triggers and the session is |
| closed. But it will remain in FIN_WAIT1 state for some time in the system, |
| using some resources and possibly limiting the ability to establish new |
| connections. |
| |
| When this happens, it is possible to activate "option nolinger" which forces |
| the system to immediately remove any socket's pending data on close. Thus, |
| a TCP RST is emitted, any pending data are truncated, and the session is |
| instantly purged from the system's tables. The generally visible effect for |
| a client is that responses are truncated if the close happens with a last |
| block of data (e.g. on a redirect or error response). On the server side, it |
| may help release the source ports immediately when forwarding client aborts |
| in tunnels. In both cases, TCP resets are emitted and, given that the session |
| is instantly destroyed, there will be no retransmit. On a lossy network this |
| can increase problems, especially when there is a firewall on the lossy side, |
| because the firewall might see and process the reset (hence purge its |
| session) and block any further traffic for this session, including |
| retransmits from the other side. So if the other side doesn't receive it, it |
| will never receive any RST again, and the firewall might log many blocked |
| packets. |
| |
| For all these reasons, it is strongly recommended NOT to use this option, |
| unless absolutely needed as a last resort. In most situations, using the |
| "client-fin" or "server-fin" timeouts achieves similar results with a more |
| reliable behavior. On Linux it's also possible to use the "tcp-ut" bind or |
| server setting. |
| |
| This option may be used both on frontends and backends, depending on the side |
| where it is required. Use it on the frontend for clients, and on the backend |
| for servers. While this option is technically supported in "defaults" |
| sections, it must really not be used there as it risks accidentally |
| propagating to sections that must not use it and causing problems there. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
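| As an illustration of the recommended alternative mentioned above (names, |
| addresses and timeout values are placeholders), a sketch using the "fin" |
| timeouts instead of this option could look like : |
| |
| Example : |
| frontend fe_tunnel |
| bind :3389 |
| mode tcp |
| timeout client-fin 10s |
| default_backend be_tunnel |
| |
| backend be_tunnel |
| mode tcp |
| timeout server-fin 10s |
| server ts1 10.0.0.21:3389 check |
| |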
| See also: "timeout client-fin", "timeout server-fin", "tcp-ut" bind or server |
| keywords. |
| |
| option originalto [ except <network> ] [ header <name> ] |
| Enable insertion of the X-Original-To header to requests sent to servers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <network> is an optional argument used to disable this option for sources |
| matching <network> |
| <name> an optional argument to specify a different "X-Original-To" |
| header name. |
| |
| Since HAProxy can work in transparent mode, every request from a client can |
| be redirected to the proxy and HAProxy itself can proxy every request to a |
| complex SQUID environment and the destination host from SO_ORIGINAL_DST will |
| be lost. This is annoying when you want access rules based on destination IP |
| addresses. To solve this problem, a new HTTP header "X-Original-To" may be |
| added by HAProxy to all requests sent to the server. This header contains a |
| value representing the original destination IP address. Since this header is |
| always appended at the end of the existing header list, the server must be |
| configured to always use only the last occurrence of this header, as it is |
| really possible that the client has already brought one. |
| |
| The keyword "header" may be used to supply a different header name to replace |
| the default "X-Original-To". This can be useful where you already have an |
| "X-Original-To" header from a different application and need to preserve it, |
| or if your backend server doesn't use the "X-Original-To" header and requires |
| a different one. |
| |
| Sometimes, the same HAProxy instance may be shared between a direct client |
| access and a reverse-proxy access (for instance when an SSL reverse-proxy is |
| used to decrypt HTTPS traffic). It is possible to disable the addition of the |
| header for a known destination address or network by adding the "except" |
| keyword followed by the network address. In this case, any destination IP |
| matching the network will not cause an addition of this header. Most common |
| uses are with private networks or 127.0.0.1. IPv4 and IPv6 are both |
| supported. |
| |
| This option may be specified either in the frontend or in the backend. If at |
| least one of them uses it, the header will be added. Note that the backend's |
| setting of the header subargument takes precedence over the frontend's if |
| both are defined. |
| |
| Examples : |
| # Original Destination address |
| frontend www |
| mode http |
| option originalto except 127.0.0.1 |
| |
| # Those servers want the IP Address in X-Client-Dst |
| backend www |
| mode http |
| option originalto header X-Client-Dst |
| |
| See also : "option httpclose", "option http-server-close". |
| |
| |
| option persist |
| no option persist |
| Enable or disable forced persistence on down servers |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When an HTTP request reaches a backend with a cookie which references a dead |
| server, by default it is redispatched to another server. It is possible to |
| force the request to be sent to the dead server first using "option persist" |
| if absolutely needed. A common use case is when servers are under extreme |
| load and spend their time flapping. In this case, the users would still be |
| directed to the server they opened the session on, in the hope they would be |
| correctly served. It is recommended to use "option redispatch" in conjunction |
| with this option so that in the event it would not be possible to connect to |
| the server at all (server definitely dead), the client would finally be |
| redirected to another valid server. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
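| For illustration only, a sketch combining forced persistence with redispatch |
| (cookie name, server names and addresses are placeholders) : |
| |
| Example : |
| backend app |
| mode http |
| cookie SRV insert indirect nocache |
| option persist |
| option redispatch |
| server app1 10.0.0.31:80 cookie a check |
| server app2 10.0.0.32:80 cookie b check |
| |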
| See also : "option redispatch", "retries", "force-persist" |
| |
| |
| option pgsql-check user <username> |
| Use PostgreSQL health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <username> This is the username which will be used when connecting to |
| PostgreSQL server. |
| |
| The check sends a PostgreSQL StartupMessage and waits for either an |
| Authentication request or an ErrorResponse message. It is a basic but useful |
| test which does not produce an error nor an aborted connect on the server. |
| This check works in the same way as the "mysql-check" one. |
| |
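| For illustration only, a minimal sketch (the "haproxy_check" user, server |
| name and address are placeholders) : |
| |
| Example : |
| backend pgsql |
| mode tcp |
| option pgsql-check user haproxy_check |
| server pg1 10.0.0.41:5432 check |
| |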
| See also: "option httpchk" |
| |
| |
| option prefer-last-server |
| no option prefer-last-server |
| Allow multiple load balanced requests to remain on the same server |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When the load balancing algorithm in use is not deterministic, and a previous |
| request was sent to a server to which HAProxy still holds a connection, it is |
| sometimes desirable that subsequent requests on a same session go to the same |
| server as much as possible. Note that this is different from persistence, as |
| we only indicate a preference which HAProxy tries to apply without any form |
| of guarantee. The real use is for keep-alive connections sent to servers. When |
| this option is used, HAProxy will try to reuse the same connection that is |
| attached to the server instead of rebalancing to another server, causing a |
| close of the connection. This can make sense for static file servers. It does |
| not make much sense to use this in combination with hashing algorithms. Note, |
| HAProxy already automatically tries to stick to a server which sends a 401 or |
| to a proxy which sends a 407 (authentication required), when the load |
| balancing algorithm is not deterministic. This is mandatory for use with the |
| broken NTLM authentication challenge, and significantly helps in |
| troubleshooting some faulty applications. Option prefer-last-server might be |
| desirable in these environments as well, to avoid redistributing the traffic |
| after every other response. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
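| For illustration only, a sketch of a static file backend reusing server |
| connections for keep-alive traffic (names and addresses are placeholders) : |
| |
| Example : |
| backend static |
| mode http |
| balance roundrobin |
| option http-keep-alive |
| option prefer-last-server |
| server stat1 10.0.0.51:80 check |
| server stat2 10.0.0.52:80 check |
| |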
| See also: "option http-keep-alive" |
| |
| |
| option redispatch |
| option redispatch <interval> |
| no option redispatch |
| Enable or disable session redistribution in case of connection failure |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <interval> The optional integer value that controls how often redispatches |
| occur when retrying connections. A positive value P indicates a |
| redispatch is desired on every Pth retry, and a negative value |
| N indicates a redispatch is desired on the Nth retry prior to the |
| last retry. For example, the default of -1 preserves the |
| historical behavior of redispatching on the last retry, a |
| positive value of 1 would indicate a redispatch on every retry, |
| and a positive value of 3 would indicate a redispatch on every |
| third retry. You can disable redispatches with a value of 0. |
| |
| |
| In HTTP mode, if a server designated by a cookie is down, clients may |
| indefinitely stick to it because they cannot flush the cookie, so they will |
| not be able to access the service anymore. |
| |
| Specifying "option redispatch" will allow the proxy to break cookie- or |
| consistent-hash-based persistence and redistribute such clients to a working |
| server. |
| |
| Servers are selected from a subset of the list of available servers. Active |
| servers that are not down or in maintenance (i.e. whose health is not checked |
| or that have been checked as "up") are selected in the following order: |
| |
| 1. Any active, non-backup server, if any, or, |
| |
| 2. If the "allbackups" option is not set, the first backup server in the |
| list, or |
| |
| 3. If the "allbackups" option is set, any backup server. |
| |
| When a retry occurs, HAProxy tries to select a server other than the last |
| one. The new server is selected from the current list of servers. |
| |
| However, if the list is updated between retries (e.g. if numerous retries |
| occur and last longer than the time needed to check that a server is down, |
| remove it from the list and fall back on the list of backup servers), |
| connections may end up being redirected to a backup server. |
| |
| This option also allows connections to be retried on another server in case |
| of multiple connection failures. Of course, it requires having "retries" set |
| to a nonzero value. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
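| For illustration only, a sketch redispatching on every retry (cookie name, |
| server names and addresses are placeholders) : |
| |
| Example : |
| backend app |
| mode http |
| cookie SRV insert indirect |
| retries 3 |
| option redispatch 1 |
| server app1 10.0.0.61:80 cookie a check |
| server app2 10.0.0.62:80 cookie b check |
| |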
| See also : "retries", "force-persist" |
| |
| |
| option redis-check |
| Use redis health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| It is possible to test that the server correctly talks REDIS protocol instead |
| of just testing that it accepts the TCP connection. When this option is set, |
| a PING redis command is sent to the server, and the response is analyzed to |
| find the "+PONG" response message. |
| |
| Example : |
| option redis-check |
| |
| See also : "option httpchk", "option tcp-check", "tcp-check expect" |
| |
| |
| option smtpchk |
| option smtpchk <hello> <domain> |
| Use SMTP health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <hello> is an optional argument. It is the "hello" command to use. It can |
| be either "HELO" (for SMTP) or "EHLO" (for ESMTP). All other |
| values will be turned into the default command ("HELO"). |
| |
| <domain> is the domain name to present to the server. It may only be |
| specified (and is mandatory) if the hello command has been |
| specified. By default, "localhost" is used. |
| |
| When "option smtpchk" is set, the health checks will consist of TCP |
| connections followed by an SMTP command. By default, this command is |
| "HELO localhost". The server's return code is analyzed and only return codes |
| starting with a "2" will be considered as valid. All other responses, |
| including a lack of response, will constitute an error and will indicate a |
| dead server. |
| |
| This test is meant to be used with SMTP servers or relays. Depending on the |
| request, it is possible that some servers do not log each connection attempt, |
| so you may want to experiment to improve the behavior. Using telnet on port |
| 25 is often easier than adjusting the configuration. |
| |
| Most often, an incoming SMTP server needs to see the client's IP address for |
| various purposes, including spam filtering, anti-spoofing and logging. When |
| possible, it is often wise to masquerade the client's IP address when |
| connecting to the server using the "usesrc" argument of the "source" keyword, |
| which requires the transparent proxy feature to be compiled in. |
| |
| Example : |
| option smtpchk HELO mydomain.org |
| |
| See also : "option httpchk", "source" |
| |
| |
| option socket-stats |
| no option socket-stats |
| Enable or disable collecting & providing separate statistics for each socket. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
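| As an illustration only (addresses and names are placeholders), the sketch |
| below enables separate statistics for each "bind" line of a frontend : |
| |
| Example : |
| frontend www |
| bind 192.168.0.1:80 name internal |
| bind 10.0.0.1:80 name external |
| option socket-stats |
| |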
| |
| option splice-auto |
| no option splice-auto |
| Enable or disable automatic kernel acceleration on sockets in both directions |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| When this option is enabled either on a frontend or on a backend, HAProxy |
| will automatically evaluate the opportunity to use kernel tcp splicing to |
| forward data between the client and the server, in either direction. HAProxy |
| uses heuristics to estimate if kernel splicing might improve performance or |
| not. Both directions are handled independently. Note that the heuristics used |
| are not very aggressive, in order to limit excessive use of splicing. This |
| option requires splicing to be enabled at compile time, and may be globally |
| disabled with the global option "nosplice". Since splice uses pipes, using it |
| requires that there are enough spare pipes. |
| |
| Important note: kernel-based TCP splicing is a Linux-specific feature which |
| first appeared in kernel 2.6.25. It offers kernel-based acceleration to |
| transfer data between sockets without copying these data to user-space, thus |
| providing noticeable performance gains and CPU cycles savings. Since many |
| early implementations are buggy, corrupt data and/or are inefficient, this |
| feature is not enabled by default, and it should be used with extreme care. |
| While it is not possible to detect the correctness of an implementation, |
| 2.6.29 is the first version offering a properly working implementation. In |
| case of doubt, splicing may be globally disabled using the global "nosplice" |
| keyword. |
| |
| Example : |
| option splice-auto |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option splice-request", "option splice-response", and global |
| options "nosplice" and "maxpipes" |
| |
| |
| option splice-request |
| no option splice-request |
| Enable or disable automatic kernel acceleration on sockets for requests |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| When this option is enabled either on a frontend or on a backend, HAProxy |
| will use kernel tcp splicing whenever possible to forward data going from |
| the client to the server. It might still use the recv/send scheme if there |
| are no spare pipes left. This option requires splicing to be enabled at |
| compile time, and may be globally disabled with the global option "nosplice". |
| Since splice uses pipes, using it requires that there are enough spare pipes. |
| |
| Important note: see "option splice-auto" for usage limitations. |
| |
| Example : |
| option splice-request |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option splice-auto", "option splice-response", and global options |
| "nosplice" and "maxpipes" |
| |
| |
| option splice-response |
| no option splice-response |
| Enable or disable automatic kernel acceleration on sockets for responses |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| When this option is enabled either on a frontend or on a backend, HAProxy |
| will use kernel tcp splicing whenever possible to forward data going from |
| the server to the client. It might still use the recv/send scheme if there |
| are no spare pipes left. This option requires splicing to be enabled at |
| compile time, and may be globally disabled with the global option "nosplice". |
| Since splice uses pipes, using it requires that there are enough spare pipes. |
| |
| Important note: see "option splice-auto" for usage limitations. |
| |
| Example : |
| option splice-response |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option splice-auto", "option splice-request", and global options |
| "nosplice" and "maxpipes" |
| |
| |
| option spop-check |
| Use SPOP health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| It is possible to test that the server correctly talks SPOP protocol instead |
| of just testing that it accepts the TCP connection. When this option is set, |
| a HELLO handshake is performed between HAProxy and the server, and the |
| response is analyzed to check no error is reported. |
| |
| Example : |
| option spop-check |
| |
| See also : "option httpchk" |
| |
| |
| option srvtcpka |
| no option srvtcpka |
| Enable or disable the sending of TCP keepalive packets on the server side |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When there is a firewall or any session-aware component between a client and |
| a server, and when the protocol involves very long sessions with long idle |
| periods (e.g. remote desktops), there is a risk that one of the intermediate |
| components decides to expire a session which has remained idle for too long. |
| |
| Enabling socket-level TCP keep-alives makes the system regularly send packets |
| to the other end of the connection, leaving it active. The delay between |
| keep-alive probes is controlled by the system only and depends both on the |
| operating system and its tuning parameters. |
| |
| It is important to understand that keep-alive packets are neither emitted nor |
| received at the application level. It is only the network stack which sees |
| them. For this reason, even if one side of the proxy already uses keep-alives |
| to maintain its connection alive, those keep-alive packets will not be |
| forwarded to the other side of the proxy. |
| |
| Please note that this has nothing to do with HTTP keep-alive. |
| |
| Using option "srvtcpka" enables the emission of TCP keep-alive probes on the |
| server side of a connection, which should help when session expirations are |
| noticed between HAProxy and a server. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
| See also : "option clitcpka", "option tcpka" |
| |
| |
| option ssl-hello-chk |
| Use SSLv3 client hello health checks for server testing |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| When some SSL-based protocols are relayed in TCP mode through HAProxy, it is |
| possible to test that the server correctly talks SSL instead of just testing |
| that it accepts the TCP connection. When "option ssl-hello-chk" is set, pure |
| SSLv3 client hello messages are sent once the connection is established to |
| the server, and the response is analyzed to find an SSL server hello message. |
| The server is considered valid only when the response contains this server |
| hello message. |
| |
| All servers tested so far correctly reply to SSLv3 client hello messages, |
| and most servers tested do not even log the requests containing only hello |
| messages, which is appreciable. |
| |
| Note that this check works even when SSL support was not built into HAProxy |
| because it forges the SSL message. When SSL support is available, it is best |
| to use native SSL health checks instead of this one. |
| |
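| For illustration only, a sketch checking an SSL-based service relayed in TCP |
| mode (names, address and port are placeholders) : |
| |
| Example : |
| backend ldaps |
| mode tcp |
| option ssl-hello-chk |
| server ldap1 10.0.0.71:636 check |
| |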
| See also: "option httpchk", "check-ssl" |
| |
| |
| option tcp-check |
| Perform health checks using tcp-check send/expect sequences |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| This health check method is intended to be combined with "tcp-check" command |
| lists in order to support send/expect types of health check sequences. |
| |
| TCP checks currently support 4 modes of operation : |
| - no "tcp-check" directive : the health check only consists in a connection |
| attempt, which remains the default mode. |
| |
| - "tcp-check send" or "tcp-check send-binary" only is mentioned : this is |
| used to send a string along with a connection opening. With some |
| protocols, it helps sending a "QUIT" message for example that prevents |
| the server from logging a connection error for each health check. The |
| check result will still be based on the ability to open the connection |
| only. |
| |
| - "tcp-check expect" only is mentioned : this is used to test a banner. |
| The connection is opened and HAProxy waits for the server to present some |
| contents which must validate some rules. The check result will be based |
| on the matching between the contents and the rules. This is suited for |
| POP, IMAP, SMTP, FTP, SSH, TELNET. |
| |
| - both "tcp-check send" and "tcp-check expect" are mentioned : this is |
| used to test a hello-type protocol. HAProxy sends a message, the server |
| responds and its response is analyzed. The check result will be based on |
| the matching between the response contents and the rules. This is often |
| suited for protocols which require a binding or a request/response model. |
| LDAP, MySQL, Redis and SSL are examples of such protocols, though they |
| already all have their dedicated checks with a deeper understanding of |
| the respective protocols. |
| In this mode, many questions may be sent and many answers may be |
| analyzed. |
| |
| A fifth mode can be used to insert comments in different steps of the script. |
| |
| For each tcp-check rule you create, you can add a "comment" directive, |
| followed by a string. This string will be reported in the log and stderr in |
| debug mode. It is useful to make user-friendly error reporting. The |
| "comment" is of course optional. |
| |
| During the execution of a health check, a variable scope is made available to |
| store data samples, using the "tcp-check set-var" operation. Freeing those |
| variables is possible using "tcp-check unset-var". |
| |
| |
| Examples : |
| # perform a POP check (analyze only server's banner) |
| option tcp-check |
| tcp-check expect string +OK\ POP3\ ready comment POP\ protocol |
| |
| # perform an IMAP check (analyze only server's banner) |
| option tcp-check |
| tcp-check expect string *\ OK\ IMAP4\ ready comment IMAP\ protocol |
| |
| # look for the redis master server, after ensuring that it speaks the |
| # redis protocol well and then exits properly. |
| # (send a command then analyze the response 3 times) |
| option tcp-check |
| tcp-check comment PING\ phase |
| tcp-check send PING\r\n |
| tcp-check expect string +PONG |
| tcp-check comment role\ check |
| tcp-check send info\ replication\r\n |
| tcp-check expect string role:master |
| tcp-check comment QUIT\ phase |
| tcp-check send QUIT\r\n |
| tcp-check expect string +OK |
| |
| # forge an HTTP request, then analyze the response |
| # (send many headers before analyzing) |
| option tcp-check |
| tcp-check comment forge\ and\ send\ HTTP\ request |
| tcp-check send HEAD\ /\ HTTP/1.1\r\n |
| tcp-check send Host:\ www.mydomain.com\r\n |
| tcp-check send User-Agent:\ HAProxy\ tcpcheck\r\n |
| tcp-check send \r\n |
| tcp-check expect rstring HTTP/1\..\ (2..|3..) comment check\ HTTP\ response |
| |
| |
| See also : "tcp-check connect", "tcp-check expect" and "tcp-check send". |
| |
| |
| option tcp-smart-accept |
| no option tcp-smart-accept |
| Enable or disable the saving of one ACK packet during the accept sequence |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| When an HTTP connection request comes in, the system acknowledges it on |
| behalf of HAProxy, then the client immediately sends its request, and the |
| system acknowledges it too while it is notifying HAProxy about the new |
| connection. HAProxy then reads the request and responds. This means that we |
| have one TCP ACK sent by the system for nothing, because the request could |
| very well be acknowledged by HAProxy when it sends its response. |
| |
| For this reason, in HTTP mode, HAProxy automatically asks the system to avoid |
| sending this useless ACK on platforms which support it (currently at least |
| Linux). It must not cause any problem, because the system will send it anyway |
| after 40 ms if the response takes more time than expected to come. |
| |
| During complex network debugging sessions, it may be desirable to disable |
| this optimization because delayed ACKs can make troubleshooting more complex |
| when trying to identify where packets are delayed. It is then possible to |
| fall back to normal behavior by specifying "no option tcp-smart-accept". |
| |
| It is also possible to force it for non-HTTP proxies by simply specifying |
| "option tcp-smart-accept". For instance, it can make sense with some services |
| such as SMTP where the server speaks first. |
| |
| It is recommended to avoid forcing this option in a defaults section. In case |
| of doubt, consider setting it back to automatic values by prepending the |
| "default" keyword before it, or disabling it using the "no" keyword. |
| |
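| For illustration only (names and addresses are placeholders), forcing the |
| behavior on a non-HTTP listener could be sketched as : |
| |
| Example : |
| listen smtp |
| bind :25 |
| mode tcp |
| option tcp-smart-accept |
| server smtp1 10.0.0.81:25 check |
| |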
| See also : "option tcp-smart-connect" |
| |
| |
| option tcp-smart-connect |
| no option tcp-smart-connect |
| Enable or disable the saving of one ACK packet during the connect sequence |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| On certain systems (at least Linux), HAProxy can ask the kernel not to |
| immediately send an empty ACK upon a connection request, but to directly |
| send the buffered request instead. This saves one packet on the network and |
| thus boosts performance. It can also be useful for some servers, because they |
| immediately get the request along with the incoming connection. |
| |
| This feature is enabled when "option tcp-smart-connect" is set in a backend. |
| It is not enabled by default because it makes network troubleshooting more |
| complex. |
| |
| It only makes sense to enable it with protocols where the client speaks first |
| such as HTTP. In other situations, if there is no data to send in place of |
| the ACK, a normal ACK is sent. |
| |
| If this option has been enabled in a "defaults" section, it can be disabled |
| in a specific instance by prepending the "no" keyword before it. |
| |
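| For illustration only (name and address are placeholders), a sketch enabling |
| it on an HTTP backend : |
| |
| Example : |
| backend app |
| mode http |
| option tcp-smart-connect |
| server app1 10.0.0.91:80 check |
| |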
| See also : "option tcp-smart-accept" |
| |
| |
| option tcpka |
| Enable or disable the sending of TCP keepalive packets on both sides |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| When there is a firewall or any session-aware component between a client and |
| a server, and when the protocol involves very long sessions with long idle |
| periods (e.g. remote desktops), there is a risk that one of the intermediate |
| components decides to expire a session which has remained idle for too long. |
| |
| Enabling socket-level TCP keep-alives makes the system regularly send packets |
| to the other end of the connection, leaving it active. The delay between |
| keep-alive probes is controlled by the system only and depends both on the |
| operating system and its tuning parameters. |
| |
| It is important to understand that keep-alive packets are neither emitted nor |
| received at the application level. It is only the network stack which sees |
| them. For this reason, even if one side of the proxy already uses keep-alives |
| to maintain its connection alive, those keep-alive packets will not be |
| forwarded to the other side of the proxy. |
| |
| Please note that this has nothing to do with HTTP keep-alive. |
| |
| Using option "tcpka" enables the emission of TCP keep-alive probes on both |
| the client and server sides of a connection. Note that this is meaningful |
| only in "defaults" or "listen" sections. If this option is used in a |
| frontend, only the client side will get keep-alives, and if this option is |
| used in a backend, only the server side will get keep-alives. For this |
| reason, it is strongly recommended to explicitly use "option clitcpka" and |
| "option srvtcpka" when the configuration is split between frontends and |
| backends. |
| |
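| As an illustration of the recommendation above (names and addresses are |
| placeholders), a split configuration would rather enable each side |
| explicitly : |
| |
| Example : |
| frontend fe_rdp |
| bind :3389 |
| mode tcp |
| option clitcpka |
| default_backend be_rdp |
| |
| backend be_rdp |
| mode tcp |
| option srvtcpka |
| server ts1 10.0.0.21:3389 check |
| |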
| See also : "option clitcpka", "option srvtcpka" |
| |
| |
| option tcplog |
| Enable advanced logging of TCP connections with session state and timers |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : none |
| |
| By default, the log output format is very poor, as it only contains the |
| source and destination addresses, and the instance name. By specifying |
| "option tcplog", each log line turns into a much richer format including, but |
| not limited to, the connection timers, the session status, the connections |
| numbers, the frontend, backend and server name, and of course the source |
| address and ports. This option is useful for pure TCP proxies in order to |
| find which of the client or server disconnects or times out. For normal HTTP |
| proxies, it's better to use "option httplog" which is even more complete. |
| |
| "option tcplog" overrides any previous "log-format" directive. |
| |
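| For illustration only (addresses are placeholders), a pure TCP proxy logging |
| rich connection information could be sketched as : |
| |
| Example : |
| frontend ssh_gw |
| bind :2222 |
| mode tcp |
| log 127.0.0.1 local0 |
| option tcplog |
| |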
| See also : "option httplog", and section 8 about logging. |
| |
| |
| option transparent |
| no option transparent |
| Enable client-side transparent proxying |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| This option was introduced in order to provide layer 7 persistence to layer 3 |
| load balancers. The idea is to use the OS's ability to redirect an incoming |
| connection for a remote address to a local process (here HAProxy), and let |
| this process know what address was initially requested. When this option is |
| used, sessions without cookies will be forwarded to the original destination |
| IP address of the incoming request (which should match that of another |
| equipment), while requests with cookies will still be forwarded to the |
| appropriate server. |
| |
| Note that contrary to a common belief, this option does NOT make HAProxy |
| present the client's IP to the server when establishing the connection. |
| |
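| For illustration only, a sketch of such a setup follows; names and addresses |
| are placeholders, and the OS-level redirection of traffic to HAProxy (e.g. |
| with iptables) is assumed and not shown : |
| |
| Example : |
| listen l7persist |
| bind :80 |
| mode http |
| option transparent |
| cookie SRV insert indirect |
| server app1 10.0.0.31:80 cookie a check |
| |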
| See also: the "usesrc" argument of the "source" keyword, and the |
| "transparent" option of the "bind" keyword. |
| |
| |
| external-check command <command> |
| Executable to run when performing an external-check |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <command> is the external command to run |
| |
| The arguments passed to the command are: |
| |
| <proxy_address> <proxy_port> <server_address> <server_port> |
| |
| The <proxy_address> and <proxy_port> are derived from the first listener |
| that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket |
| listener, the <proxy_address> will be the path of the socket and the |
| <proxy_port> will be the string "NOT_USED". In a backend section, it's not |
| possible to determine a listener, and both <proxy_address> and <proxy_port> |
| will have the string value "NOT_USED". |
| |
| Some values are also provided through environment variables. |
| |
| Environment variables : |
| HAPROXY_PROXY_ADDR The first bind address if available (or empty if not |
| applicable, for example in a "backend" section). |
| |
| HAPROXY_PROXY_ID The backend id. |
| |
| HAPROXY_PROXY_NAME The backend name. |
| |
| HAPROXY_PROXY_PORT The first bind port if available (or empty if not |
| applicable, for example in a "backend" section or |
| for a UNIX socket). |
| |
| HAPROXY_SERVER_ADDR The server address. |
| |
| HAPROXY_SERVER_CURCONN The current number of connections on the server. |
| |
| HAPROXY_SERVER_ID The server id. |
| |
| HAPROXY_SERVER_MAXCONN The server max connections. |
| |
| HAPROXY_SERVER_NAME The server name. |
| |
| HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX |
| socket). |
| |
| HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used |
| |
| HAPROXY_SERVER_PROTO The protocol used by this server, which can be one |
| of "cli" (the haproxy CLI), "syslog" (syslog TCP |
| server), "peers" (peers TCP server), "h1" (HTTP/1.x |
| server), "h2" (HTTP/2 server), or "tcp" (any other |
| TCP server). |
| |
| PATH The PATH environment variable used when executing |
| the command may be set using "external-check path". |
| |
| See also "2.3. Environment variables" for other variables. |
| |
| If the executed command exits with a zero status, the check is considered to |
| have passed; otherwise, the check is considered to have failed. |
| |
| Example : |
| external-check command /bin/true |
| |
| See also : "external-check", "option external-check", "external-check path" |
| |
| |
| external-check path <path> |
| The value of the PATH environment variable used when running an external-check |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <path> is the value of PATH to use when executing the external command |
| |
| The default path is "". |
| |
| Example : |
| external-check path "/usr/bin:/bin" |
| |
| See also : "external-check", "option external-check", |
| "external-check command" |
| |
| |
| persist rdp-cookie |
| persist rdp-cookie(<name>) |
| Enable RDP cookie-based persistence |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <name> is the optional name of the RDP cookie to check. If omitted, the |
| default cookie name "msts" will be used. There currently is no |
| valid reason to change this name. |
| |
| This statement enables persistence based on an RDP cookie. The RDP cookie |
| contains all information required to find the server in the list of known |
| servers. So when this option is set in the backend, the request is analyzed |
| and if an RDP cookie is found, it is decoded. If it matches a known server |
| which is still UP (or if "option persist" is set), then the connection is |
| forwarded to this server. |
| |
| Note that this only makes sense in a TCP backend, but for this to work, the |
| frontend must have waited long enough to ensure that an RDP cookie is present |
| in the request buffer. This is the same requirement as with the "rdp-cookie" |
| load-balancing method. Thus it is highly recommended to put all statements in |
| a single "listen" section. |
| |
| Also, it is important to understand that the terminal server will emit this |
| RDP cookie only if it is configured for "token redirection mode", which means |
| that the "IP address redirection" option is disabled. |
| |
| Example : |
| listen tse-farm |
| bind :3389 |
| # wait up to 5s for an RDP cookie in the request |
| tcp-request inspect-delay 5s |
| tcp-request content accept if RDP_COOKIE |
| # apply RDP cookie persistence |
| persist rdp-cookie |
| # if server is unknown, let's balance on the same cookie. |
| # alternatively, "balance leastconn" may be useful too. |
| balance rdp-cookie |
| server srv1 1.1.1.1:3389 |
| server srv2 1.1.1.2:3389 |
| |
| See also : "balance rdp-cookie", "tcp-request" and the "req.rdp_cookie" ACL. |
| |
| |
| rate-limit sessions <rate> |
| Set a limit on the number of new sessions accepted per second on a frontend |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <rate> The <rate> parameter is an integer designating the maximum number |
| of new sessions per second to accept on the frontend. |
| |
| When the frontend reaches the specified number of new sessions per second, it |
| stops accepting new connections until the rate drops below the limit again. |
| During this time, the pending sessions will be kept in the socket's backlog |
| (in system buffers) and HAProxy will not even be aware that sessions are |
| pending. When applying a very low limit on a highly loaded service, it may make |
| sense to increase the socket's backlog using the "backlog" keyword. |
| |
| This feature is particularly efficient at blocking connection-based attacks |
| or service abuse on fragile servers. Since the session rate is measured every |
| millisecond, it is extremely accurate. Also, the limit applies immediately, |
| no delay is needed at all to detect the threshold. |
| |
| Example : limit the connection rate on SMTP to 10 per second max |
| listen smtp |
| mode tcp |
| bind :25 |
| rate-limit sessions 10 |
| server smtp1 127.0.0.1:1025 |
| |
| Note : when the maximum rate is reached, the frontend's status is not changed |
| but its sockets appear as "WAITING" in the statistics if the |
| "socket-stats" option is enabled. |
| |
| See also : the "backlog" keyword and the "fe_sess_rate" ACL criterion. |
| |
| |
| redirect location <loc> [code <code>] <option> [{if | unless} <condition>] |
| redirect prefix <pfx> [code <code>] <option> [{if | unless} <condition>] |
| redirect scheme <sch> [code <code>] <option> [{if | unless} <condition>] |
| Return an HTTP redirection if/unless a condition is matched |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| |
| If/unless the condition is matched, the HTTP request will lead to a redirect |
| response. If no condition is specified, the redirect applies unconditionally. |
| |
| Arguments : |
| <loc> With "redirect location", the exact value in <loc> is placed into |
| the HTTP "Location" header. When used in an "http-request" rule, |
| <loc> value follows the log-format rules and can include some |
| dynamic values (see Custom Log Format in section 8.2.6). |
| |
| <pfx> With "redirect prefix", the "Location" header is built from the |
| concatenation of <pfx> and the complete URI path, including the |
| query string, unless the "drop-query" option is specified (see |
| below). As a special case, if <pfx> equals exactly "/", then |
| nothing is inserted before the original URI. It allows one to |
| redirect to the same URL (for instance, to insert a cookie). When |
| used in an "http-request" rule, <pfx> value follows the log-format |
| rules and can include some dynamic values (see Custom Log Format |
| in section 8.2.6). |
| |
| <sch> With "redirect scheme", then the "Location" header is built by |
| concatenating <sch> with "://" then the first occurrence of the |
| "Host" header, and then the URI path, including the query string |
| unless the "drop-query" option is specified (see below). If no |
| path is found or if the path is "*", then "/" is used instead. If |
| no "Host" header is found, then an empty host component will be |
| returned, which most recent browsers interpret as redirecting to |
| the same host. This directive is mostly used to redirect HTTP to |
| HTTPS. When used in an "http-request" rule, <sch> value follows |
| the log-format rules and can include some dynamic values (see |
| Custom Log Format in section 8.2.6). |
| |
| <code> The code is optional. It indicates which type of HTTP redirection |
| is desired. Only codes 301, 302, 303, 307 and 308 are supported, |
| with 302 used by default if no code is specified. 301 means |
| "Moved permanently", and a browser may cache the Location. 302 |
| means "Moved temporarily" and means that the browser should not |
| cache the redirection. 303 is equivalent to 302 except that the |
| browser will fetch the location with a GET method. 307 is just |
| like 302 but makes it clear that the same method must be reused. |
| Likewise, 308 replaces 301 if the same method must be used. |
| |
| <option> There are several options which can be specified to adjust the |
| expected behavior of a redirection : |
| |
| - "drop-query" |
| When this keyword is used in a prefix-based redirection, then the |
| location will be set without any possible query-string, which is useful |
| for directing users to a non-secure page for instance. It has no effect |
| with a location-type redirect. |
| |
| - "append-slash" |
| This keyword may be used in conjunction with "drop-query" to redirect |
| users who use a URL not ending with a '/' to the same one with the '/'. |
| It can be useful to ensure that search engines will only see one URL. |
| For this, a return code 301 is preferred. |
| |
| - "ignore-empty" |
| This keyword only has effect when a location is produced using a log |
| format expression (i.e. when used in http-request or http-response). |
| It indicates that if the result of the expression is empty, the rule |
| should silently be skipped. The main use is to allow mass-redirects |
| of known paths using a simple map. |
| |
| - "set-cookie NAME[=value]" |
| A "Set-Cookie" header will be added with NAME (and optionally "=value") |
| to the response. This is sometimes used to indicate that a user has |
| been seen, for instance to protect against some types of DoS. No other |
| cookie option is added, so the cookie will be a session cookie. Note |
| that for a browser, a sole cookie name without an equal sign is |
| different from a cookie with an equal sign. |
| |
| - "clear-cookie NAME[=]" |
| A "Set-Cookie" header will be added with NAME (and optionally "="), but |
| with the "Max-Age" attribute set to zero. This will tell the browser to |
| delete this cookie. It is useful for instance on logout pages. It is |
| important to note that clearing the cookie "NAME" will not remove a |
| cookie set with "NAME=value". You have to clear the cookie "NAME=" for |
| that, because the browser makes the difference. |
| |
| Example: move the login URL only to HTTPS. |
| acl clear dst_port 80 |
| acl secure dst_port 8080 |
| acl login_page url_beg /login |
| acl logout url_beg /logout |
| acl uid_given url_reg /login?userid=[^&]+ |
| acl cookie_set hdr_sub(cookie) SEEN=1 |
| |
| redirect prefix https://mysite.com set-cookie SEEN=1 if !cookie_set |
| redirect prefix https://mysite.com if login_page !secure |
| redirect prefix http://mysite.com drop-query if login_page !uid_given |
| redirect location http://mysite.com/ if !login_page secure |
| redirect location / clear-cookie USERID= if logout |
| |
| Example: send redirects for request for articles without a '/'. |
| acl missing_slash path_reg ^/article/[^/]*$ |
| redirect code 301 prefix / drop-query append-slash if missing_slash |
| |
| Example: redirect all HTTP traffic to HTTPS when SSL is handled by HAProxy. |
| redirect scheme https if !{ ssl_fc } |
| |
| Example: append 'www.' prefix in front of all hosts not having it |
| http-request redirect code 301 location \ |
| http://www.%[hdr(host)]%[capture.req.uri] \ |
| unless { hdr_beg(host) -i www } |
| |
| Example: permanently redirect only old URLs to new ones |
| http-request redirect code 301 location \ |
| %[path,map_str(old-blog-articles.map)] ignore-empty |
| |
| See section 7 about ACL usage. |
| |
| |
| retries <value> |
| Set the number of retries to perform on a server after a failure |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <value> is the number of times a request or connection attempt should be |
| retried on a server after a failure. |
| |
| By default, retries apply only to new connection attempts. However, when |
| the "retry-on" directive is used, other conditions might trigger a retry |
| (e.g. empty response, undesired status code), each of them counting as one |
| attempt, and when the total number of attempts reaches the value set here, an |
| error will be returned. |
| |
| In order to avoid immediate reconnections to a server which is restarting, |
| a turn-around timer of min("timeout connect", one second) is applied before |
| a retry occurs on the same server. |
| |
| When "option redispatch" is set, some retries may be performed on another |
| server even if a cookie references a different server. By default this will |
| only be the last retry unless an argument is passed to "option redispatch". |
| |
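| For illustration only (values, names and addresses are placeholders), a |
| minimal sketch : |
| |
| Example : |
| backend app |
| mode http |
| timeout connect 4s |
| retries 3 |
| server app1 10.0.0.61:80 check |
| server app2 10.0.0.62:80 check |
| |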
| See also : "option redispatch" |
| |
| |
| retry-on [space-delimited list of keywords] |
| Specify when to attempt to automatically retry a failed request. |
| This setting is only valid when "mode" is set to http and is silently ignored |
| otherwise. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <keywords> is a space-delimited list of keywords or HTTP status codes, each |
| representing a type of failure event on which an attempt to |
| retry the request is desired. Please read the notes at the |
| bottom before changing this setting. The following keywords are |
| supported : |
| |
| none never retry |
| |
| conn-failure retry when the connection or the SSL handshake failed |
| and the request could not be sent. This is the default. |
| |
| empty-response retry when the server connection was closed after part |
| of the request was sent, and nothing was received from |
| the server. This type of failure may be caused by the |
| request timeout on the server side, poor network |
| condition, or a server crash or restart while |
| processing the request. |
| |
| junk-response retry when the server returned something not looking |
| like a complete HTTP response. This includes partial |
| response headers as well as non-HTTP contents. It |
| usually is a bad idea to retry on such events, which |
| may be caused by a configuration issue (wrong server |
| port) or by the request being harmful to the server |
| (buffer overflow attack for example). |
| |
| response-timeout the server timeout struck while waiting for the server |
| to respond to the request. This may be caused by poor |
| network condition, the reuse of an idle connection |
| which has expired on the path, or by the request being |
| extremely expensive to process. It generally is a bad |
| idea to retry on such events on servers dealing with |
| heavy database processing (full scans, etc) as it may |
| amplify denial of service attacks. |
| |
| 0rtt-rejected retry requests which were sent over early data and were |
| rejected by the server. These requests are generally |
| considered to be safe to retry. |
| |
| <status> any HTTP status code among "401" (Unauthorized), "403" |
| (Forbidden), "404" (Not Found), "408" (Request Timeout), |
| "425" (Too Early), "500" (Server Error), "501" (Not |
| Implemented), "502" (Bad Gateway), "503" (Service |
| Unavailable), "504" (Gateway Timeout). |
| |
| all-retryable-errors |
| retry the request for any error that is considered |
| retryable. This currently activates "conn-failure", |
| "empty-response", "junk-response", "response-timeout", |
| "0rtt-rejected", "500", "502", "503", and "504". |
| |
| Using this directive replaces any previous settings with the new ones; it is |
| not cumulative. |
| |
| Please note that using anything other than "none" and "conn-failure" requires |
| allocating a buffer and copying the whole request into it, so it has memory |
| and performance impacts. Requests not fitting in a single buffer will never be |
| retried (see the global tune.bufsize setting). |
| |
| You have to make sure the application has a replay protection mechanism |
| built in, such as unique transaction IDs passed in requests, or that |
| replaying the same request has no consequence; otherwise it is very dangerous |
| to use any retry-on value besides "conn-failure" and "none". Static file |
| servers and caches are generally considered safe against any type of retry. |
| Using a status code can |
| be useful to quickly leave a server showing an abnormal behavior (out of |
| memory, file system issues, etc), but in this case it may be a good idea to |
| immediately redispatch the connection to another server (please see "option |
| redispatch" for this). Last, it is important to understand that most causes |
| of failures are the requests themselves and that retrying a request causing a |
| server to misbehave will often make the situation even worse for this server, |
| or for the whole service in case of redispatch. |
| |
| Unless you know exactly how the application deals with replayed requests, you |
| should not use this directive. |
| |
| The default is "conn-failure". |
| |
| Example: |
| retry-on 503 504 |
| |
| See also: "retries", "option redispatch", "tune.bufsize" |
| |
| server <name> <address>[:[port]] [param*] |
| Declare a server in a backend |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| Arguments : |
| <name> is the internal name assigned to this server. This name will |
| appear in logs and alerts. If "http-send-name-header" is |
| set, it will be added to the request header sent to the server. |
| |
| <address> is the IPv4 or IPv6 address of the server. Alternatively, a |
| resolvable hostname is supported, but this name will be resolved |
| during start-up. Address "0.0.0.0" or "*" has a special meaning. |
| It indicates that the connection will be forwarded to the same IP |
| address as the one from the client connection. This is useful in |
| transparent proxy architectures where the client's connection is |
| intercepted and HAProxy must forward to the original destination |
| address. This is more or less what the "transparent" keyword does |
| except that with a server it's possible to limit concurrency and |
| to report statistics. Optionally, an address family prefix may be |
| used before the address to force the family regardless of the |
| address format, which can be useful to specify a path to a unix |
| socket with no slash ('/'). Currently supported prefixes are : |
| - 'ipv4@' -> address is always IPv4 |
| - 'ipv6@' -> address is always IPv6 |
| - 'unix@' -> address is a path to a local unix socket |
| - 'abns@' -> address is in abstract namespace (Linux only) |
| - 'sockpair@' -> address is the FD of a connected unix |
| socket or of a socketpair. During a connection, the |
| backend creates a pair of connected sockets, and passes |
| one of them over the FD. The bind part will use the |
| received socket as the client FD. Should be used |
| carefully. |
| You may want to reference some environment variables in the |
| address parameter, see section 2.3 about environment |
| variables. The "init-addr" setting can be used to modify the way |
| IP addresses should be resolved upon startup. |
| |
| <port> is an optional port specification. If set, all connections will |
| be sent to this port. If unset, the same port the client |
| connected to will be used. The port may also be prefixed by a "+" |
| or a "-". In this case, the server's port will be determined by |
| adding this value to the client's port. |
| |
| <param*> is a list of parameters for this server. The "server" keyword |
| accepts a large number of options and has a complete section |
| dedicated to it. Please refer to section 5 for more details. |
| |
| Examples : |
| server first 10.1.1.1:1080 cookie first check inter 1000 |
| server second 10.1.1.2:1080 cookie second check inter 1000 |
| server transp ipv4@ |
| server backup "${SRV_BACKUP}:1080" backup |
| server www1_dc1 "${LAN_DC1}.101:80" |
| server www1_dc2 "${LAN_DC2}.101:80" |
| |
| Note: regarding Linux's abstract namespace sockets, HAProxy uses the whole |
| sun_path length for the address length. Some other programs |
| such as socat use the string length only by default. Pass the option |
| ",unix-tightsocklen=0" to any abstract socket definition in socat to |
| make it compatible with HAProxy's. |
| |
| See also: "default-server", "http-send-name-header" and section 5 about |
| server options |
| |
| server-state-file-name [ { use-backend-name | <file> } ] |
| Set the server state file to read, load and apply to servers available in |
| this backend. |
| May be used in sections: defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| It only applies when the directive "load-server-state-from-file" is set to |
| "local". When <file> is not provided, when "use-backend-name" is used, or |
| when this directive is not set, the backend name is used. If <file> starts |
| with a slash '/', then it is considered as an absolute path. Otherwise, |
| <file> is concatenated to the global directive "server-state-base". |
| |
| Example: the minimal configuration below would make HAProxy look for the |
| server state file '/etc/haproxy/states/bk': |
| |
| global |
| server-state-base /etc/haproxy/states |
| |
| backend bk |
| load-server-state-from-file |
| |
| See also: "server-state-base", "load-server-state-from-file", and |
| "show servers state" |
| |
| server-template <prefix> <num | range> <fqdn>[:<port>] [params*] |
| Set a template to initialize servers with shared parameters. |
| The names of these servers are built from <prefix> and <num | range> parameters. |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| Arguments: |
| <prefix> A prefix for the server names to be built. |
| |
| <num | range> |
| If <num> is provided, this template initializes <num> servers |
| with 1 up to <num> as server name suffixes. A range of numbers |
| <num_low>-<num_high> may also be used to use <num_low> up to |
| <num_high> as server name suffixes. |
| |
| <fqdn> A FQDN for all the servers this template initializes. |
| |
| <port> Same meaning as "server" <port> argument (see "server" keyword). |
| |
| <params*> |
| Remaining server parameters among all those supported by "server" |
| keyword. |
| |
| Examples: |
| # Initializes 3 servers with srv1, srv2 and srv3 as names, |
| # google.com as FQDN, and health-check enabled. |
| server-template srv 1-3 google.com:80 check |
| |
| # or |
| server-template srv 3 google.com:80 check |
| |
| # would be equivalent to: |
| server srv1 google.com:80 check |
| server srv2 google.com:80 check |
| server srv3 google.com:80 check |
| |
| |
| source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | client | clientip } ] |
| source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr_ip(<hdr>[,<occ>]) } ] |
| source <addr>[:<port>] [interface <name>] |
| Set the source address for outgoing connections |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <addr> is the IPv4 address HAProxy will bind to before connecting to a |
| server. This address is also used as a source for health checks. |
| |
| The default value of 0.0.0.0 means that the system will select |
| the most appropriate address to reach its destination. Optionally |
| an address family prefix may be used before the address to force |
| the family regardless of the address format, which can be useful |
| to specify a path to a unix socket with no slash ('/'). Currently |
| supported prefixes are : |
| - 'ipv4@' -> address is always IPv4 |
| - 'ipv6@' -> address is always IPv6 |
| - 'unix@' -> address is a path to a local unix socket |
| - 'abns@' -> address is in abstract namespace (Linux only) |
| You may want to reference some environment variables in the |
| address parameter, see section 2.3 about environment variables. |
| |
| <port> is an optional port. It is normally not needed but may be useful |
| in some very specific contexts. The default value of zero means |
| the system will select a free port. Note that port ranges are not |
| supported in the backend. If you want to force port ranges, you |
| have to specify them on each "server" line. |
| |
| <addr2> is the IP address to present to the server when connections are |
| forwarded in full transparent proxy mode. This is currently only |
| supported on some patched Linux kernels. When this address is |
| specified, clients connecting to the server will be presented |
| with this address, while health checks will still use the address |
| <addr>. |
| |
| <port2> is the optional port to present to the server when connections |
| are forwarded in full transparent proxy mode (see <addr2> above). |
| The default value of zero means the system will select a free |
| port. |
| |
| <hdr> is the name of an HTTP header in which to fetch the IP to bind to. |
| This is the name of a comma-separated header list which can |
| contain multiple IP addresses. By default, the last occurrence is |
| used. This is designed to work with the X-Forwarded-For header |
| and to automatically bind to the client's IP address as seen |
| by the previous proxy, typically Stunnel. In order to use another |
| occurrence from the last one, please see the <occ> parameter |
| below. When the header (or occurrence) is not found, no binding |
| is performed so that the proxy's default IP address is used. Also |
| keep in mind that the header name is case insensitive, as for any |
| HTTP header. |
| |
| <occ> is the occurrence number of a value to be used in a multi-value |
| header. This is to be used in conjunction with "hdr_ip(<hdr>)", |
| in order to specify which occurrence to use for the source IP |
| address. Positive values indicate a position from the first |
| occurrence, 1 being the first one. Negative values indicate |
| positions relative to the last one, -1 being the last one. This |
| is helpful for situations where an X-Forwarded-For header is set |
| at the entry point of an infrastructure and must be used several |
| proxy layers away. When this value is not specified, -1 is |
| assumed. Passing a zero here disables the feature. |
| |
| <name> is an optional interface name to bind to for outgoing |
| traffic. On systems supporting this feature (currently, only |
| Linux), this allows one to bind all traffic to the server to |
| this interface even if it is not the one the system would select |
| based on routing tables. This should be used with extreme care. |
| Note that using this option requires root privileges. |
| |
| The "source" keyword is useful in complex environments where a specific |
| address only is allowed to connect to the servers. It may be needed when a |
| private address must be used through a public gateway for instance, and it is |
| known that the system cannot determine the adequate source address by itself. |
| |
| An extension which is available on certain patched Linux kernels may be used |
| through the "usesrc" optional keyword. It makes it possible to connect to the |
| servers with an IP address which does not belong to the system itself. This |
| is called "full transparent proxy mode". For this to work, the destination |
| servers have to route their traffic back to this address through the machine |
| running HAProxy, and IP forwarding must generally be enabled on this machine. |
| |
| In this "full transparent proxy" mode, it is possible to force a specific IP |
| address to be presented to the servers. This is rarely used in practice. A more |
| common use is to tell HAProxy to present the client's IP address. For this, |
| there are two methods : |
| |
| - present the client's IP and port addresses. This is the most transparent |
| mode, but it can cause problems when IP connection tracking is enabled on |
| the machine, because the same connection may be seen twice with different |
| states. However, this solution presents the huge advantage of not |
| limiting the system to the 64k outgoing address+port couples, because all |
| of the client ranges may be used. |
| |
| - present only the client's IP address and select a spare port. This |
| solution is still quite elegant but slightly less transparent (downstream |
| firewall logs will not match upstream's). It also presents the downside |
| of limiting the number of concurrent connections to the usual 64k ports. |
| However, since the upstream and downstream ports are different, local IP |
| connection tracking on the machine will not be upset by the reuse of the |
| same session. |
| |
| This option sets the default source for all servers in the backend. It may |
| also be specified in a "defaults" section. Finer source address specification |
| is possible at the server level using the "source" server option. Refer to |
| section 5 for more information. |
| |
| In order to work, "usesrc" requires root privileges, or on supported systems, |
| the "cap_net_raw" capability. See also the "setcap" global directive. |
| |
| Examples : |
| backend private |
| # Connect to the servers using our 192.168.1.200 source address |
| source 192.168.1.200 |
| |
| backend transparent_ssl1 |
| # Connect to the SSL farm from the client's source address |
| source 192.168.1.200 usesrc clientip |
| |
| backend transparent_ssl2 |
| # Connect to the SSL farm from the client's source address and port |
| # not recommended if IP conntrack is present on the local machine. |
| source 192.168.1.200 usesrc client |
| |
| backend transparent_ssl3 |
| # Connect to the SSL farm from the client's source address. It |
| # is more conntrack-friendly. |
| source 192.168.1.200 usesrc clientip |
| |
| backend transparent_smtp |
| # Connect to the SMTP farm from the client's source address/port |
| # with Tproxy version 4. |
| source 0.0.0.0 usesrc clientip |
| |
| backend transparent_http |
| # Connect to the servers using the client's IP as seen by previous |
| # proxy. |
| source 0.0.0.0 usesrc hdr_ip(x-forwarded-for,-1) |
| |
| See also : the "source" server option in section 5, the Tproxy patches for |
| the Linux kernel on www.balabit.com, the "bind" keyword. |
| |
| |
| srvtcpka-cnt <count> |
| Sets the maximum number of keepalive probes TCP should send before dropping |
| the connection on the server side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <count> is the maximum number of keepalive probes. |
| |
| This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword |
| is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used. |
| The availability of this setting depends on the operating system. It is |
| known to work on Linux. |
| |
| See also : "option srvtcpka", "srvtcpka-idle", "srvtcpka-intvl". |
| |
| |
| srvtcpka-idle <timeout> |
| Sets the time the connection needs to remain idle before TCP starts sending |
| keepalive probes, when sending of TCP keepalive packets is enabled on the |
| server side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the time the connection needs to remain idle before TCP starts |
| sending keepalive probes. It is specified in seconds by default, |
| but can be in any other unit if the number is suffixed by the |
| unit, as explained at the top of this document. |
| |
| This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword |
| is not specified, system-wide TCP parameter (tcp_keepalive_time) is used. |
| The availability of this setting depends on the operating system. It is |
| known to work on Linux. |
| |
| See also : "option srvtcpka", "srvtcpka-cnt", "srvtcpka-intvl". |
| |
| |
| srvtcpka-intvl <timeout> |
| Sets the time between individual keepalive probes on the server side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the time between individual keepalive probes. It is specified |
| in seconds by default, but can be in any other unit if the number |
| is suffixed by the unit, as explained at the top of this |
| document. |
| |
| This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword |
| is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used. |
| The availability of this setting depends on the operating system. It is |
| known to work on Linux. |
| |
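| Example : a hedged sketch (backend and server names are assumptions) |
| enabling server-side TCP keepalive and detecting a dead connection |
| after roughly 70 seconds (30s idle + 4 probes x 10s) : |
| |
| backend app |
| option srvtcpka |
| srvtcpka-idle 30s |
| srvtcpka-intvl 10s |
| srvtcpka-cnt 4 |
| server s1 192.168.1.1:80 |
| |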
| See also : "option srvtcpka", "srvtcpka-cnt", "srvtcpka-idle". |
| |
| |
| stats admin { if | unless } <cond> |
| Enable statistics admin level if/unless a condition is matched |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| |
| This statement enables the statistics admin level if/unless a condition is |
| matched. |
| |
| The admin level allows enabling and disabling servers from the web |
| interface. By default, the statistics page is read-only for security |
| reasons. |
| |
| Currently, the POST request is limited to the buffer size minus the reserved |
| buffer space, which means that if the list of servers is too long, the |
| request won't be processed. It is recommended to alter only a few servers |
| at a time. |
| |
| Example : |
| # statistics admin level only for localhost |
| backend stats_localhost |
| stats enable |
| stats admin if LOCALHOST |
| |
| Example : |
| # statistics admin level always enabled because of the authentication |
| backend stats_auth |
| stats enable |
| stats auth admin:AdMiN123 |
| stats admin if TRUE |
| |
| Example : |
| # statistics admin level depends on the authenticated user |
| userlist stats-auth |
| group admin users admin |
| user admin insecure-password AdMiN123 |
| group readonly users haproxy |
| user haproxy insecure-password haproxy |
| |
| backend stats_auth |
| stats enable |
| acl AUTH http_auth(stats-auth) |
| acl AUTH_ADMIN http_auth_group(stats-auth) admin |
| stats http-request auth unless AUTH |
| stats admin if AUTH_ADMIN |
| |
| See also : "stats enable", "stats auth", "stats http-request", section 3.4 |
| about userlists and section 7 about ACL usage. |
| |
| |
| stats auth <user>:<passwd> |
| Enable statistics with authentication and grant access to an account |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <user> is a user name to grant access to |
| |
| <passwd> is the cleartext password associated to this user |
| |
| This statement enables statistics with default settings, and restricts access |
| to declared users only. It may be repeated as many times as necessary to |
| allow as many users as desired. When a user tries to access the statistics |
| without a valid account, a "401 Unauthorized" response will be returned so |
| that the browser asks the user to provide a valid user and password. The |
| realm which will be returned to the browser is configurable using |
| "stats realm". |
| |
| Since the authentication method is HTTP Basic Authentication, the passwords |
| circulate in cleartext on the network. Thus, it was decided that the |
| configuration file would also use cleartext passwords to remind the users |
| that these passwords should not be sensitive nor shared with any other |
| account. |
| |
| It is also possible to reduce the scope of the proxies which appear in the |
| report using "stats scope". |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also : "stats enable", "stats realm", "stats scope", "stats uri" |
| |
| |
| stats enable |
| Enable statistics reporting with default settings |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| This statement enables statistics reporting with default settings defined |
| at build time. Unless stated otherwise, these settings are used : |
| - stats uri : /haproxy?stats |
| - stats realm : "HAProxy Statistics" |
| - stats auth : no authentication |
| - stats scope : no restriction |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also : "stats auth", "stats realm", "stats uri" |
| |
| |
| stats hide-version |
| Enable statistics and hide HAProxy version reporting |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| By default, the stats page reports some useful status information along with |
| the statistics. Among them is HAProxy's version. However, it is generally |
| considered dangerous to report the precise version to anyone, as it can help |
| them target known weaknesses with specific attacks. The "stats hide-version" |
| statement removes the version from the statistics report. This is recommended |
| for public sites or any site with a weak login/password. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also : "stats auth", "stats enable", "stats realm", "stats uri" |
| |
| |
| stats http-request { allow | deny | auth [realm <realm>] } |
| [ { if | unless } <condition> ] |
| Access control for statistics |
| |
| May be used in sections: defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| As "http-request", these set of options allow to fine control access to |
| statistics. Each option may be followed by if/unless and acl. |
| First option with matched condition (or option without condition) is final. |
| For "deny" a 403 error will be returned, for "allow" normal processing is |
| performed, for "auth" a 401/407 error code is returned so the client |
| should be asked to enter a username and password. |
| |
| There is no fixed limit to the number of http-request statements per |
| instance. |
| |
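| Example : a hedged sketch (names, addresses and credentials are assumptions) |
| allowing anonymous access from an internal network and requiring a |
| valid account from anywhere else : |
| |
| userlist stats-users |
| user monitor insecure-password ChangeMe |
| |
| listen stats |
| mode http |
| bind :8404 |
| stats enable |
| stats uri /stats |
| acl internal src 10.0.0.0/8 |
| acl auth_ok http_auth(stats-users) |
| stats http-request allow if internal |
| stats http-request allow if auth_ok |
| stats http-request auth realm Stats |
| |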
| See also : "http-request", section 3.4 about userlists and section 7 |
| about ACL usage. |
| |
| |
| stats realm <realm> |
| Enable statistics and set authentication realm |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <realm> is the name of the HTTP Basic Authentication realm reported to |
| the browser. The browser displays it in the pop-up window inviting |
| the user to enter a valid username and password. |
| |
| The realm is read as a single word, so any spaces in it should be escaped |
| using a backslash ('\'). |
| |
| This statement is useful only in conjunction with "stats auth" since it is |
| only related to authentication. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also : "stats auth", "stats enable", "stats uri" |
| |
| |
| stats refresh <delay> |
| Enable statistics with automatic refresh |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <delay> is the suggested refresh delay, specified in seconds, which will |
| be returned to the browser consulting the report page. While the |
| browser is free to apply any delay, it will generally respect it |
| and refresh the page at this interval. The refresh interval may |
| be specified in any other non-default time unit, by suffixing the |
| unit after the value, as explained at the top of this document. |
| |
| This statement is useful on monitoring displays with a permanent page |
| reporting the load balancer's activity. When set, the HTML report page will |
| include a link "refresh"/"stop refresh" so that the user can select whether |
| they want automatic refresh of the page or not. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also : "stats auth", "stats enable", "stats realm", "stats uri" |
| |
| |
| stats scope { <name> | "." } |
| Enable statistics and limit access scope |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <name> is the name of a listen, frontend or backend section to be |
| reported. The special name "." (a single dot) designates the |
| section in which the statement appears. |
| |
| When this statement is specified, only the sections enumerated with this |
| statement will appear in the report. All other ones will be hidden. This |
| statement may appear as many times as needed if multiple sections need to be |
| reported. Please note that the name checking is performed as simple string |
| comparisons, and that it is never checked that a given section name really |
| exists. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
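| As a hedged additional sketch (the section names are assumptions taken from |
| the example above), several sections may also be enumerated by name from a |
| dedicated stats instance : |
| |
| listen stats |
| bind :8404 |
| stats enable |
| stats uri /stats |
| stats scope public_www |
| stats scope private_monitoring |
| |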
| See also : "stats auth", "stats enable", "stats realm", "stats uri" |
| |
| |
| stats show-desc [ <desc> ] |
| Enable reporting of a description on the statistics page. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| |
| <desc> is an optional description to be reported. If unspecified, the |
| description from global section is automatically used instead. |
| |
| This statement is useful for users that offer shared services to their |
| customers, where node or description should be different for each customer. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. By default description is not shown. |
| |
| Example : |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats show-desc Master node for Europe, Asia, Africa |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also: "show-node", "stats enable", "stats uri" and "description" in |
| global section. |
| |
| |
| stats show-legends |
| Enable reporting additional information on the statistics page |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| Enable reporting additional information on the statistics page : |
| - cap: capabilities (proxy) |
| - mode: one of tcp, http or health (proxy) |
| - id: SNMP ID (proxy, socket, server) |
| - IP (socket, server) |
| - cookie (backend, server) |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. Default behavior is not to show this information. |
| |
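| Example : a minimal sketch (the backend name is an assumption) : |
| |
| backend private_monitoring |
| stats enable |
| stats show-legends |
| stats uri /admin?stats |
| |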
| See also: "stats enable", "stats uri". |
| |
| |
| stats show-modules |
| Enable display of extra statistics modules on the statistics page |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : none |
| |
| New columns are added at the end of the line containing the extra statistics |
| values as a tooltip. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. Default behavior is not to show this information. |
| |
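| Example : a minimal sketch (the backend name is an assumption) : |
| |
| backend private_monitoring |
| stats enable |
| stats show-modules |
| stats uri /admin?stats |
| |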
| See also: "stats enable", "stats uri". |
| |
| |
| stats show-node [ <name> ] |
| Enable reporting of a host name on the statistics page. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments: |
| <name> is an optional name to be reported. If unspecified, the |
| node name from global section is automatically used instead. |
| |
| This statement is useful for users that offer shared services to their |
| customers, where node or description might be different on a stats page |
| provided for each customer. Default behavior is not to show host name. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example: |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats show-node Europe-1 |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also: "show-desc", "stats enable", "stats uri", and "node" in global |
| section. |
| |
| |
| stats uri <prefix> |
| Enable statistics and define the URI prefix to access them |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <prefix> is the prefix of any URI which will be redirected to stats. This |
| prefix may contain a question mark ('?') to indicate part of a |
| query string. |
| |
| The statistics URI is intercepted on the relayed traffic, so it appears as a |
| page within the normal application. It is strongly advised to ensure that the |
| selected URI will never appear in the application, otherwise it will never be |
| possible to reach it in the application. |
| |
| The default URI compiled in HAProxy is "/haproxy?stats", but this may be |
| changed at build time, so it's better to always explicitly specify it here. |
| It is generally a good idea to include a question mark in the URI so that |
| intermediate proxies refrain from caching the results. Also, since any string |
| beginning with the prefix will be accepted as a stats request, the question |
| mark helps ensure that no valid URI will begin with the same words. |
| |
| It is sometimes very convenient to use "/" as the URI prefix, and put that |
| statement in a "listen" instance of its own. That makes it easy to dedicate |
| an address or a port to statistics only. |
| |
| Though this statement alone is enough to enable statistics reporting, it is |
| recommended to set all other settings in order to avoid relying on default |
| unobvious parameters. |
| |
| Example : |
| # public access (limited to this backend only) |
| backend public_www |
| server srv1 192.168.0.1:80 |
| stats enable |
| stats hide-version |
| stats scope . |
| stats uri /admin?stats |
| stats realm HAProxy\ Statistics |
| stats auth admin1:AdMiN123 |
| stats auth admin2:AdMiN321 |
| |
| # internal monitoring access (unlimited) |
| backend private_monitoring |
| stats enable |
| stats uri /admin?stats |
| stats refresh 5s |
| |
| See also : "stats auth", "stats enable", "stats realm" |
| |
| |
| stick match <pattern> [table <table>] [{if | unless} <cond>] |
| Define a request pattern matching condition to stick a user to a server |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| Arguments : |
| <pattern> is a sample expression rule as described in section 7.3. It |
| describes what elements of the incoming request or connection |
| will be analyzed in the hope to find a matching entry in a |
| stickiness table. This rule is mandatory. |
| |
| <table> is an optional stickiness table name. If unspecified, the same |
| backend's table is used. A stickiness table is declared using |
| the "stick-table" statement. |
| |
| <cond> is an optional matching condition. It makes it possible to match |
| on a certain criterion only when other conditions are met (or |
| not met). For instance, it could be used to match on a source IP |
| address except when a request passes through a known proxy, in |
| which case we'd match on a header containing that IP address. |
| |
| Some protocols or applications require complex stickiness rules and cannot |
| always simply rely on cookies nor hashing. The "stick match" statement |
| describes a rule to extract the stickiness criterion from an incoming request |
| or connection. See section 7 for a complete list of possible patterns and |
| transformation rules. |
| |
| The table has to be declared using the "stick-table" statement. It must be of |
| a type compatible with the pattern. By default it is the one which is present |
| in the same backend. It is possible to share a table with other backends by |
| referencing it using the "table" keyword. If another table is referenced, |
| the servers' IDs inside the backends are used. By default, all server IDs |
| start at 1 in each backend, so the server ordering is enough. But in case of |
| doubt, it is highly recommended to force server IDs using their "id" setting. |
| |
| It is possible to restrict the conditions where a "stick match" statement |
| will apply, using "if" or "unless" followed by a condition. See section 7 for |
| ACL based conditions. |
| |
| There is no limit on the number of "stick match" statements. The first that |
| applies and matches will cause the request to be directed to the same server |
| as was used for the request which created the entry. That way, multiple |
| matches can be used as fallbacks. |
| |
| The stick rules are checked after the persistence cookies, so they will not |
| affect stickiness if a cookie has already been used to select a server. That |
| way, it becomes very easy to insert cookies and match on IP addresses in |
| order to maintain stickiness between HTTP and HTTPS. |
| |
| Example : |
| # forward SMTP users to the same server they just used for POP in the |
| # last 30 minutes |
| backend pop |
| mode tcp |
| balance roundrobin |
| stick store-request src |
| stick-table type ip size 200k expire 30m |
| server s1 192.168.1.1:110 |
| server s2 192.168.1.1:110 |
| |
| backend smtp |
| mode tcp |
| balance roundrobin |
| stick match src table pop |
| server s1 192.168.1.1:25 |
| server s2 192.168.1.1:25 |
| |
| See also : "stick-table", "stick on", and section 7 about ACLs and samples |
| fetching. |
| |
| |
| stick on <pattern> [table <table>] [{if | unless} <condition>] |
| Define a request pattern to associate a user to a server |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| Note : This form is exactly equivalent to "stick match" followed by |
| "stick store-request", all with the same arguments. Please refer |
| to both keywords for details. It is only provided as a convenience |
| for writing more maintainable configurations. |
| |
| Examples : |
| # The following form ... |
| stick on src table pop if !localhost |
| |
| # ...is strictly equivalent to this one : |
| stick match src table pop if !localhost |
| stick store-request src table pop if !localhost |
| |
| |
| # Use cookie persistence for HTTP, and stick on source address for HTTPS as |
| # well as HTTP without cookie. Share the same table between both accesses. |
| backend http |
| mode http |
| balance roundrobin |
| stick on src table https |
| cookie SRV insert indirect nocache |
| server s1 192.168.1.1:80 cookie s1 |
| server s2 192.168.1.1:80 cookie s2 |
| |
| backend https |
| mode tcp |
| balance roundrobin |
| stick-table type ip size 200k expire 30m |
| stick on src |
| server s1 192.168.1.1:443 |
| server s2 192.168.1.1:443 |
| |
| See also : "stick match", "stick store-request". |
| |
| |
| stick store-request <pattern> [table <table>] [{if | unless} <condition>] |
| Define a request pattern used to create an entry in a stickiness table |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| Arguments : |
| <pattern> is a sample expression rule as described in section 7.3. It |
| describes what elements of the incoming request or connection |
| will be analyzed, extracted and stored in the table once a |
| server is selected. |
| |
| <table> is an optional stickiness table name. If unspecified, the same |
| backend's table is used. A stickiness table is declared using |
| the "stick-table" statement. |
| |
| <cond> is an optional storage condition. It makes it possible to store |
| certain criteria only when some conditions are met (or not met). |
| For instance, it could be used to store the source IP address |
| except when the request passes through a known proxy, in which |
| case we'd store a converted form of a header containing that IP |
| address. |
| |
| Some protocols or applications require complex stickiness rules and cannot |
| always simply rely on cookies nor hashing. The "stick store-request" statement |
| describes a rule to decide what to extract from the request and when to do |
| it, in order to store it into a stickiness table for further requests to |
| match it using the "stick match" statement. Obviously the extracted part must |
| make sense and have a chance to be matched in a further request. Storing a |
| client's IP address for instance often makes sense. Storing an ID found in a |
| URL parameter also makes sense. Storing a source port will almost never make |
| any sense because it will be randomly matched. See section 7 for a complete |
| list of possible patterns and transformation rules. |
| |
| The table has to be declared using the "stick-table" statement. It must be of |
| a type compatible with the pattern. By default it is the one which is present |
| in the same backend. It is possible to share a table with other backends by |
| referencing it using the "table" keyword. If another table is referenced, |
| the servers' IDs inside the backends are used. By default, all server IDs |
| start at 1 in each backend, so the server ordering is enough. But in case of |
| doubt, it is highly recommended to force server IDs using their "id" setting. |
| |
| It is possible to restrict the conditions where a "stick store-request" |
| statement will apply, using "if" or "unless" followed by a condition. This |
| condition will be evaluated while parsing the request, so any criteria can be |
| used. See section 7 for ACL based conditions. |
| |
| There is no limit on the number of "stick store-request" statements, but |
| there is a limit of 8 simultaneous stores per request or response. This |
| makes it possible to store up to 8 criteria, all extracted from either the |
| request or the response, regardless of the number of rules. Only the 8 first |
| ones which match will be kept. Using this, it is possible to feed multiple |
| tables at once in the hope to increase the chance to recognize a user on |
| another protocol or access method. Using multiple store-request rules with |
| the same table is possible and may be used to find the best criterion to rely |
| on, by arranging the rules by decreasing preference order. Only the first |
| extracted criterion for a given table will be stored. All subsequent store- |
| request rules referencing the same table will be skipped and their ACLs will |
| not be evaluated. |
| |
| The "store-request" rules are evaluated once the server connection has been |
| established, so that the table will contain the real server that processed |
| the request. |
| |
| Example : |
| # forward SMTP users to the same server they just used for POP in the |
| # last 30 minutes |
| backend pop |
| mode tcp |
| balance roundrobin |
| stick store-request src |
| stick-table type ip size 200k expire 30m |
| server s1 192.168.1.1:110 |
| server s2 192.168.1.1:110 |
| |
| backend smtp |
| mode tcp |
| balance roundrobin |
| stick match src table pop |
| server s1 192.168.1.1:25 |
| server s2 192.168.1.1:25 |
| |
| See also : "stick-table", "stick on", about ACLs and sample fetching. |
| |
| |
| stick-table type {ip | ipv6 | integer | string [len <length>] | binary [len <length>]} |
| size <size> [expire <expire>] [nopurge] [peers <peersect>] [srvkey <srvkey>] |
| [store <data_type>]* |
| Configure the stickiness table for the current section |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | yes |
| |
| Arguments : |
| ip a table declared with "type ip" will only store IPv4 addresses. |
| This form is very compact (about 50 bytes per entry) and allows |
| very fast entry lookup and stores with almost no overhead. This |
| is mainly used to store client source IP addresses. |
| |
| ipv6 a table declared with "type ipv6" will only store IPv6 addresses. |
| This form is very compact (about 60 bytes per entry) and allows |
| very fast entry lookup and stores with almost no overhead. This |
| is mainly used to store client source IP addresses. |
| |
| integer a table declared with "type integer" will store 32-bit integers |
| which can represent a client identifier found in a request for |
| instance. |
| |
| string a table declared with "type string" will store substrings of up |
| to <len> characters. If the string provided by the pattern |
| extractor is larger than <len>, it will be truncated before |
| being stored. During matching, at most <len> characters will be |
| compared between the string in the table and the extracted |
| pattern. When not specified, the string is automatically limited |
| to 32 characters. |
| |
| binary a table declared with "type binary" will store binary blocks |
| of <len> bytes. If the block provided by the pattern |
| extractor is larger than <len>, it will be truncated before |
| being stored. If the block provided by the sample expression |
| is shorter than <len>, it will be padded by 0. When not |
| specified, the block is automatically limited to 32 bytes. |
| |
| <length> is the maximum number of characters that will be stored in a |
| "string" type table (See type "string" above). Or the number |
| of bytes of the block in "binary" type table. Be careful when |
| changing this parameter as memory usage will proportionally |
| increase. |
| |
| <size> is the maximum number of entries that can fit in the table. This |
| value directly impacts memory usage. Count approximately |
| 50 bytes per entry, plus the size of a string if any. The size |
| supports suffixes "k", "m", "g" for 2^10, 2^20 and 2^30 factors. |
| |
| [nopurge] indicates that we refuse to purge older entries when the table |
| is full. When not specified and the table is full when HAProxy |
| wants to store an entry in it, it will flush a few of the oldest |
| entries in order to release some space for the new ones. This is |
| most often the desired behavior. In some specific cases, it may |
| be desirable to refuse new entries instead of purging the older |
| ones. That may be the case when the amount of data to store is |
| far above the hardware limits and we would rather refuse access |
| to new clients than reject the ones already connected. When |
| using this parameter, be sure to properly set the "expire" |
| parameter (see below). |
| |
| <peersect> is the name of the peers section to use for replication. Entries |
| which associate keys to server IDs are kept synchronized with |
| the remote peers declared in this section. All entries are also |
| automatically learned from the local peer (old process) during a |
| soft restart. |
| |
| <expire> defines the maximum duration of an entry in the table since it |
| was last created, refreshed using 'track-sc' or matched using |
| 'stick match' or 'stick on' rule. The expiration delay is |
| defined using the standard time format, similarly as the various |
| timeouts. The maximum duration is slightly above 24 days. See |
| section 2.5 for more information. If this delay is not specified, |
| the session won't automatically expire, but older entries will |
| be removed once full. Be sure not to use the "nopurge" parameter |
| if no expiration delay is specified. |
| Note: 'table_*' converters perform lookups but won't refresh the |
| expiration timer since they don't require 'track-sc'. |
| |
| <srvkey> specifies how each server is identified for the purposes of the |
| stick table. The valid values are "name" and "addr". If "name" is |
| given, then the server is identified by its <name> argument, which |
| may be generated by a template. If "addr" is given, then the server |
| is identified by its current network address, including the port. |
| "addr" is |
| especially useful if you are using service discovery to generate |
| the addresses for servers with peered stick-tables and want |
| to consistently use the same host across peers for a stickiness |
| token. |
| |
| <data_type> is used to store additional information in the stick-table. This |
| may be used by ACLs in order to control various criteria related |
| to the activity of the client matching the stick-table. For each |
| item specified here, the size of each entry will be inflated so |
| that the additional data can fit. Several data types may be |
| stored with an entry. Multiple data types may be specified after |
| the "store" keyword, as a comma-separated list. Alternatively, |
| it is possible to repeat the "store" keyword followed by one or |
| several data types. Except for the "server_id" type which is |
| automatically detected and enabled, all data types must be |
| explicitly declared to be stored. If an ACL references a data |
| type which is not stored, the ACL will simply not match. Some |
| data types require an argument which must be passed just after |
| the type between parentheses. See below for the supported data |
| types and their arguments. |
| |
| The data types that can be stored with an entry are the following : |
| - server_id : this is an integer which holds the numeric ID of the server a |
| request was assigned to. It is used by the "stick match", "stick store", |
| and "stick on" rules. It is automatically enabled when referenced. |
| |
| - gpc(<nb>) : General Purpose Counters Array of <nb> elements. This is an |
| array of positive 32-bit integers which may be used to count anything. |
| Most of the time they will be used as incremental counters on some |
| entries, for instance to note that a limit is reached and trigger some |
| actions. This array is limited to a maximum of 100 elements: |
| gpc0 to gpc99, to ensure that the build of a peer update |
| message can fit into the buffer. Users should take into consideration |
| that a large amount of counters will increase the data size and the |
| traffic load using the peers protocol since all data/counters are pushed |
| each time any of them is updated. |
| This data_type will exclude the usage of the legacy data_types 'gpc0' |
| and 'gpc1' on the same table. Using the 'gpc' array data_type, all 'gpc0' |
| and 'gpc1' related fetches and actions will apply to the first two |
| elements of this array. An additional example using the array data types |
| is provided below, after the main example. |
| |
| - gpc_rate(<nb>,<period>) : Array of increment rates of General Purpose |
| Counters over a period. Those elements are positive 32-bit integers which |
| may be used for anything. Just like <gpc>, they count events, but instead |
| of keeping a cumulative number, they maintain the rate at which the |
| counter is incremented. Most of the time they will be used to measure the |
| frequency of occurrence of certain events (e.g. requests to a specific |
| URL). This array is limited to a maximum of 100 elements: gpc_rate(100) |
| allowing the storage of the rates of gpc0 to gpc99, to ensure that the |
| build of a peer update message can fit into the buffer. |
| The array cannot contain less than 1 element: use gpc_rate(1,<period>) if |
| you want to store only the rate of the counter gpc0. |
| Users should take into consideration that a large amount of |
| counters will increase the data size and the traffic load using the peers |
| protocol since all data/counters are pushed each time any of them is |
| updated. |
| This data_type will exclude the usage of the legacy data_types |
| 'gpc0_rate' and 'gpc1_rate' on the same table. Using the 'gpc_rate' |
| array data_type, all 'gpc0_rate' and 'gpc1_rate' related fetches and |
| actions will apply to the first two elements of this array. |
| |
| - gpc0 : first General Purpose Counter. It is a positive 32-bit integer |
| which may be used for anything. Most of the time it will be used |
| to put a special tag on some entries, for instance to note that a |
| specific behavior was detected and must be known for future matches. |
| |
| - gpc0_rate(<period>) : increment rate of the first General Purpose Counter |
| over a period. It is a positive 32-bit integer which may be used |
| for anything. Just like <gpc0>, it counts events, but instead of keeping |
| a cumulative number, it maintains the rate at which the counter is |
| incremented. Most of the time it will be used to measure the frequency of |
| occurrence of certain events (e.g. requests to a specific URL). |
| |
| - gpc1 : second General Purpose Counter. It is a positive 32-bit integer |
| which may be used for anything. Most of the time it will be used |
| to put a special tag on some entries, for instance to note that a |
| specific behavior was detected and must be known for future matches. |
| |
| - gpc1_rate(<period>) : increment rate of the second General Purpose Counter |
| over a period. It is a positive 32-bit integer which may be used |
| for anything. Just like <gpc1>, it counts events, but instead of keeping |
| a cumulative number, it maintains the rate at which the counter is |
| incremented. Most of the time it will be used to measure the frequency of |
| occurrence of certain events (e.g. requests to a specific URL). |
| |
| - gpt(<nb>) : General Purpose Tags Array of <nb> elements. This is an array |
| of positive 32-bit integers which may be used for anything. |
| Most of the time they will be used to put special tags on some entries, |
| for instance to note that a specific behavior was detected and must be |
| known for future matches. This array is limited to a maximum of 100 |
| elements: gpt(100) allowing the storage of gpt0 to gpt99, to ensure that |
| the build of a peer update message can fit into the buffer. |
| The array cannot contain less than 1 element: use gpt(1) if you want to |
| store only the tag gpt0. |
| Users should take into consideration that a large amount of counters will |
| increase the data size and the traffic load using the peers protocol since |
| all data/counters are pushed each time any of them is updated. |
| This data_type will exclude the usage of the legacy data_type 'gpt0' |
| on the same table. Using the 'gpt' array data_type, all 'gpt0' related |
| fetches and actions will apply to the first element of this array. |
| |
| - gpt0 : first General Purpose Tag. It is a positive 32-bit integer |
| which may be used for anything. Most of the time it will be used |
| to put a special tag on some entries, for instance to note that a |
| specific behavior was detected and must be known for future matches. |
| |
| - conn_cnt : Connection Count. It is a positive 32-bit integer which counts |
| the absolute number of connections received from clients which matched |
| this entry. It does not mean the connections were accepted, just that |
| they were received. |
| |
| - conn_cur : Current Connections. It is a positive 32-bit integer which |
| stores the concurrent connection counts for the entry. It is incremented |
| once an incoming connection matches the entry, and decremented once the |
| connection leaves. That way it is possible to know at any time the exact |
| number of concurrent connections for an entry. |
| |
| - conn_rate(<period>) : frequency counter (takes 12 bytes). It takes an |
| integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| incoming connection rate over that period, in connections per period. The |
| result is an integer which can be matched using ACLs. |
| |
| - sess_cnt : Session Count. It is a positive 32-bit integer which counts |
| the absolute number of sessions received from clients which matched this |
| entry. A session is a connection that was accepted by the layer 4 rules. |
| |
| - sess_rate(<period>) : frequency counter (takes 12 bytes). It takes an |
| integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| incoming session rate over that period, in sessions per period. The |
| result is an integer which can be matched using ACLs. |
| |
| - http_req_cnt : HTTP request Count. It is a positive 32-bit integer which |
| counts the absolute number of HTTP requests received from clients which |
| matched this entry. It does not matter whether they are valid requests or |
| not. Note that this is different from sessions when keep-alive is used on |
| the client side. |
| |
| - http_req_rate(<period>) : frequency counter (takes 12 bytes). It takes an |
| integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| HTTP request rate over that period, in requests per period. The result is |
| an integer which can be matched using ACLs. It does not matter whether |
| they are valid requests or not. Note that this is different from sessions |
| when keep-alive is used on the client side. |
| |
| - http_err_cnt : HTTP Error Count. It is a positive 32-bit integer which |
| counts the absolute number of HTTP requests errors induced by clients |
| which matched this entry. Errors are counted on invalid and truncated |
| requests, as well as on denied or tarpitted requests, and on failed |
| authentications. If the server responds with 4xx, then the request is |
| also counted as an error since it's an error triggered by the client |
| (e.g. vulnerability scan). |
| |
| - http_err_rate(<period>) : frequency counter (takes 12 bytes). It takes an |
| integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| HTTP request error rate over that period, in requests per period (see |
| http_err_cnt above for what is accounted as an error). The result is an |
| integer which can be matched using ACLs. |
| |
| - http_fail_cnt : HTTP Failure Count. It is a positive 32-bit integer which |
| counts the absolute number of HTTP response failures induced by servers |
| which matched this entry. Errors are counted on invalid and truncated |
| responses, as well as any 5xx response other than 501 or 505. It aims at |
| being used combined with path or URI to detect service failures. |
| |
| - http_fail_rate(<period>) : frequency counter (takes 12 bytes). It takes |
| an integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| HTTP response failure rate over that period, in requests per period (see |
| http_fail_cnt above for what is accounted as a failure). The result is an |
| integer which can be matched using ACLs. |
| |
| - bytes_in_cnt : client to server byte count. It is a positive 64-bit |
| integer which counts the cumulative number of bytes received from clients |
| which matched this entry. Headers are included in the count. This may be |
| used to limit abuse of upload features on photo or video servers. |
| |
| - bytes_in_rate(<period>) : frequency counter (takes 12 bytes). It takes an |
| integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| incoming bytes rate over that period, in bytes per period. It may be used |
| to detect users which upload too much and too fast. Warning: with large |
| uploads, it is possible that the amount of uploaded data will be counted |
| once upon termination, thus causing spikes in the average transfer speed |
| instead of having a smooth one. This may partially be smoothed with |
| "option contstats" though this is not perfect yet. Use of byte_in_cnt is |
| recommended for better fairness. |
| |
| - bytes_out_cnt : server to client byte count. It is a positive 64-bit |
| integer which counts the cumulative number of bytes sent to clients which |
| matched this entry. Headers are included in the count. This may be used |
| to limit abuse of bots sucking the whole site. |
| |
| - bytes_out_rate(<period>) : frequency counter (takes 12 bytes). It takes |
| an integer parameter <period> which indicates in milliseconds the length |
| of the period over which the average is measured. It reports the average |
| outgoing bytes rate over that period, in bytes per period. It may be used |
| to detect users which download too much and too fast. Warning: with large |
| transfers, it is possible that the amount of transferred data will be |
| counted once upon termination, thus causing spikes in the average |
| transfer speed instead of having a smooth one. This may partially be |
| smoothed with "option contstats" though this is not perfect yet. Use of |
| bytes_out_cnt is recommended for better fairness. |
| |
| There is only one stick-table per proxy. At the moment of writing this doc, |
| it does not seem useful to have multiple tables per proxy. If this happens |
| to be required, simply create a dummy backend with a stick-table in it and |
| reference it. |
| |
| It is important to understand that stickiness based on learning information |
| has some limitations, including the fact that all learned associations are |
| lost upon restart unless peers are properly configured to transfer such |
| information upon restart (recommended). In general it can be good as a |
| complement but not always as an exclusive stickiness. |
| |
| Last, memory requirements may be important when storing many data types. |
| Indeed, storing all indicators above at once in each entry requires 116 bytes |
| per entry, or 116 MB for a 1-million entries table. This is definitely not |
| something that can be ignored. |
| |
| Example: |
| # Keep track of counters of up to 1 million IP addresses over 5 minutes |
| # and store a general purpose counter and the average connection rate |
| # computed over a sliding window of 30 seconds. |
| stick-table type ip size 1m expire 5m store gpc0,conn_rate(30s) |
| |
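| As an additional hedged sketch (the proxy names and thresholds are |
| assumptions), the array data types may be combined with a request rate to |
| tag and then reject abusive clients from a frontend : |
| |
| backend st_src |
| stick-table type ip size 1m expire 10m store gpc(2),http_req_rate(10s) |
| |
| frontend fe_main |
| mode http |
| bind :80 |
| http-request track-sc0 src table st_src |
| http-request sc-inc-gpc(0,0) if { sc_http_req_rate(0) gt 100 } |
| http-request deny if { sc_get_gpc(0,0) gt 0 } |
| |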
| See also : "stick match", "stick on", "stick store-request", section 2.5 |
| about time format and section 7 about ACLs. |
| |
| |
| stick store-response <pattern> [table <table>] [{if | unless} <condition>] |
| Define a response pattern used to create an entry in a stickiness table |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| |
| Arguments : |
| <pattern> is a sample expression rule as described in section 7.3. It |
| describes what elements of the response or connection will |
| be analyzed, extracted and stored in the table once a |
| server is selected. |
| |
| <table> is an optional stickiness table name. If unspecified, the same |
| backend's table is used. A stickiness table is declared using |
| the "stick-table" statement. |
| |
| <cond> is an optional storage condition. It makes it possible to store |
| certain criteria only when some conditions are met (or not met). |
| For instance, it could be used to store the SSL session ID only |
| when the response is a SSL server hello. |
| |
| Some protocols or applications require complex stickiness rules and cannot |
| always simply rely on cookies nor hashing. The "stick store-response" |
| statement describes a rule to decide what to extract from the response and |
| when to do it, in order to store it into a stickiness table for further |
| requests to match it using the "stick match" statement. Obviously the |
| extracted part must make sense and have a chance to be matched in a further |
| request. Storing an ID found in a header of a response makes sense. |
| See section 7 for a complete list of possible patterns and transformation |
| rules. |
| |
| The table has to be declared using the "stick-table" statement. It must be of |
| a type compatible with the pattern. By default it is the one which is present |
| in the same backend. It is possible to share a table with other backends by |
| referencing it using the "table" keyword. If another table is referenced, |
| the server IDs inside the backends are used. By default, all server IDs |
| start at 1 in each backend, so the server ordering is enough. But in case of |
| doubt, it is highly recommended to force server IDs using their "id" setting. |
| |
| It is possible to restrict the conditions where a "stick store-response" |
| statement will apply, using "if" or "unless" followed by a condition. This |
| condition will be evaluated while parsing the response, so any criteria can |
| be used. See section 7 for ACL based conditions. |
| |
| There is no limit on the number of "stick store-response" statements, but |
| there is a limit of 8 simultaneous stores per request or response. This |
| makes it possible to store up to 8 criteria, all extracted from either the |
| request or the response, regardless of the number of rules. Only the 8 first |
| ones which match will be kept. Using this, it is possible to feed multiple |
| tables at once in the hope to increase the chance to recognize a user on |
| another protocol or access method. Using multiple store-response rules with |
| the same table is possible and may be used to find the best criterion to rely |
| on, by arranging the rules by decreasing preference order. Only the first |
| extracted criterion for a given table will be stored. All subsequent store- |
| response rules referencing the same table will be skipped and their ACLs will |
| not be evaluated. However, even if a store-request rule references a table, a |
| store-response rule may also use the same table. This means that each table |
| may learn exactly one element from the request and one element from the |
| response at once. |
| |
| The table will contain the real server that processed the request. |
| |
| Example : |
| # Learn SSL session ID from both request and response and create affinity. |
| backend https |
| mode tcp |
| balance roundrobin |
| # maximum SSL session ID length is 32 bytes. |
| stick-table type binary len 32 size 30k expire 30m |
| |
| acl clienthello req.ssl_hello_type 1 |
| acl serverhello res.ssl_hello_type 2 |
| |
| # use tcp content accepts to detect SSL client and server hello. |
| tcp-request inspect-delay 5s |
| tcp-request content accept if clienthello |
| |
| # no timeout on response inspect delay by default. |
| tcp-response content accept if serverhello |
| |
| # SSL session ID (SSLID) may be present on a client or server hello. |
| # Its length is coded on 1 byte at offset 43 and its value starts |
| # at offset 44. |
| |
| # Match and learn on request if client hello. |
| stick on req.payload_lv(43,1) if clienthello |
| |
| # Learn on response if server hello. |
| stick store-response resp.payload_lv(43,1) if serverhello |
| |
| server s1 192.168.1.1:443 |
| server s2 192.168.1.1:443 |
| |
| See also : "stick-table", "stick on", and section 7 about ACLs and pattern |
| extraction. |
| |
| |
| tcp-check comment <string> |
| Defines a comment for the following tcp-check rule, reported in logs if |
| it fails. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <string> is the comment message to add in logs if the following tcp-check |
| rule fails. |
| |
| It only works for connect, send and expect rules. It is useful to make |
| error reporting more user-friendly. |
| |
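| For illustration, a possible ruleset (the port and banner string below |
| are arbitrary) where the comment documents the expect rule following it: |
| |
| option tcp-check |
| tcp-check connect port 110 |
| tcp-check comment "POP3 banner not found" |
| tcp-check expect string +OK |
| |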
| See also : "option tcp-check", "tcp-check connect", "tcp-check send" and |
| "tcp-check expect". |
| |
| |
| tcp-check connect [default] [port <expr>] [addr <ip>] [send-proxy] [via-socks4] |
| [ssl] [sni <sni>] [alpn <alpn>] [linger] |
| [proto <name>] [comment <msg>] |
| Opens a new connection |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| default Use default options of the server line to do the health |
| checks. The server options are used only if not redefined. |
| |
| port <expr> if not set, the check port or the server port is used. |
| It tells HAProxy where to open the connection to. |
| <expr> must be a valid TCP port, i.e. an integer from 1 to |
| 65535, or a sample-fetch expression. |
| |
| addr <ip> defines the IP address to do the health check. |
| |
| send-proxy send a PROXY protocol string |
| |
| via-socks4 enables outgoing health checks using an upstream socks4 proxy. |
| |
| ssl opens a ciphered connection |
| |
| sni <sni> specifies the SNI to use to do health checks over SSL. |
| |
| alpn <alpn> defines which protocols to advertise with ALPN. The protocol |
| list consists of a comma-delimited list of protocol names, |
| for instance: "http/1.1,http/1.0" (without quotes). |
| If it is not set, the server ALPN is used. |
| |
| proto <name> forces the multiplexer's protocol to use for this connection. |
| It must be a TCP mux protocol and it must be usable on the |
| backend side. The list of available protocols is reported in |
| haproxy -vv. |
| |
| linger cleanly close the connection instead of using a single RST. |
| |
| When an application listens on more than a single TCP port or when HAProxy |
| load-balances many services in a single backend, it makes sense to probe |
| all the services individually before considering a server as operational. |
| |
| When no TCP port is configured on the server line and no server port |
| directive is set, then the 'tcp-check connect port <port>' rule must be |
| the first step of the sequence. |
| |
| In a tcp-check ruleset, a 'connect' rule is required and the ruleset must |
| start with one. The purpose is to ensure administrators know exactly what |
| they are doing. |
| |
| Even though a 'connect' must start the ruleset, it may still be preceded |
| by set-var, unset-var or comment rules. |
| |
| Examples : |
| # check HTTP and HTTPS services on a server. |
| # first open port 80 thanks to the server line port directive, then |
| # tcp-check opens port 443, ciphered, and runs a request on it: |
| option tcp-check |
| tcp-check connect |
| tcp-check send GET\ /\ HTTP/1.0\r\n |
| tcp-check send Host:\ haproxy.1wt.eu\r\n |
| tcp-check send \r\n |
| tcp-check expect rstring (2..|3..) |
| tcp-check connect port 443 ssl |
| tcp-check send GET\ /\ HTTP/1.0\r\n |
| tcp-check send Host:\ haproxy.1wt.eu\r\n |
| tcp-check send \r\n |
| tcp-check expect rstring (2..|3..) |
| server www 10.0.0.1 check port 80 |
| |
| # check both POP and IMAP from a single server: |
| option tcp-check |
| tcp-check connect port 110 linger |
| tcp-check expect string +OK\ POP3\ ready |
| tcp-check connect port 143 |
| tcp-check expect string *\ OK\ IMAP4\ ready |
| server mail 10.0.0.1 check |
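| |
| # illustrative only: probe a TLS service with an explicit SNI |
| # (names and addresses below are arbitrary): |
| option tcp-check |
| tcp-check connect port 443 ssl sni www.example.com |
| tcp-check send GET\ /\ HTTP/1.0\r\n |
| tcp-check send \r\n |
| tcp-check expect rstring (2..|3..) |
| server www2 10.0.0.2 check |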
| |
| See also : "option tcp-check", "tcp-check send", "tcp-check expect" |
| |
| |
| tcp-check expect [min-recv <int>] [comment <msg>] |
| [ok-status <st>] [error-status <st>] [tout-status <st>] |
| [on-success <fmt>] [on-error <fmt>] [status-code <expr>] |
| [!] <match> <pattern> |
| Specify data to be collected and analyzed during a generic health check |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| min-recv is optional and can define the minimum amount of data required to |
| evaluate the current expect rule. If the number of received bytes |
| is under this limit, the check will wait for more data. This |
| option can be used to resolve some ambiguous matching rules or to |
| avoid executing costly regex matches on content known to be still |
| incomplete. If an exact string (string or binary) is used, the |
| minimum between the string length and this parameter is used. |
| This parameter is ignored if it is set to -1. If the expect rule |
| does not match, the check will wait for more data. If set to 0, |
| the evaluation result is always conclusive. |
| |
| <match> is a keyword indicating how to look for a specific pattern in the |
| response. The keyword may be one of "string", "rstring", "binary" or |
| "rbinary". |
| The keyword may be preceded by an exclamation mark ("!") to negate |
| the match. Spaces are allowed between the exclamation mark and the |
| keyword. See below for more details on the supported keywords. |
| |
| ok-status <st> is optional and can be used to set the check status if |
| the expect rule is successfully evaluated and if it is |
| the last rule in the tcp-check ruleset. "L7OK", "L7OKC", |
| "L6OK" and "L4OK" are supported : |
| - L7OK : check passed on layer 7 |
| - L7OKC : check conditionally passed on layer 7, set |
| server to NOLB state. |
| - L6OK : check passed on layer 6 |
| - L4OK : check passed on layer 4 |
| By default "L7OK" is used. |
| |
| error-status <st> is optional and can be used to set the check status if |
| an error occurred during the expect rule evaluation. |
| "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are |
| supported : |
| - L7OKC : check conditionally passed on layer 7, set |
| server to NOLB state. |
| - L7RSP : layer 7 invalid response - protocol error |
| - L7STS : layer 7 response error, for example HTTP 5xx |
| - L6RSP : layer 6 invalid response - protocol error |
| - L4CON : layer 1-4 connection problem |
| By default "L7RSP" is used. |
| |
| tout-status <st> is optional and can be used to set the check status if |
| a timeout occurred during the expect rule evaluation. |
| "L7TOUT", "L6TOUT", and "L4TOUT" are supported : |
| - L7TOUT : layer 7 (HTTP/SMTP) timeout |
| - L6TOUT : layer 6 (SSL) timeout |
| - L4TOUT : layer 1-4 timeout |
| By default "L7TOUT" is used. |
| |
| on-success <fmt> is optional and can be used to customize the |
| informational message reported in logs if the expect |
| rule is successfully evaluated and if it is the last rule |
| in the tcp-check ruleset. <fmt> is a log-format string. |
| |
| on-error <fmt> is optional and can be used to customize the |
| informational message reported in logs if an error |
| occurred during the expect rule evaluation. <fmt> is a |
| log-format string. |
| |
| status-code <expr> is optional and can be used to set the check status code |
| reported in logs, on success or on error. <expr> is a |
| standard HAProxy expression formed by a sample-fetch |
| followed by some converters. |
| |
| <pattern> is the pattern to look for. It may be a string or a regular |
| expression. If the pattern contains spaces, they must be escaped |
| with the usual backslash ('\'). |
| If the match is set to binary, then the pattern must be passed as |
| an even number of hexadecimal digits. Each sequence of two digits |
| represents a byte. The hexadecimal digits may be written in upper |
| or lower case. |
| |
| The available matches are intentionally similar to their http-check cousins : |
| |
| string <string> : test the exact string matches in the response buffer. |
| A health check response will be considered valid if the |
| response's buffer contains this exact string. If the |
| "string" keyword is prefixed with "!", then the response |
| will be considered invalid if the body contains this |
| string. This can be used to look for a mandatory pattern |
| in a protocol response, or to detect a failure when a |
| specific error appears in a protocol banner. |
| |
| rstring <regex> : test a regular expression on the response buffer. |
| A health check response will be considered valid if the |
| response's buffer matches this expression. If the |
| "rstring" keyword is prefixed with "!", then the response |
| will be considered invalid if the body matches the |
| expression. |
| |
| string-lf <fmt> : test a log-format string match in the response's buffer. |
| A health check response will be considered valid if the |
| response's buffer contains the string resulting from the |
| evaluation of <fmt>, which follows the log-format rules. |
| If prefixed with "!", then the response will be |
| considered invalid if the buffer contains the string. |
| |
| binary <hexstring> : test the exact string in its hexadecimal form matches |
| in the response buffer. A health check response will |
| be considered valid if the response's buffer contains |
| this exact hexadecimal string. |
| Purpose is to match data on binary protocols. |
| |
| rbinary <regex> : test a regular expression on the response buffer, like |
| "rstring". However, the response buffer is transformed |
| into its hexadecimal form, including NUL-bytes. This |
| allows using all regex engines to match any binary |
| content. The hexadecimal transformation takes twice the |
| size of the original response. As such, the expected |
| pattern should work on at most half the response buffer |
| size. |
| |
| binary-lf <hexfmt> : test a log-format string in its hexadecimal form |
| match in the response's buffer. A health check response |
| will be considered valid if the response's buffer |
| contains the hexadecimal string resulting from the |
| evaluation of <fmt>, which follows the log-format |
| rules. If prefixed with "!", then the response will be |
| considered invalid if the buffer contains the |
| hexadecimal string. The hexadecimal string is converted |
| in a binary string before matching the response's |
| buffer. |
| |
| It is important to note that the responses will be limited to a certain size |
| defined by the global "tune.bufsize" option, which defaults to 16384 bytes. |
| Thus, too large responses may not contain the mandatory pattern when using |
| "string", "rstring" or "binary". If a large response is absolutely required, |
| it is possible to change the default max size by setting this global option. |
| However, it is worth keeping in mind that parsing very large responses can |
| waste some CPU cycles, especially when regular expressions are used, and that |
| it is always better to focus the checks on smaller resources. Also, in its |
| current state, the check will not find any string nor regex past a null |
| character in the response. Similarly it is not possible to request matching |
| the null character. |
| |
| Examples : |
| # perform a POP check |
| option tcp-check |
| tcp-check expect string +OK\ POP3\ ready |
| |
| # perform an IMAP check |
| option tcp-check |
| tcp-check expect string *\ OK\ IMAP4\ ready |
| |
| # look for the redis master server |
| option tcp-check |
| tcp-check send PING\r\n |
| tcp-check expect string +PONG |
| tcp-check send info\ replication\r\n |
| tcp-check expect string role:master |
| tcp-check send QUIT\r\n |
| tcp-check expect string +OK |
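| |
| # illustrative only: wait for at least 4 bytes before evaluating the |
| # rule and report a layer 7 protocol error if the arbitrary "+OK" |
| # banner is not found |
| option tcp-check |
| tcp-check expect min-recv 4 error-status L7RSP string +OK |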
| |
| |
| See also : "option tcp-check", "tcp-check connect", "tcp-check send", |
| "tcp-check send-binary", "http-check expect", tune.bufsize |
| |
| |
| tcp-check send <data> [comment <msg>] |
| tcp-check send-lf <fmt> [comment <msg>] |
| Specify a string or a log-format string to be sent as a question during a |
| generic health check |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| <data> is the string that will be sent during a generic health |
| check session. |
| |
| <fmt> is the log-format string that will be sent, once evaluated, |
| during a generic health check session. |
| |
| Examples : |
| # look for the redis master server |
| option tcp-check |
| tcp-check send info\ replication\r\n |
| tcp-check expect string role:master |
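| |
| # illustrative only: send a string built with log-format rules, here |
| # using the HOSTNAME environment variable (assumed to be set) |
| tcp-check send-lf HELO\ %[env(HOSTNAME)]\r\n |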
| |
| See also : "option tcp-check", "tcp-check connect", "tcp-check expect", |
| "tcp-check send-binary", tune.bufsize |
| |
| |
| tcp-check send-binary <hexstring> [comment <msg>] |
| tcp-check send-binary-lf <hexfmt> [comment <msg>] |
| Specify a hex digit string or a hex digit log-format string to be sent as |
| a binary question during a raw TCP health check |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| comment <msg> defines a message to report if the rule evaluation fails. |
| |
| <hexstring> is the hexadecimal string that will be sent, once converted |
| to binary, during a generic health check session. |
| |
| <hexfmt> is the hexadecimal log-format string that will be sent, once |
| evaluated and converted to binary, during a generic health |
| check session. |
| |
| Examples : |
| # redis check in binary |
| option tcp-check |
| tcp-check send-binary 50494e470d0a # PING\r\n |
| tcp-check expect binary 2b504F4e47 # +PONG |
| |
| |
| See also : "option tcp-check", "tcp-check connect", "tcp-check expect", |
| "tcp-check send", tune.bufsize |
| |
| |
| tcp-check set-var(<var-name>[,<cond>...]) <expr> |
| tcp-check set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| This operation sets the content of a variable. The variable is declared inline. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <var-name> The name of the variable starts with an indication about its |
| scope. The scopes allowed for tcp-check are: |
| "proc" : the variable is shared with the whole process. |
| "sess" : the variable is shared with the tcp-check session. |
| "check": the variable is declared for the lifetime of the tcp-check. |
| This prefix is followed by a name. The separator is a '.'. |
| The name may only contain characters 'a-z', 'A-Z', '0-9', '.', |
| and '-'. |
| |
| <cond> A set of conditions that must all be true for the variable to |
| actually be set (such as "ifnotempty", "ifgt" ...). See the |
| set-var converter's description for a full list of possible |
| conditions. |
| |
| <expr> Is a sample-fetch expression potentially followed by converters. |
| |
| <fmt> This is the value expressed using log-format rules (see Custom |
| Log Format in section 8.2.6). |
| |
| Examples : |
| tcp-check set-var(check.port) int(1234) |
| tcp-check set-var-fmt(check.name) "%H" |
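| # illustrative only: set the port only if the variable is not |
| # already set (the value is arbitrary) |
| tcp-check set-var(check.port,ifnotset) int(8080) |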
| |
| |
| tcp-check unset-var(<var-name>) |
| Free a reference to a variable within its scope. |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| |
| Arguments : |
| <var-name> The name of the variable starts with an indication about its |
| scope. The scopes allowed for tcp-check are: |
| "proc" : the variable is shared with the whole process. |
| "sess" : the variable is shared with the tcp-check session. |
| "check": the variable is declared for the lifetime of the tcp-check. |
| This prefix is followed by a name. The separator is a '.'. |
| The name may only contain characters 'a-z', 'A-Z', '0-9', '.', |
| and '-'. |
| |
| Examples : |
| tcp-check unset-var(check.port) |
| |
| |
| tcp-request connection <action> <options...> [ { if | unless } <condition> ] |
| Perform an action on an incoming connection depending on a layer 4 condition |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | yes | yes | no |
| Arguments : |
| <action> defines the action to perform if the condition applies. See |
| below. |
| |
| <condition> is a standard layer4-only ACL-based condition (see section 7). |
| |
| Immediately after acceptance of a new incoming connection, it is possible to |
| evaluate some conditions to decide whether this connection must be accepted |
| or dropped or have its counters tracked. Those conditions cannot make use of |
| any data contents because the connection has not been read from yet, and the |
| buffers are not yet allocated. This is used to selectively and very quickly |
| accept or drop connections from various sources with a very low overhead. If |
| some contents need to be inspected in order to take the decision, the |
| "tcp-request content" statements must be used instead. |
| |
| The "tcp-request connection" rules are evaluated in their exact declaration |
| order. If no rule matches or if there is no rule, the default action is to |
| accept the incoming connection. There is no specific limit to the number of |
| rules which may be inserted. Any rule may optionally be followed by an |
| ACL-based condition, in which case it will only be evaluated if the condition |
| is true. |
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - accept |
| - expect-netscaler-cip layer4 |
| - expect-proxy layer4 |
| - reject |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - set-dst <expr> |
| - set-dst-port <expr> |
| - set-mark <mark> |
| - set-src <expr> |
| - set-src-port <expr> |
| - set-tos <tos> |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - silent-drop [ rst-ttl <ttl> ] |
| - track-sc0 <key> [table <table>] |
| - track-sc1 <key> [table <table>] |
| - track-sc2 <key> [table <table>] |
| - unset-var(<var-name>) |
| |
| The supported actions are described below. |
| |
| There is no limit to the number of "tcp-request connection" statements per |
| instance. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Note that the "if/unless" condition is optional. If no condition is set on |
| the action, it is simply performed unconditionally. That can be useful for |
| "track-sc*" actions as well as for changing the default action to a reject. |
| |
| Example: accept all connections from white-listed hosts, reject too fast |
| connection without counting them, and track accepted connections. |
| This results in connection rate being capped from abusive sources. |
| |
| tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst } |
| tcp-request connection reject if { src_conn_rate gt 10 } |
| tcp-request connection track-sc0 src |
| |
| Example: accept all connections from white-listed hosts, count all other |
| connections and reject too fast ones. This results in abusive ones |
| being blocked as long as they don't slow down. |
| |
| tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst } |
| tcp-request connection track-sc0 src |
| tcp-request connection reject if { sc0_conn_rate gt 10 } |
| |
| Example: enable the PROXY protocol for traffic coming from all known proxies. |
| |
| tcp-request connection expect-proxy layer4 if { src -f proxies.lst } |
| |
| See section 7 about ACL usage. |
| |
| See also : "tcp-request session", "tcp-request content", "stick-table" |
| |
| tcp-request connection accept [ { if | unless } <condition> ] |
| |
| This is used to accept the connection. No further "tcp-request connection" |
| rules are evaluated. |
| |
| tcp-request connection expect-netscaler-cip layer4 |
| [ { if | unless } <condition> ] |
| |
| This configures the client-facing connection to receive a NetScaler Client IP |
| insertion protocol header before any byte is read from the socket. This is |
| equivalent to having the "accept-netscaler-cip" keyword on the "bind" line, |
| except that using the TCP rule allows the NetScaler Client IP insertion |
| protocol to be accepted only for certain IP address ranges using an ACL. |
| This is convenient when multiple |
| layers of load balancers are passed through by traffic coming from public |
| hosts. |
| |
| tcp-request connection expect-proxy layer4 [ { if | unless } <condition> ] |
| |
| This configures the client-facing connection to receive a PROXY protocol |
| header before any byte is read from the socket. This is equivalent to having |
| the "accept-proxy" keyword on the "bind" line, except that using the TCP rule |
| allows the PROXY protocol to be accepted only for certain IP address ranges |
| using an ACL. This is convenient when multiple layers of load balancers are |
| passed through by traffic coming from public hosts. |
| |
| tcp-request connection reject [ { if | unless } <condition> ] |
| |
| This is used to reject the connection. No further "tcp-request connection" |
| rules are evaluated. Rejected connections do not even become a session, which |
| is why they are accounted separately for in the stats, as "denied |
| connections". They are not considered for the session rate-limit and are not |
| logged either. The reason is that these rules should only be used to filter |
| extremely high connection rates such as the ones encountered during a massive |
| DDoS attack. Under these extreme conditions, the simple action of logging |
| each event would make the system collapse and would considerably lower the |
| filtering capacity. If logging is absolutely desired, then "tcp-request |
| content" rules should be used instead, as "tcp-request session" rules will |
| not log either. |
| |
| tcp-request connection sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-add-gpc" for |
| a complete description. |
| |
| tcp-request connection sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| tcp-request connection sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| tcp-request connection sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| These actions increment the General Purpose Counters according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-inc-gpc", |
| "http-request sc-inc-gpc0" and "http-request sc-inc-gpc1" for a complete |
| description. |
| |
| tcp-request connection sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| tcp-request connection sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| These actions set the 32-bit unsigned General Purpose Tags according to the |
| sticky counter designated by <sc-id>. Please refer to "http-request |
| sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. |
| |
| tcp-request connection set-dst <expr> [ { if | unless } <condition> ] |
| tcp-request connection set-dst-port <expr> [ { if | unless } <condition> ] |
| |
| These actions are used to set the destination IP/Port address to the value of |
| specified expression. Please refer to "http-request set-dst" and |
| "http-request set-dst-port" for a complete description. |
| |
| tcp-request connection set-mark <mark> [ { if | unless } <condition> ] |
| |
| This action is used to set the Netfilter/IPFW MARK in all packets sent to the |
| client to the value passed in <mark> on platforms which support it. Please |
| refer to "http-request set-mark" for a complete description. |
| |
| tcp-request connection set-src <expr> [ { if | unless } <condition> ] |
| tcp-request connection set-src-port <expr> [ { if | unless } <condition> ] |
| |
| These actions are used to set the source IP/Port address to the value of |
| specified expression. Please refer to "http-request set-src" and |
| "http-request set-src-port" for a complete description. |
| |
| tcp-request connection set-tos <tos> [ { if | unless } <condition> ] |
| |
| This is used to set the TOS or DSCP field value of packets sent to the client |
| to the value passed in <tos> on platforms which support this. Please refer to |
| "http-request set-tos" for a complete description. |
| |
| tcp-request connection set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| tcp-request connection set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. "tcp-request connection" can set variables in the "proc" and "sess" |
| scopes. Please refer to "http-request set-var" and "http-request set-var-fmt" |
| for a complete description. |
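| |
| Example (illustrative; stores the incoming destination port in a |
| session-scoped variable for later rules) : |
| tcp-request connection set-var(sess.dport) dst_port |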
| |
| tcp-request connection silent-drop [ rst-ttl <ttl> ] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and makes the client-facing connection |
| suddenly disappear using a system-dependent way that tries to prevent the |
| client from being notified. Please refer to "http-request silent-drop" for a |
| complete description. |
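| |
| Example (illustrative; the file path below is arbitrary) : |
| tcp-request connection silent-drop if { src -f /etc/haproxy/abusers.lst } |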
| |
| tcp-request connection track-sc0 <key> [table <table>] [ { if | unless } <condition> ] |
| tcp-request connection track-sc1 <key> [table <table>] [ { if | unless } <condition> ] |
| tcp-request connection track-sc2 <key> [table <table>] [ { if | unless } <condition> ] |
| |
| This enables tracking of sticky counters from the current connection. Please |
| refer to "http-request track-sc0", "http-request track-sc1" and "http-request |
| track-sc2" for a complete description. |
| |
| tcp-request connection unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. Please refer to "http-request set-var" for |
| details about variables. |
| |
| |
| tcp-request content <action> [{if | unless} <condition>] |
| Perform an action on a new session depending on a layer 4-7 condition |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | yes | yes | yes |
| Arguments : |
| <action> defines the action to perform if the condition applies. See |
| below. |
| |
| <condition> is a standard layer 4-7 ACL-based condition (see section 7). |
| |
| A request's contents can be analyzed at an early stage of request processing |
| called "TCP content inspection". During this stage, ACL-based rules are |
| evaluated every time the request contents are updated, until either an |
| "accept", a "reject" or a "switch-mode" rule matches, or the TCP request |
| inspection delay expires with no matching rule. |
| |
| The first difference between these rules and "tcp-request connection" rules |
| is that "tcp-request content" rules can make use of contents to take a |
| decision. Most often, these decisions will consider a protocol recognition or |
| validity. The second difference is that content-based rules can be used in |
| both frontends and backends. In case of HTTP keep-alive with the client, all |
| tcp-request content rules are evaluated again, so HAProxy keeps a record of |
| what sticky counters were assigned by a "tcp-request connection" versus a |
| "tcp-request content" rule, and flushes all the content-related ones after |
| processing an HTTP request, so that they may be assigned again when the |
| rules are evaluated for the next request. This is of particular importance |
| when the rule tracks some L7 information or when it is conditioned by an |
| L7-based ACL, since tracking may change between requests. |
| |
| Content-based rules are evaluated in their exact declaration order. If no |
| rule matches or if there is no rule, the default action is to accept the |
| contents. There is no specific limit to the number of rules which may be |
| inserted. |
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - accept |
| - capture <sample> len <length> |
| - do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr> |
| - reject |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - send-spoe-group <engine-name> <group-name> |
| - set-bandwidth-limit <name> [limit {<expr> | <size>}] [period {<expr> | <time>}] |
| - set-dst <expr> |
| - set-dst-port <expr> |
| - set-log-level <level> |
| - set-mark <mark> |
| - set-nice <nice> |
| - set-priority-class <expr> |
| - set-priority-offset <expr> |
| - set-src <expr> |
| - set-src-port <expr> |
| - set-tos <tos> |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - silent-drop [ rst-ttl <ttl> ] |
| - switch-mode http [ proto <name> ] |
| - track-sc0 <key> [table <table>] |
| - track-sc1 <key> [table <table>] |
| - track-sc2 <key> [table <table>] |
| - unset-var(<var-name>) |
| - use-service <service-name> |
| |
| The supported actions are described below. |
| |
| While there is nothing mandatory about it, it is recommended to use |
| track-sc0 in "tcp-request connection" rules, track-sc1 for "tcp-request |
| content" rules in the frontend, and track-sc2 for "tcp-request content" |
| rules in the backend, because that makes the configuration more readable |
| and easier to troubleshoot, but this is just a guideline and all counters |
| may be used everywhere. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Note that the "if/unless" condition is optional. If no condition is set on |
| the action, it is simply performed unconditionally. That can be useful for |
| "track-sc*" actions as well as for changing the default action to a reject. |
| |
| Note also that it is recommended to use a "tcp-request session" rule to track |
| information that does *not* depend on Layer 7 contents, especially for HTTP |
| frontends. Some HTTP processing is performed at the session level and may |
| lead to an early rejection of the requests. Thus, the tracking at the |
| content level may be disturbed in such a case. A warning is emitted during |
| startup to prevent, as far as possible, such unreliable usage. |
| |
| It is perfectly possible to match layer 7 contents with "tcp-request content" |
| rules from a TCP proxy, since HTTP-specific ACL matches are able to |
| preliminarily parse the contents of a buffer before extracting the required |
| data. If the buffered contents do not parse as a valid HTTP message, then the |
| ACL does not match. The parser which is involved there is exactly the same |
| as for all other HTTP processing, so there is no risk of parsing something |
| differently. In an HTTP frontend or an HTTP backend, it is guaranteed that |
| HTTP contents will always be immediately present when the rule is evaluated |
| first because the HTTP parsing is performed in the early stages of the |
| connection processing, at the session level. But for such proxies, using |
| "http-request" rules is much more natural and recommended. |
| |
| Tracking layer 7 information is also possible provided that the information |
| is present when the rule is processed. The rule processing engine is able to |
| wait until the inspect delay expires when the data to be tracked is not yet |
| available. |
| |
| Example: |
| tcp-request content use-service lua.deny if { src -f /etc/haproxy/blacklist.lst } |
| |
| Example: |
| |
| tcp-request content set-var(sess.my_var) src |
| tcp-request content set-var-fmt(sess.from) %[src]:%[src_port] |
| tcp-request content unset-var(sess.my_var2) |
| |
| Example: |
| # Accept HTTP requests containing a Host header saying "example.com" |
| # and reject everything else. (Only works for HTTP/1 connections) |
| acl is_host_com hdr(Host) -i example.com |
| tcp-request inspect-delay 30s |
| tcp-request content accept if is_host_com |
| tcp-request content reject |
| |
| # Accept HTTP requests containing a Host header saying "example.com" |
| # and reject everything else. (works for HTTP/1 and HTTP/2 connections) |
| acl is_host_com hdr(Host) -i example.com |
| tcp-request inspect-delay 5s |
| tcp-request switch-mode http if HTTP |
| tcp-request reject # non-HTTP traffic is implicit here |
| ... |
| http-request reject unless is_host_com |
| |
| Example: |
| # reject SMTP connection if client speaks first |
| tcp-request inspect-delay 30s |
| acl content_present req.len gt 0 |
| tcp-request content reject if content_present |
| |
| # Forward HTTPS connection only if client speaks |
| tcp-request inspect-delay 30s |
| acl content_present req.len gt 0 |
| tcp-request content accept if content_present |
| tcp-request content reject |
| |
| Example: |
| # Track the last IP (stick-table type string) from X-Forwarded-For |
| tcp-request inspect-delay 10s |
| tcp-request content track-sc0 hdr(x-forwarded-for,-1) |
| # Or track the last IP (stick-table type ip|ipv6) from X-Forwarded-For |
| tcp-request content track-sc0 req.hdr_ip(x-forwarded-for,-1) |
| |
| Example: |
| # track request counts per "base" (concatenation of Host+URL) |
| tcp-request inspect-delay 10s |
| tcp-request content track-sc0 base table req-rate |
| |
| Example: track per-frontend and per-backend counters, block abusers at the |
| frontend when the backend detects abuse (and marks gpc0). |
| |
| frontend http |
| # Use General Purpose Counter 0 in SC0 as a global abuse counter |
| # protecting all our sites |
| stick-table type ip size 1m expire 5m store gpc0 |
| tcp-request connection track-sc0 src |
| tcp-request connection reject if { sc0_get_gpc0 gt 0 } |
| ... |
| use_backend http_dynamic if { path_end .php } |
| |
| backend http_dynamic |
| # if a source makes too fast requests to this dynamic site (tracked |
| # by SC1), block it globally in the frontend. |
| stick-table type ip size 1m expire 5m store http_req_rate(10s) |
| acl click_too_fast sc1_http_req_rate gt 10 |
| acl mark_as_abuser sc0_inc_gpc0(http) gt 0 |
| tcp-request content track-sc1 src |
| tcp-request content reject if click_too_fast mark_as_abuser |
| |
| See section 7 about ACL usage. |
| |
| See also : "tcp-request connection", "tcp-request session", |
| "tcp-request inspect-delay", and "http-request". |
| |
| tcp-request content accept [ { if | unless } <condition> ] |
| |
| This is used to accept the connection. No further "tcp-request content" |
| rules are evaluated for the current section. |
| |
| tcp-request content capture <sample> len <length> |
| [ { if | unless } <condition> ] |
| |
| This captures sample expression <sample> from the request buffer, and |
| converts it to a string of at most <len> characters. The resulting string is |
| stored into the next request "capture" slot, so it will possibly appear next |
| to some captured HTTP headers. It will then automatically appear in the logs, |
| and it will be possible to extract it using sample fetch rules to feed it |
| into headers or anything. The length should be limited given that this size |
| will be allocated for each capture during the whole session life. Please |
| check section 7.3 (Fetching samples) and "capture request header" for more |
| information. |
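| |
| Example (illustrative; captures the TLS SNI, if any, so that it appears |
| in the logs next to captured headers) : |
| acl clienthello req.ssl_hello_type 1 |
| tcp-request inspect-delay 5s |
| tcp-request content capture req.ssl_sni len 32 if clienthello |
| tcp-request content accept if clienthello |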
| |
| tcp-request content do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr> |
| |
| This action performs a DNS resolution of the output of <expr> and stores the |
| result in the variable <var>. Please refer to "http-request do-resolve" for a |
| complete description. |
| |
| tcp-request content reject [ { if | unless } <condition> ] |
| |
| This is used to reject the connection. No further "tcp-request content" rules |
| are evaluated. |
| |
| tcp-request content sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-add-gpc" for |
| a complete description. |
| |
| tcp-request content sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| tcp-request content sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| tcp-request content sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| These actions increment the General Purpose Counters according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-inc-gpc", |
| "http-request sc-inc-gpc0" and "http-request sc-inc-gpc1" for a complete |
| description. |
| |
| tcp-request content sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| tcp-request content sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| These actions set the 32-bit unsigned General Purpose Tags according to the |
| sticky counter designated by <sc-id>. Please refer to "http-request |
| sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. |
| |
| tcp-request content send-spoe-group <engine-name> <group-name> |
| [ { if | unless } <condition> ] |
| |
| This action is used to trigger the sending of a group of SPOE messages. |
| Please refer to "http-request send-spoe-group" for a complete description. |
| |
| tcp-request content set-bandwidth-limit <name> [limit { <expr> | <size> }] |
| [period { <expr> | <time> }] [ { if | unless } <condition> ] |
| |
| This action is used to enable the bandwidth limitation filter <name>, either |
| on the upload or download direction depending on the filter type. Please |
| refer to "http-request set-bandwidth-limit" for a complete description. |
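| |
| Example (illustrative; assumes a bandwidth limitation filter named |
| "dl-limit" is declared with a "filter bwlim-out" line in this proxy) : |
| tcp-request content set-bandwidth-limit dl-limit limit 1m period 1s |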
| |
| tcp-request content set-dst <expr> [ { if | unless } <condition> ] |
| tcp-request content set-dst-port <expr> [ { if | unless } <condition> ] |
| |
| These actions are used to set the destination IP/Port address to the value of |
| specified expression. Please refer to "http-request set-dst" and |
| "http-request set-dst-port" for a complete description. |
| |
| tcp-request content set-log-level <level> [ { if | unless } <condition> ] |
| |
| This action is used to set the log level of the current session. Please |
| refer to "http-request set-log-level" for a complete description. |
| |
| tcp-request content set-mark <mark> [ { if | unless } <condition> ] |
| |
| This action is used to set the Netfilter/IPFW MARK in all packets sent to the |
| client to the value passed in <mark> on platforms which support it. Please |
| refer to "http-request set-mark" for a complete description. |
| |
| tcp-request content set-nice <nice> [ { if | unless } <condition> ] |
| |
| This sets the "nice" factor of the current request being processed. Please |
| refer to "http-request set-nice" for a complete description. |
| |
| tcp-request content set-priority-class <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the queue priority class of the current request. Please |
| refer to "http-request set-priority-class" for a complete description. |
| |
| tcp-request content set-priority-offset <expr> [ { if | unless } <condition> ] |
| |
| This is used to set the queue priority timestamp offset of the current |
| request. Please refer to "http-request set-priority-offset" for a complete |
| description. |
| |
| tcp-request content set-src <expr> [ { if | unless } <condition> ] |
| tcp-request content set-src-port <expr> [ { if | unless } <condition> ] |
| |
| These actions are used to set the source IP/Port address to the value of |
| specified expression. Please refer to "http-request set-src" and |
| "http-request set-src-port" for a complete description. |
| |
| tcp-request content set-tos <tos> [ { if | unless } <condition> ] |
| |
| This is used to set the TOS or DSCP field value of packets sent to the client |
| to the value passed in <tos> on platforms which support this. Please refer to |
| "http-request set-tos" for a complete description. |
| |
| tcp-request content set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| tcp-request content set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. Please refer to "http-request set-var" and "http-request set-var-fmt" |
| for a complete description. |
| |
| tcp-request content silent-drop [ rst-ttl <ttl> ] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and makes the client-facing connection |
| suddenly disappear using a system-dependent way that tries to prevent the |
| client from being notified. Please refer to "http-request silent-drop" for a |
| complete description. |
| |
| tcp-request content switch-mode http [ proto <name> ] |
| [ { if | unless } <condition> ] |
| |
| This action is used to perform a connection upgrade. Only HTTP upgrades are |
| supported for now. The protocol may optionally be specified. This action is |
| only available for a proxy with the frontend capability. The connection |
| upgrade is performed immediately, and the following "tcp-request content" |
| rules are not evaluated. This upgrade method should be preferred to the |
| implicit one consisting in relying on the backend mode. When used, it is |
| possible to set HTTP |
| directives in a frontend without any warning. These directives will be |
| conditionally evaluated if the HTTP upgrade is performed. However, an HTTP |
| backend must still be selected. It remains unsupported to route an HTTP |
| connection (upgraded or not) to a TCP server. |
| |
| See section 4 about Proxies for more details on HTTP upgrades. |
| |
| tcp-request content track-sc0 <key> [table <table>] [ { if | unless } <condition> ] |
| tcp-request content track-sc1 <key> [table <table>] [ { if | unless } <condition> ] |
| tcp-request content track-sc2 <key> [table <table>] [ { if | unless } <condition> ] |
| |
| This enables tracking of sticky counters from the current connection. Please |
| refer to "http-request track-sc0", "http-request track-sc1" and "http-request |
| track-sc2" for a complete description. |
| |
| tcp-request content unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. Please refer to "http-request set-var" for |
| details about variables. |
| |
| tcp-request content use-service <service-name> [ { if | unless } <condition> ] |
| |
| This action is used to execute a TCP service which will reply to the request |
| and stop the evaluation of the rules. This service may choose to reply by |
| sending any valid response or it may immediately close the connection |
| without sending anything. Besides native services, it is possible to write |
| your own services in Lua. No further "tcp-request content" rules are |
| evaluated. |
| |
| |
| tcp-request inspect-delay <timeout> |
| Set the maximum allowed time to wait for data during content inspection |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | yes | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| People using HAProxy primarily as a TCP relay are often worried about the |
| risk of passing any type of protocol to a server without any analysis. In |
| order to be able to analyze the request contents, we must first withhold |
| the data then analyze them. This statement simply enables withholding of |
| data for at most the specified amount of time. |
| |
| TCP content inspection applies very early when a connection reaches a |
| frontend, then very early when the connection is forwarded to a backend. This |
| means that a connection may experience a first delay in the frontend and a |
| second delay in the backend if both have tcp-request rules. |
| |
| Note that when performing content inspection, HAProxy will evaluate the whole |
| rules for every new chunk which gets in, taking into account the fact that |
| those data are partial. If no rule matches before the aforementioned delay, |
| a last check is performed upon expiration, this time considering that the |
| contents are definitive. If no delay is set, HAProxy will not wait at all |
| and will immediately apply a verdict based on the available information. |
| Obviously this is unlikely to be very useful and might even be racy, so such |
| setups are not recommended. |
| |
| Note that the inspection delay is shortened if a connection error or |
| shutdown is experienced or if the request buffer appears to be full. |
| |
| As soon as a rule matches, the request is released and continues as usual. If |
| the timeout is reached and no rule matches, the default policy will be to let |
| it pass through unaffected. |
| |
| For most protocols, it is enough to set it to a few seconds, as most clients |
| send the full request immediately upon connection. Add 3 or more seconds to |
| cover TCP retransmits but that's all. For some protocols, it may make sense |
| to use large values, for instance to ensure that the client never talks |
| before the server (e.g. SMTP), or to wait for a client to talk before passing |
| data to the server (e.g. SSL). Note that the client timeout must cover at |
| least the inspection delay, otherwise it will expire first. If the client |
| closes the connection or if the buffer is full, the delay immediately expires |
| since the contents will not be able to change anymore. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Proxies inherit this value from their defaults section. |
| |
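| Example (illustrative; only accept connections which start an SSL/TLS |
| handshake within 5 seconds) : |
| tcp-request inspect-delay 5s |
| tcp-request content accept if { req.ssl_hello_type 1 } |
| tcp-request content reject |
| |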
| See also : "tcp-request content accept", "tcp-request content reject", |
| "timeout client". |
| |
| |
| tcp-request session <action> [{if | unless} <condition>] |
| Perform an action on a validated session depending on a layer 5 condition |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | yes | yes | no |
| Arguments : |
| <action> defines the action to perform if the condition applies. See |
| below. |
| |
| <condition> is a standard layer5-only ACL-based condition (see section 7). |
| |
| Once a session is validated (i.e. after all handshakes have been completed), |
| it is possible to evaluate some conditions to decide whether this session |
| must be accepted or dropped or have its counters tracked. Those conditions |
| cannot make use of any data contents because no buffers are allocated yet and |
| the processing cannot wait at this stage. The main use case is to copy some |
| early information into variables (since variables are accessible in the |
| session), or to keep track of some information collected after the handshake, |
| such as SSL-level elements (SNI, ciphers, client cert's CN) or information |
| from the PROXY protocol header (e.g. track a source forwarded this way). The |
| extracted information can thus be copied to a variable or tracked using |
| "track-sc" rules. Of course it is also possible to decide to accept/reject as |
| with other rulesets. Most operations performed here could also be performed |
| in "tcp-request content" rules, except that in HTTP these rules are evaluated |
| for each new request, and that might not always be acceptable. For example a |
| rule might increment a counter on each evaluation. It would also be possible |
| that a country is resolved by geolocation from the source IP address, |
| assigned to a session-wide variable, then the source address rewritten from |
| an HTTP header for all requests. If some contents need to be inspected in |
| order to take the decision, the "tcp-request content" statements must be used |
| instead. |
| |
| The "tcp-request session" rules are evaluated in their exact declaration |
| order. If no rule matches or if there is no rule, the default action is to |
| accept the incoming session. There is no specific limit to the number of |
| rules which may be inserted. |
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - accept |
| - reject |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - set-dst <expr> |
| - set-dst-port <expr> |
| - set-mark <mark> |
| - set-src <expr> |
| - set-src-port <expr> |
| - set-tos <tos> |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - silent-drop [ rst-ttl <ttl> ] |
| - track-sc0 <key> [table <table>] |
| - track-sc1 <key> [table <table>] |
| - track-sc2 <key> [table <table>] |
| - unset-var(<var-name>) |
| |
| The supported actions are described below. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Note that the "if/unless" condition is optional. If no condition is set on |
| the action, it is simply performed unconditionally. That can be useful for |
| "track-sc*" actions as well as for changing the default action to a reject. |
| |
| Example: track the original source address by default, or the one advertised |
| in the PROXY protocol header for connections coming from the local |
| proxies. The first connection-level rule enables receipt of the |
| PROXY protocol for these ones, the second rule tracks whatever |
| address we decide to keep after optional decoding. |
| |
| tcp-request connection expect-proxy layer4 if { src -f proxies.lst } |
| tcp-request session track-sc0 src |
| |
| Example: accept all sessions from white-listed hosts, reject too fast |
| sessions without counting them, and track accepted sessions. |
| This results in session rate being capped from abusive sources. |
| |
| tcp-request session accept if { src -f /etc/haproxy/whitelist.lst } |
| tcp-request session reject if { src_sess_rate gt 10 } |
| tcp-request session track-sc0 src |
| |
| Example: accept all sessions from white-listed hosts, count all other |
| sessions and reject too fast ones. This results in abusive ones |
| being blocked as long as they don't slow down. |
| |
| tcp-request session accept if { src -f /etc/haproxy/whitelist.lst } |
| tcp-request session track-sc0 src |
| tcp-request session reject if { sc0_sess_rate gt 10 } |
| |
| See section 7 about ACL usage. |
| |
| See also : "tcp-request connection", "tcp-request content", "stick-table" |
| |
| tcp-request session accept [ { if | unless } <condition> ] |
| |
| This is used to accept the connection. No further "tcp-request session" |
| rules are evaluated. |
| |
| tcp-request session reject [ { if | unless } <condition> ] |
| |
| This is used to reject the connection. No further "tcp-request session" rules |
| are evaluated. |
| |
| tcp-request session sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-add-gpc" for |
| a complete description. |
| |
| tcp-request session sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| tcp-request session sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| tcp-request session sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| These actions increment the General Purpose Counters according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-inc-gpc", |
| "http-request sc-inc-gpc0" and "http-request sc-inc-gpc1" for a complete |
| description. |
| |
| tcp-request session sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| tcp-request session sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| These actions set the 32-bit unsigned General Purpose Tags according to the |
| sticky counter designated by <sc-id>. Please refer to "tcp-request connection |
| sc-set-gpt" and "tcp-request connection sc-set-gpt0" for a complete |
| description. |
| |
| tcp-request session set-dst <expr> [ { if | unless } <condition> ] |
| tcp-request session set-dst-port <expr> [ { if | unless } <condition> ] |
| |
| These actions are used to set the destination IP/Port address to the value of |
| specified expression. Please refer to "http-request set-dst" and |
| "http-request set-dst-port" for a complete description. |
| |
| tcp-request session set-mark <mark> [ { if | unless } <condition> ] |
| |
| This action is used to set the Netfilter/IPFW MARK in all packets sent to the |
| client to the value passed in <mark> on platforms which support it. Please |
| refer to "http-request set-mark" for a complete description. |
| |
| tcp-request session set-src <expr> [ { if | unless } <condition> ] |
| tcp-request session set-src-port <expr> [ { if | unless } <condition> ] |
| |
| These actions are used to set the source IP/Port address to the value of |
| specified expression. Please refer to "http-request set-src" and |
| "http-request set-src-port" for a complete description. |
| |
| tcp-request session set-tos <tos> [ { if | unless } <condition> ] |
| |
| This is used to set the TOS or DSCP field value of packets sent to the client |
| to the value passed in <tos> on platforms which support this. Please refer to |
| "http-request set-tos" for a complete description. |
| |
| tcp-request session set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| tcp-request session set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. Please refer to "http-request set-var" and "http-request set-var-fmt" |
| for a complete description. |
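| |
| Example: a minimal sketch (the variable names are illustrative) storing the |
| client's /24 network and a formatted source address in session-scoped |
| variables: |
| |
| # illustrative variable names; the "sess" scope lives for the whole session |
| tcp-request session set-var(sess.src_net) src,ipmask(24) |
| tcp-request session set-var-fmt(sess.conn_info) %[src]:%[src_port] |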
| |
| tcp-request session silent-drop [ rst-ttl <ttl> ] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and makes the client-facing connection |
| suddenly disappear using a system-dependent way that tries to prevent the |
| client from being notified. Please refer to "http-request silent-drop" for a |
| complete description. |
| |
| tcp-request session track-sc0 <key> [table <table>] [ { if | unless } <condition> ] |
| tcp-request session track-sc1 <key> [table <table>] [ { if | unless } <condition> ] |
| tcp-request session track-sc2 <key> [table <table>] [ { if | unless } <condition> ] |
| |
| This enables tracking of sticky counters from the current connection. Please |
| refer to "http-request track-sc0", "http-request track-sc1" and "http-request |
| track-sc2" for a complete description. |
| |
| tcp-request session unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. Please refer to "http-request set-var" for |
| details about variables. |
| |
| |
| tcp-response content <action> [{if | unless} <condition>] |
| Perform an action on a session response depending on a layer 4-7 condition |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | no | yes | yes |
| Arguments : |
| <action> defines the action to perform if the condition applies. See |
| below. |
| |
| <condition> is a standard layer 4-7 ACL-based condition (see section 7). |
| |
| Response contents can be analyzed at an early stage of response processing |
| called "TCP content inspection". During this stage, ACL-based rules are |
| evaluated every time the response contents are updated, until either an |
| "accept", "close" or a "reject" rule matches, or a TCP response inspection |
| delay is set and expires with no matching rule. |
| |
| Most often, these decisions will consider a protocol recognition or validity. |
| |
| Content-based rules are evaluated in their exact declaration order. If no |
| rule matches or if there is no rule, the default action is to accept the |
| contents. There is no specific limit to the number of rules which may be |
| inserted. |
| |
| The first keyword is the rule's action. Several types of actions are |
| supported: |
| - accept |
| - close |
| - reject |
| - sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-inc-gpc(<idx>,<sc-id>) |
| - sc-inc-gpc0(<sc-id>) |
| - sc-inc-gpc1(<sc-id>) |
| - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| - sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| - send-spoe-group <engine-name> <group-name> |
| - set-bandwidth-limit <name> [limit {<expr> | <size>}] [period {<expr> | <time>}] |
| - set-log-level <level> |
| - set-mark <mark> |
| - set-nice <nice> |
| - set-tos <tos> |
| - set-var(<var-name>[,<cond>...]) <expr> |
| - set-var-fmt(<var-name>[,<cond>...]) <fmt> |
| - silent-drop [ rst-ttl <ttl> ] |
| - unset-var(<var-name>) |
| |
| The supported actions are described below. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Rules defined in the defaults section are evaluated before ones in the |
| associated proxy section. To avoid ambiguities, in this case the same |
| defaults section cannot be used by proxies with the frontend capability and |
| by proxies with the backend capability. It means a listen section cannot use |
| a defaults section defining such rules. |
| |
| Note that the "if/unless" condition is optional. If no condition is set on |
| the action, it is simply performed unconditionally. That can be useful for |
| changing the default action to a reject. |
| |
| It is perfectly possible to match layer 7 contents with "tcp-response |
| content" rules, but then it is important to ensure that a full response has |
| been buffered, otherwise no contents will match. In order to achieve this, |
| the best solution involves detecting the HTTP protocol during the inspection |
| period. |
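| |
| Example: a minimal sketch mirroring the request-side pattern: accept the |
| response once the server has sent at least one byte within the inspection |
| delay, and reject it otherwise (the delay value is illustrative). |
| |
| tcp-response inspect-delay 15s |
| # the rule below cannot conclude until data arrives or the delay expires |
| tcp-response content accept if { res.len gt 0 } |
| tcp-response content reject |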
| |
| See section 7 about ACL usage. |
| |
| See also : "tcp-request content", "tcp-response inspect-delay" |
| |
| tcp-response content accept [ { if | unless } <condition> ] |
| |
| This is used to accept the response. No further "tcp-response content" rules |
| are evaluated. |
| |
| tcp-response content close [ { if | unless } <condition> ] |
| |
| This is used to immediately close the connection with the server. No further |
| "tcp-response content" rules are evaluated. The main purpose of this action |
| is to force a connection to be finished between a client and a server after |
| an exchange when the application protocol expects some long timeouts to |
| elapse first. The goal is to eliminate idle connections which take |
| significant resources on servers with certain protocols. |
| |
| tcp-response content reject [ { if | unless } <condition> ] |
| |
| This is used to reject the response. No further "tcp-response content" rules |
| are evaluated. |
| |
| tcp-response content sc-add-gpc(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| This action increments the General Purpose Counter according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-add-gpc" for |
| a complete description. |
| |
| tcp-response content sc-inc-gpc(<idx>,<sc-id>) [ { if | unless } <condition> ] |
| tcp-response content sc-inc-gpc0(<sc-id>) [ { if | unless } <condition> ] |
| tcp-response content sc-inc-gpc1(<sc-id>) [ { if | unless } <condition> ] |
| |
| These actions increment the General Purpose Counters according to the sticky |
| counter designated by <sc-id>. Please refer to "http-request sc-inc-gpc", |
| "http-request sc-inc-gpc0" and "http-request sc-inc-gpc1" for a complete |
| description. |
| |
| tcp-response content sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| tcp-response content sc-set-gpt0(<sc-id>) { <int> | <expr> } |
| [ { if | unless } <condition> ] |
| |
| These actions set the 32-bit unsigned General Purpose Tags according to the |
| sticky counter designated by <sc-id>. Please refer to "http-request |
| sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. |
| |
| tcp-response content send-spoe-group <engine-name> <group-name> |
| [ { if | unless } <condition> ] |
| |
| This action is used to trigger sending of a group of SPOE messages. Please |
| refer to "http-request send-spoe-group" for a complete description. |
| |
| |
| tcp-response content set-bandwidth-limit <name> [limit { <expr> | <size> }] |
| [period { <expr> | <time> }] [ { if | unless } <condition> ] |
| |
| This action is used to enable the bandwidth limitation filter <name>, either |
| on the upload or download direction depending on the filter type. Please |
| refer to "http-request set-bandwidth-limit" for a complete description. |
| |
| tcp-response content set-log-level <level> [ { if | unless } <condition> ] |
| |
| This action is used to set the log level of the current session. Please refer |
| to "http-request set-log-level". for a complete description. |
| |
| tcp-response content set-mark <mark> [ { if | unless } <condition> ] |
| |
| This action is used to set the Netfilter/IPFW MARK in all packets sent to the |
| client to the value passed in <mark> on platforms which support it. Please |
| refer to "http-request set-mark" for a complete description. |
| |
| tcp-response content set-nice <nice> [ { if | unless } <condition> ] |
| |
| This sets the "nice" factor of the current request being processed. Please |
| refer to "http-request set-nice" for a complete description. |
| |
| tcp-response content set-tos <tos> [ { if | unless } <condition> ] |
| |
| This is used to set the TOS or DSCP field value of packets sent to the client |
| to the value passed in <tos> on platforms which support this. Please refer to |
| "http-request set-tos" for a complete description. |
| |
| tcp-response content set-var(<var-name>[,<cond>...]) <expr> [ { if | unless } <condition> ] |
| tcp-response content set-var-fmt(<var-name>[,<cond>...]) <fmt> [ { if | unless } <condition> ] |
| |
| This is used to set the contents of a variable. The variable is declared |
| inline. Please refer to "http-request set-var" and "http-request set-var-fmt" |
| for a complete description. |
| |
| tcp-response content silent-drop [ rst-ttl <ttl> ] [ { if | unless } <condition> ] |
| |
| This stops the evaluation of the rules and makes the client-facing connection |
| suddenly disappear using a system-dependent way that tries to prevent the |
| client from being notified. Please refer to "http-request silent-drop" for a |
| complete description. |
| |
| tcp-response content unset-var(<var-name>) [ { if | unless } <condition> ] |
| |
| This is used to unset a variable. Please refer to "http-request set-var" for |
| details about variables. |
| |
| |
| tcp-response inspect-delay <timeout> |
| Set the maximum allowed time to wait for a response during content inspection |
| May be used in sections : defaults | frontend | listen | backend |
| yes(!) | no | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| This directive is only available from named defaults sections, not anonymous |
| ones. Proxies inherit this value from their defaults section. |
| |
| See also : "tcp-response content", "tcp-request inspect-delay". |
| |
| |
| timeout check <timeout> |
| Set additional check timeout, but only after a connection has been already |
| established. |
| |
| May be used in sections: defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments: |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| If set, HAProxy uses min("timeout connect", "inter") as a connect timeout |
| for check and "timeout check" as an additional read timeout. The "min" is |
| used so that people running with *very* long "timeout connect" (e.g. those |
| who needed this due to the queue or tarpit) do not slow down their checks. |
| (Please also note that there is no valid reason to have such long connect |
| timeouts, because "timeout queue" and "timeout tarpit" can always be used to |
| avoid that). |
| |
| If "timeout check" is not set HAProxy uses "inter" for complete check |
| timeout (connect + read) exactly like all <1.3.15 version. |
| |
| In most cases, a check request is much simpler and faster to handle than a |
| normal request, and people may want to kick out laggy servers, so this |
| timeout should be smaller than "timeout server". |
| |
| This parameter is specific to backends, but can be specified once for all in |
| "defaults" sections. This is in fact one of the easiest solutions not to |
| forget about it. |
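| |
| Example: a minimal sketch (server name, address and values are illustrative) |
| using a short additional read timeout for checks: |
| |
| backend be_app |
| timeout connect 5s |
| # checks use min(5s, inter) to connect, then 2s to read the answer |
| timeout check 2s |
| server srv1 192.168.0.10:80 check inter 5s |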
| |
| See also: "timeout connect", "timeout queue", "timeout server", |
| "timeout tarpit". |
| |
| |
| timeout client <timeout> |
| Set the maximum inactivity time on the client side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| The inactivity timeout applies when the client is expected to acknowledge or |
| send data. In HTTP mode, this timeout is particularly important to consider |
| during the first phase, when the client sends the request, and during the |
| response while it is reading data sent by the server. That said, for the |
| first phase, it is preferable to set the "timeout http-request" to better |
| protect HAProxy from Slowloris like attacks. The value is specified in |
| milliseconds by default, but can be in any other unit if the number is |
| suffixed by the unit, as specified at the top of this document. In TCP mode |
| (and to a lesser extent, in HTTP mode), it is highly recommended that the |
| client timeout remains equal to the server timeout in order to avoid complex |
| situations to debug. It is a good practice to cover one or several TCP packet |
| losses by specifying timeouts that are slightly above multiples of 3 seconds |
| (e.g. 4 or 5 seconds). If some long-lived sessions are mixed with short-lived |
| sessions (e.g. WebSocket and HTTP), it's worth considering "timeout tunnel", |
| which overrides "timeout client" and "timeout server" for tunnels, as well as |
| "timeout client-fin" for half-closed connections. |
| |
| This parameter is specific to frontends, but can be specified once for all in |
| "defaults" sections. This is in fact one of the easiest solutions not to |
| forget about it. An unspecified timeout results in an infinite timeout, which |
| is not recommended. Such a usage is accepted and works but reports a warning |
| during startup because it may result in accumulation of expired sessions in |
| the system if the system's timeouts are not configured either. |
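| |
| Example: a minimal sketch (values are illustrative) combining the client |
| timeout with "timeout http-request" to limit exposure to Slowloris-like |
| attacks: |
| |
| defaults |
| mode http |
| timeout client 30s # inactivity timeout |
| timeout http-request 10s # the full request must arrive within 10s |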
| |
| See also : "timeout server", "timeout tunnel", "timeout http-request". |
| |
| |
| timeout client-fin <timeout> |
| Set the inactivity timeout on the client side for half-closed connections. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| The inactivity timeout applies when the client is expected to acknowledge or |
| send data while one direction is already shut down. This timeout is different |
| from "timeout client" in that it only applies to connections which are closed |
| in one direction. This is particularly useful to avoid keeping connections in |
| FIN_WAIT state for too long when clients do not disconnect cleanly. This |
| problem is particularly common with long connections such as RDP or WebSocket. |
| Note that this timeout can override "timeout tunnel" when a connection shuts |
| down in one direction. It is applied to idle HTTP/2 connections once a GOAWAY |
| frame was sent, often indicating an expectation that the connection quickly |
| ends. |
| |
| This parameter is specific to frontends, but can be specified once for all in |
| "defaults" sections. By default it is not set, so half-closed connections |
| will use the other timeouts (timeout.client or timeout.tunnel). |
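| |
| Example: a minimal sketch (names and values are illustrative) for a frontend |
| carrying long-lived WebSocket traffic: |
| |
| frontend fe_ws |
| bind :80 |
| timeout client 30s |
| # do not keep half-closed connections around longer than regular ones |
| timeout client-fin 30s |
| default_backend be_ws |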
| |
| See also : "timeout client", "timeout server-fin", and "timeout tunnel". |
| |
| |
| timeout connect <timeout> |
| Set the maximum time to wait for a connection attempt to a server to succeed. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| If the server is located on the same LAN as HAProxy, the connection should be |
| immediate (less than a few milliseconds). Anyway, it is a good practice to |
| cover one or several TCP packet losses by specifying timeouts that are |
| slightly above multiples of 3 seconds (e.g. 4 or 5 seconds). By default, the |
| connect timeout also presets both queue and tarpit timeouts to the same value |
| if these have not been specified. |
| |
| This parameter is specific to backends, but can be specified once for all in |
| "defaults" sections. This is in fact one of the easiest solutions not to |
| forget about it. An unspecified timeout results in an infinite timeout, which |
| is not recommended. Such a usage is accepted and works but reports a warning |
| during startup because it may result in accumulation of failed sessions in |
| the system if the system's timeouts are not configured either. |
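| |
| Example: a minimal sketch (name, address and values are illustrative); the |
| 5-second value stays slightly above the initial 3-second TCP retransmit |
| delay, as suggested above: |
| |
| backend be_app |
| timeout connect 5s |
| timeout server 30s |
| server srv1 192.168.0.10:80 check |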
| |
| See also: "timeout check", "timeout queue", "timeout server", "timeout tarpit". |
| |
| |
| timeout http-keep-alive <timeout> |
| Set the maximum allowed time to wait for a new HTTP request to appear |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| By default, the time to wait for a new request in case of keep-alive is set |
| by "timeout http-request". However this is not always convenient because some |
| people want very short keep-alive timeouts in order to release connections |
| faster, and others prefer to have larger ones but still have short timeouts |
| once the request has started to present itself. |
| |
| The "http-keep-alive" timeout covers these needs. It will define how long to |
| wait for a new HTTP request to start coming after a response was sent. Once |
| the first byte of request has been seen, the "http-request" timeout is used |
| to wait for the complete request to come. Note that empty lines prior to a |
| new request do not refresh the timeout and are not counted as a new request. |
| |
| There is also another difference between the two timeouts : when a connection |
| expires during timeout http-keep-alive, no error is returned, the connection |
| just closes. If the connection expires in "http-request" while waiting for a |
| request to complete, an HTTP 408 error is returned to the client before |
| closing the connection, unless "option http-ignore-probes" is set in the |
| frontend. |
| |
| In general "timeout http-keep-alive" is best used to prevent clients from |
| holding open an otherwise idle connection too long on sites seeing large |
| amounts of short connections. This can be accomplished by setting the value |
| to a few tens to hundreds of milliseconds in HTTP/1.1. This will close the |
| connection after the client requests a page without having to hold that |
| connection open to wait for more activity from the client. In that scenario, |
| a new activity from the browser would result in a new handshake at the TCP |
| and/or SSL layer. A common use case for this is HTTP sites serving only a |
| redirect to the HTTPS page. Such connections are better not kept idle too |
| long because they won't be reused, unless maybe to fetch a favicon. |
| |
| Another use case is the exact opposite: some sites want to permit clients |
| to reuse idle connections for a long time (e.g. 30 seconds to one minute) but |
| do not want to wait that long for the first request, in order to avoid a very |
| inexpensive attack vector. In this case, the http-keep-alive timeout would be |
| set to a large value, but http-request would remain low (a few seconds). |
| |
| When set to a very small value, additional requests are likely to be handled |
| over another connection unless they are truly pipelined, which is very rare |
| with HTTP/1.1 (requests being sent back-to-back without waiting for a |
| response). Most HTTP/1.1 implementations |
| send a request, wait for a response and then send another request. A small |
| value here for HTTP/1.1 may be advantageous to use less memory and sockets |
| for sites with hundreds of thousands of clients, at the expense of an |
| increase in handshake computation costs. |
| |
| Special care should be taken with small values when dealing with HTTP/2. The |
| nature of HTTP/2 is to multiplex requests over a connection in order to save |
| on the overhead of reconnecting the TCP and/or SSL layers. The protocol also |
| uses control frames which cope poorly with early TCP connection closures; on |
| very rare occasions this may result in truncated responses when data are |
| destroyed in flight after leaving HAProxy (which then cannot even log an |
| error). A suggested low starting value for HTTP/2 connections would be around |
| 4 seconds. This would prevent most modern keep-alive implementations from |
| needlessly holding open stale connections, and at the same time would allow |
| subsequent requests to reuse the connection. However, this should be adjusted |
| as needed and is simply a starting point. |
| |
| If this parameter is not set, the "http-request" timeout applies, and if both |
| are not set, "timeout client" still applies at the lower level. It should be |
| set in the frontend to take effect, unless the frontend is in TCP mode, in |
| which case the HTTP backend's timeout will be used. |
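| |
| Example: a minimal sketch (names, paths and values are illustrative) of the |
| two opposite strategies described above: |
| |
| # redirect-only HTTP frontend: no point keeping idle connections |
| frontend fe_redirect |
| bind :80 |
| timeout client 10s |
| timeout http-request 5s |
| timeout http-keep-alive 200ms |
| http-request redirect scheme https |
| |
| # main HTTPS frontend: long keep-alive but short per-request timeout |
| frontend fe_https |
| bind :443 ssl crt /etc/haproxy/site.pem |
| timeout client 30s |
| timeout http-request 5s |
| timeout http-keep-alive 1m |
| default_backend be_app |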
| |
| See also : "timeout http-request", "timeout client". |
| |
| |
| timeout http-request <timeout> |
| Set the maximum allowed time to wait for a complete HTTP request |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| In order to offer DoS protection, it may be required to lower the maximum |
| accepted time to receive a complete HTTP request without affecting the client |
| timeout. This helps protecting against established connections on which |
| nothing is sent. The client timeout cannot offer a good protection against |
| this abuse because it is an inactivity timeout, which means that if the |
| attacker sends one character every now and then, the timeout will not |
| trigger. With the HTTP request timeout, no matter what speed the client |
| types, the request will be aborted if it does not complete in time. When the |
| timeout expires, an HTTP 408 response is sent to the client to inform it |
| about the problem, and the connection is closed. The logs will report |
| termination codes "cR". Some recent browsers are having problems with this |
| standard, well-documented behavior, so it might be needed to hide the 408 |
| code using "option http-ignore-probes" or "errorfile 408 /dev/null". See |
| more details in the explanations of the "cR" termination code in section 8.5. |
| |
| By default, this timeout only applies to the header part of the request, |
| and not to any data. As soon as the empty line is received, this timeout is |
| not used anymore. When combined with "option http-buffer-request", this |
| timeout also applies to the body of the request. It is used again on |
| keep-alive connections to wait for a second request if |
| "timeout http-keep-alive" is not set. |
| |
| Generally it is enough to set it to a few seconds, as most clients send the |
| full request immediately upon connection. Add 3 or more seconds to cover TCP |
| retransmits but that's all. Setting it to very low values (e.g. 50 ms) will |
| generally work on local networks as long as there are no packet losses. This |
| will prevent people from sending bare HTTP requests using telnet. |
| |
| If this parameter is not set, the client timeout still applies between each |
| chunk of the incoming request. It should be set in the frontend to take |
| effect, unless the frontend is in TCP mode, in which case the HTTP backend's |
| timeout will be used. |
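| |
| Example: a minimal sketch (values are illustrative) also covering the request |
| body thanks to "option http-buffer-request": |
| |
| frontend fe_www |
| bind :80 |
| option http-buffer-request |
| timeout client 30s |
| timeout http-request 10s # headers (and body, here) within 10s |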
| |
| See also : "errorfile", "http-ignore-probes", "timeout http-keep-alive", and |
| "timeout client", "option http-buffer-request". |
| |
| |
| timeout queue <timeout> |
| Set the maximum time to wait in the queue for a connection slot to be free |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| When a server's maxconn is reached, connections are left pending in a queue |
| which may be server-specific or global to the backend. In order not to wait |
| indefinitely, a timeout is applied to requests pending in the queue. If the |
| timeout is reached, it is considered that the request will almost never be |
| served, so it is dropped and a 503 error is returned to the client. |
| |
| The "timeout queue" statement allows to fix the maximum time for a request to |
| be left pending in a queue. If unspecified, the same value as the backend's |
| connection timeout ("timeout connect") is used, for backwards compatibility |
| with older versions with no "timeout queue" parameter. |
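| |
| Example: a minimal sketch (name, address and values are illustrative); a |
| request queued for more than 30 seconds gets a 503 instead of waiting |
| indefinitely: |
| |
| backend be_app |
| timeout connect 5s |
| timeout queue 30s |
| server srv1 192.168.0.10:80 maxconn 100 check |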
| |
| See also : "timeout connect". |
| |
| |
| timeout server <timeout> |
| Set the maximum inactivity time on the server side. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| The inactivity timeout applies when the server is expected to acknowledge or |
| send data. In HTTP mode, this timeout is particularly important to consider |
| during the first phase of the server's response, when it has to send the |
| headers, as it directly represents the server's processing time for the |
| request. To find out what value to put there, it's often good to start with |
| what would be considered as unacceptable response times, then check the logs |
| to observe the response time distribution, and adjust the value accordingly. |
| |
| The value is specified in milliseconds by default, but can be in any other |
| unit if the number is suffixed by the unit, as specified at the top of this |
| document. In TCP mode (and to a lesser extent, in HTTP mode), it is highly |
| recommended that the client timeout remains equal to the server timeout in |
| order to avoid complex situations to debug. Whatever the expected server |
| response times, it is a good practice to cover at least one or several TCP |
| packet losses by specifying timeouts that are slightly above multiples of 3 |
| seconds (e.g. 4 or 5 seconds minimum). If some long-lived sessions are mixed |
| with short-lived sessions (e.g. WebSocket and HTTP), it's worth considering |
| "timeout tunnel", which overrides "timeout client" and "timeout server" for |
| tunnels. |
| |
| This parameter is specific to backends, but can be specified once for all in |
| "defaults" sections. This is in fact one of the easiest solutions not to |
| forget about it. An unspecified timeout results in an infinite timeout, which |
| is not recommended. Such a usage is accepted and works but reports a warning |
| during startup because it may result in accumulation of expired sessions in |
| the system if the system's timeouts are not configured either. |
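| |
| Example: a minimal sketch (values are illustrative) keeping client and server |
| timeouts equal, with a longer tunnel timeout for upgraded connections: |
| |
| defaults |
| mode http |
| timeout client 30s |
| timeout server 30s |
| timeout tunnel 1h |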
| |
| See also : "timeout client" and "timeout tunnel". |
| |
| |
| timeout server-fin <timeout> |
| Set the inactivity timeout on the server side for half-closed connections. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| The inactivity timeout applies when the server is expected to acknowledge or |
| send data while one direction is already shut down. This timeout is different |
| from "timeout server" in that it only applies to connections which are closed |
| in one direction. This is particularly useful to avoid keeping connections in |
| FIN_WAIT state for too long when a remote server does not disconnect cleanly. |
| This problem is particularly common with long connections such as RDP or |
| WebSocket. |
| Note that this timeout can override "timeout tunnel" when a connection shuts |
| down in one direction. This setting was provided for completeness, but in most |
| situations, it should not be needed. |
| |
| This parameter is specific to backends, but can be specified once for all in |
| "defaults" sections. By default it is not set, so half-closed connections |
| will use the other timeouts (timeout.server or timeout.tunnel). |
| |
| See also : "timeout client-fin", "timeout server", and "timeout tunnel". |
| |
| |
| timeout tarpit <timeout> |
| Set the duration for which tarpitted connections will be maintained |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | yes |
| Arguments : |
| <timeout> is the tarpit duration specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| When a connection is tarpitted using "http-request tarpit", it is maintained |
| open with no activity for a certain amount of time, then closed. "timeout |
| tarpit" defines how long it will be maintained open. |
| |
| The value is specified in milliseconds by default, but can be in any other |
| unit if the number is suffixed by the unit, as specified at the top of this |
| document. If unspecified, the same value as the backend's connection timeout |
| ("timeout connect") is used, for backwards compatibility with older versions |
| with no "timeout tarpit" parameter. |
| |
| See also : "timeout connect". |
| |
| |
| timeout tunnel <timeout> |
| Set the maximum inactivity time on the client and server side for tunnels. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : |
| <timeout> is the timeout value specified in milliseconds by default, but |
| can be in any other unit if the number is suffixed by the unit, |
| as explained at the top of this document. |
| |
| The tunnel timeout applies when a bidirectional connection is established |
| between a client and a server, and the connection remains inactive in both |
| directions. This timeout supersedes both the client and server timeouts once |
| the connection becomes a tunnel. In TCP, this timeout is used as soon as no |
| analyzer remains attached to either connection (e.g. tcp content rules are |
| accepted). In HTTP, this timeout is used when a connection is upgraded (e.g. |
| when switching to the WebSocket protocol, or forwarding a CONNECT request |
| to a proxy), or after the first response when no keepalive/close option is |
| specified. |
| |
| Since this timeout is usually used in conjunction with long-lived connections, |
| it usually is a good idea to also set "timeout client-fin" to handle the |
| situation where a client suddenly disappears from the net and does not |
| acknowledge a close, or sends a shutdown and does not acknowledge pending |
| data anymore. This can happen in lossy networks where firewalls are present, |
| and is detected by the presence of large amounts of sessions in a FIN_WAIT |
| state. |
| |
| The value is specified in milliseconds by default, but can be in any other |
| unit if the number is suffixed by the unit, as specified at the top of this |
| document. Whatever the expected normal idle time, it is a good practice to |
| cover at least one or several TCP packet losses by specifying timeouts that |
| are slightly above multiples of 3 seconds (e.g. 4 or 5 seconds minimum). |
| |
| This parameter is specific to backends, but can be specified once for all in |
| "defaults" sections. This is in fact one of the easiest solutions not to |
| forget about it. |
| |
| Example : |
| defaults http |
| option http-server-close |
| timeout connect 5s |
| timeout client 30s |
| timeout client-fin 30s |
| timeout server 30s |
| timeout tunnel 1h # timeout to use with WebSocket and CONNECT |
| |
| See also : "timeout client", "timeout client-fin", "timeout server". |
| |
| |
| transparent (deprecated) |
| Enable client-side transparent proxying |
| May be used in sections : defaults | frontend | listen | backend |
| yes | no | yes | yes |
| Arguments : none |
| |
| This keyword was introduced in order to provide layer 7 persistence to layer |
| 3 load balancers. The idea is to use the OS's ability to redirect an incoming |
| connection for a remote address to a local process (here HAProxy), and let |
| this process know what address was initially requested. When this option is |
| used, sessions without cookies will be forwarded to the original destination |
| IP address of the incoming request (which should match that of another |
| equipment), while requests with cookies will still be forwarded to the |
| appropriate server. |
| |
| The "transparent" keyword is deprecated, use "option transparent" instead. |
| |
| Note that contrary to a common belief, this option does NOT make HAProxy |
| present the client's IP to the server when establishing the connection. |
| |
| See also: "option transparent" |
| |
| unique-id-format <string> |
| Generate a unique ID for each request. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <string> is a log-format string. |
| |
| This keyword creates an ID for each request using the custom log format. A |
| unique ID is useful to trace a request passing through many components of |
| a complex infrastructure. The newly created ID may also be logged using the |
| %ID tag in the log-format string. |
| |
| The format should be composed from elements that are guaranteed to be |
| unique when combined together. For instance, if multiple HAProxy instances |
| are involved, it might be important to include the node name. It is often |
| needed to log the incoming connection's source and destination addresses |
| and ports. Note that since multiple requests may be performed over the same |
| connection, including a request counter may help differentiate them. |
| Similarly, a timestamp may protect against a rollover of the counter. |
| Logging the process ID will avoid collisions after a service restart. |
| |
| It is recommended to use hexadecimal notation for many fields since it |
| makes them more compact and saves space in logs. |
| |
| Example: |
| |
| unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid |
| |
| will generate: |
| |
| 7F000001:8296_7F00001E:1F90_4F7B0A69_0003:790A |
| |
| See also: "unique-id-header" |
| |
| unique-id-header <name> |
| Add a unique ID header in the HTTP request. |
| May be used in sections : defaults | frontend | listen | backend |
| yes | yes | yes | no |
| Arguments : |
| <name> is the name of the header. |
| |
| Add a unique-id header in the HTTP request sent to the server, using the |
| "unique-id-format". It has no effect if "unique-id-format" is not set. |
| |
| Example: |
| |
| unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid |
| unique-id-header X-Unique-ID |
| |
| will generate: |
| |
| X-Unique-ID: 7F000001:8296_7F00001E:1F90_4F7B0A69_0003:790A |
| |
| See also: "unique-id-format" |
| |
| use_backend <backend> [{if | unless} <condition>] |
| Switch to a specific backend if/unless an ACL-based condition is matched. |
| May be used in sections : defaults | frontend | listen | backend |
| no | yes | yes | no |
| Arguments : |
| <backend> is the name of a valid backend or "listen" section, or a |
| "log-format" string resolving to a backend name. |
| |
| <condition> is a condition composed of ACLs, as described in section 7. If |
| it is omitted, the rule is unconditionally applied. |
| |
| When doing content-switching, connections arrive on a frontend and are then |
| dispatched to various backends depending on a number of conditions. The |
| relation between the conditions and the backends is described with the |
| "use_backend" keyword. While it is normally used with HTTP processing, it can |
| also be used in pure TCP, either without content using stateless ACLs (e.g. |
| source address validation) or combined with a "tcp-request" rule to wait for |
| some payload. |
| |
| There may be as many "use_backend" rules as desired. All of these rules are |
| evaluated in their declaration order, and the first one which matches will |
| assign the backend. |
| |
| In the first form, the backend will be used if the condition is met. In the |
| second form, the backend will be used if the condition is not met. If no |
| condition is valid, the backend defined with "default_backend" will be used. |
| If no default backend is defined, either the servers in the same section are |
| used (in case of a "listen" section) or, in case of a frontend, no server is |
| used and a 503 service unavailable response is returned. |
| |
| Note that it is possible to switch from a TCP frontend to an HTTP backend. In |
| this case, either the frontend has already checked that the protocol is HTTP, |
| and backend processing will immediately follow, or the backend will wait for |
| a complete HTTP request to get in. This feature is useful when a frontend |
| must decode several protocols on a unique port, one of them being HTTP. |
| |
| When <backend> is a simple name, it is resolved at configuration time, and an |
| error is reported if the specified backend does not exist. If <backend> is |
| a log-format string instead, no check may be done at configuration time, so |
| the backend name is resolved dynamically at run time. If the resulting |
| backend name does not correspond to any valid backend, no other rule is |
| evaluated, and the default_backend directive is applied instead. Note that |
| when using dynamic backend names, it is highly recommended to use a prefix |
| that no other backend uses in order to ensure that an unauthorized backend |
| cannot be forced from the request. |
| |
| It is worth mentioning that "use_backend" rules with an explicit name are |
| used to detect the association between frontends and backends to compute the |
| backend's "fullconn" setting. This cannot be done for dynamic names. |
| |
| See also: "default_backend", "tcp-request", "fullconn", "log-format", and |
| section 7 about ACLs. |
| |
| use-fcgi-app <name> |
| Defines the FastCGI application to use for the backend. |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| Arguments : |
| <name> is the name of the FastCGI application to use. |
| |
| See section 10.1 about FastCGI application setup for details. |
| |
| use-server <server> if <condition> |
| use-server <server> unless <condition> |
| Only use a specific server if/unless an ACL-based condition is matched. |
| May be used in sections : defaults | frontend | listen | backend |
| no | no | yes | yes |
| Arguments : |
| <server> is the name of a valid server in the same backend section |
| or a "log-format" string resolving to a server name. |
| |
| <condition> is a condition composed of ACLs, as described in section 7. |
| |
| By default, connections which arrive to a backend are load-balanced across |
| the available servers according to the configured algorithm, unless a |
| persistence mechanism such as a cookie is used and found in the request. |
| |
| Sometimes it is desirable to forward a particular request to a specific |
| server without having to declare a dedicated backend for this server. This |
| can be achieved using the "use-server" rules. These rules are evaluated after |
| the "redirect" rules and before evaluating cookies, and they have precedence |
| on them. There may be as many "use-server" rules as desired. All of these |
| rules are evaluated in their declaration order, and the first one which |
| matches will assign the server. |
| |
| If a rule designates a server which is down, and "option persist" is not used |
| and no force-persist rule was validated, it is ignored and evaluation goes on |
| with the next rules until one matches. |
| |
| In the first form, the server will be used if the condition is met. In the |
| second form, the server will be used if the condition is not met. If no |
| condition is valid, the processing continues and the server will be assigned |
| according to other persistence mechanisms. |
| |
| Note that even if a rule is matched, cookie processing is still performed but |
| does not assign the server. This allows prefixed cookies to have their prefix |
| stripped. |
| |
| The "use-server" statement works both in HTTP and TCP mode. This makes it |
| suitable for use with content-based inspection. For instance, a server could |
| be selected in a farm according to the TLS SNI field when using protocols with |
| implicit TLS (also see "req.ssl_sni"). And if these servers have their weight |
| set to zero, they will not be used for other traffic. |
| |
| Example : |
| # intercept incoming TLS requests based on the SNI field |
| use-server www if { req.ssl_sni -i www.example.com } |
| server www 192.168.0.1:443 weight 0 |
| use-server mail if { req.ssl_sni -i mail.example.com } |
| server mail 192.168.0.1:465 weight 0 |
| use-server imap if { req.ssl_sni -i imap.example.com } |
| server imap 192.168.0.1:993 weight 0 |
| # all the rest is forwarded to this server |
| server default 192.168.0.2:443 check |
| |
| When <server> is a simple name, it is checked against existing servers in the |
| configuration and an error is reported if the specified server does not exist. |
| If it is a log-format, no check is performed when parsing the configuration, |
| and if we can't resolve a valid server name at runtime but the use-server rule |
| was conditioned by an ACL returning true, no other use-server rule is applied |
| and we fall back to load balancing. |
| |
| See also: "use_backend", section 5 about server and section 7 about ACLs. |
| |
| |
| 5. Bind and server options |
| -------------------------- |
| |
| The "bind", "server" and "default-server" keywords support a number of settings |
| depending on some build options and on the system HAProxy was built on. These |
| settings generally each consist of one word, sometimes followed by a value, |
| written on the same line as the "bind" or "server" line. All these options are |
| described in this section. |
| |
| |
| 5.1. Bind options |
| ----------------- |
| |
| The "bind" keyword supports a certain number of settings which are all passed |
| as arguments on the same line. The order in which those arguments appear does |
| not matter, provided that they appear after the bind address. All of these |
| parameters are optional. Some of them consist of a single word (booleans), |
| while others expect a value after them. In this case, the value must be |
| provided immediately after the setting name. |
| |
| The currently supported settings are the following ones. |
| |
| accept-netscaler-cip <magic number> |
| Enforces the use of the NetScaler Client IP insertion protocol over any |
| connection accepted by any of the TCP sockets declared on the same line. The |
| NetScaler Client IP insertion protocol dictates the layer 3/4 addresses of |
| the incoming connection to be used everywhere an address is used, with the |
| only exception of "tcp-request connection" rules which will only see the |
| real connection address. Logs will reflect the addresses indicated in the |
| protocol, unless it is violated, in which case the real address will still |
| be used. This keyword combined with support from external components can be |
| used as an efficient and reliable alternative to the X-Forwarded-For |
| mechanism which is not always reliable and not even always usable. See also |
| "tcp-request connection expect-netscaler-cip" for a finer-grained setting of |
| which client is allowed to use the protocol. |
| |
| accept-proxy |
| Enforces the use of the PROXY protocol over any connection accepted by any of |
| the sockets declared on the same line. Versions 1 and 2 of the PROXY protocol |
| are supported and correctly detected. The PROXY protocol dictates the layer |
| 3/4 addresses of the incoming connection to be used everywhere an address is |
| used, with the only exception of "tcp-request connection" rules which will |
| only see the real connection address. Logs will reflect the addresses |
| indicated in the protocol, unless it is violated, in which case the real |
| address will still be used. This keyword combined with support from external |
| components can be used as an efficient and reliable alternative to the |
| X-Forwarded-For mechanism which is not always reliable and not even always |
| usable. See also "tcp-request connection expect-proxy" for a finer-grained |
| setting of which client is allowed to use the protocol. |
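| |
| Example: a minimal sketch (the address and certificate path are |
| illustrative); only local load balancers reach this socket, so the PROXY |
| protocol is required on every connection: |
| |
| bind 192.168.0.1:443 accept-proxy ssl crt /etc/haproxy/site.pem |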
| |
| allow-0rtt |
| Allow receiving early data when using TLSv1.3. This is disabled by default, |
| due to security considerations. Because it is vulnerable to replay attacks, |
| you should only allow it for requests that are safe to replay, i.e. requests |
| that are idempotent. You can use the "wait-for-handshake" action for any |
| request that wouldn't be safe with early data. |
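| |
| Example: a minimal sketch (the certificate path is illustrative) only letting |
| idempotent requests proceed before the handshake completes (the predefined |
| ACL METH_GET covers GET and HEAD): |
| |
| bind :443 ssl crt /etc/haproxy/site.pem allow-0rtt alpn h2,http/1.1 |
| http-request wait-for-handshake if ! METH_GET |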
| |
| alpn <protocols> |
| This enables the TLS ALPN extension and advertises the specified protocol |
| list as supported on top of ALPN. The protocol list consists in a comma- |
| delimited list of protocol names, for instance: "http/1.1,http/1.0" (without |
| quotes). This requires that the SSL library is built with support for TLS |
| extensions enabled (check with haproxy -vv). The ALPN extension replaces the |
| initial NPN extension. At the protocol layer, ALPN is required to enable |
| HTTP/2 on an HTTPS frontend and HTTP/3 on a QUIC frontend. However, when such |
| frontends have none of "npn", "alpn" and "no-alpn" set, a default value of |
| "h2,http/1.1" will be used for a regular HTTPS frontend, and "h3" for a QUIC |
| frontend. Versions of OpenSSL prior to 1.0.2 didn't support ALPN and only |
| supported the now obsolete NPN extension. At the time of writing this, most |
| browsers still support both ALPN and NPN for HTTP/2 so a fallback to NPN may |
| still work for a while. But ALPN must be used whenever possible. Protocols |
| not advertised are not negotiated. For example it is possible to only accept |
| HTTP/2 connections with this: |
| |
| bind :443 ssl crt pub.pem alpn h2 # explicitly disable HTTP/1.1 |
| |
| QUIC supports only h3 and hq-interop as ALPN. h3 is for HTTP/3 and hq-interop |
| is used for http/0.9 and QUIC interop runner (see https://interop.seemann.io). |
| Each "alpn" statement will replace a previous one. In order to remove them, |
| use "no-alpn". |
| |
| Note that some old browsers such as Firefox 88 used to experience issues with |
| WebSocket over H2, and in case such a setup is encountered, it may be needed |
| to either explicitly disable HTTP/2 in the "alpn" string by forcing it to |
| "http/1.1" or "no-alpn", or to enable "h2-workaround-bogus-websocket-clients" |
| globally. |
| |
| backlog <backlog> |
| Sets the socket's backlog to this value. If unspecified or 0, the frontend's |
| backlog is used instead, which generally defaults to the maxconn value. |
| |
| ca-file <cafile> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a PEM file from which to load CA certificates used to verify |
| client's certificate. It is possible to load a directory containing multiple |
| CAs; in this case HAProxy will try to load every ".pem", ".crt", ".cer", and |
| ".crl" available in the directory; files starting with a dot are ignored. |
| |
| Warning: The "@system-ca" parameter could be used in place of the cafile |
| in order to use the trusted CAs of your system, like its done with the server |
| directive. But you mustn't use it unless you know what you are doing. |
| Configuring it this way basically mean that the bind will accept any client |
| certificate generated from one of the CA present on your system, which is |
| extremely insecure. |
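| |
| Example: a minimal sketch (file names are illustrative) requiring client |
| certificates signed by a private CA: |
| |
| bind :443 ssl crt site.pem ca-file ca-clients.pem verify required |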
| |
| ca-ignore-err [all|<errorID>,...] |
| This setting is only available when support for OpenSSL was built in. |
| Sets a comma separated list of errorIDs to ignore during verify at depth > 0. |
| It could be a numerical ID, or the constant name (X509_V_ERR) which is |
| available in the OpenSSL documentation: |
| https://www.openssl.org/docs/manmaster/man3/X509_STORE_CTX_get_error.html#ERROR-CODES |
| It is recommended to use the constant name as the numerical value can change |
| in new version of OpenSSL. |
| If set to 'all', all errors are ignored. SSL handshake is not aborted if an |
| error is ignored. |
| |
| ca-sign-file <cafile> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a PEM file containing both the CA certificate and the CA private |
| key used to create and sign server's certificates. This is a mandatory |
| setting when the dynamic generation of certificates is enabled. See |
| 'generate-certificates' for details. |
| |
| ca-sign-pass <passphrase> |
| This setting is only available when support for OpenSSL was built in. It is |
| the CA private key passphrase. This setting is optional and used only when |
| the dynamic generation of certificates is enabled. See |
| 'generate-certificates' for details. |
| |
| ca-verify-file <cafile> |
| This setting designates a PEM file from which to load CA certificates used to |
| verify client's certificate. It designates CA certificates which must not be |
| included in CA names sent in server hello message. Typically, "ca-file" must |
| be defined with intermediate certificates, and "ca-verify-file" with |
| certificates ending the chain, like the root CA. |
| |
| ciphers <ciphers> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the string describing the list of cipher algorithms ("cipher suite") that are |
| negotiated during the SSL/TLS handshake up to TLSv1.2. The format of the |
| string is defined in "man 1 ciphers" from OpenSSL man pages. For background |
| information and recommendations see e.g. |
| (https://wiki.mozilla.org/Security/Server_Side_TLS) and |
| (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 |
| cipher configuration, please check the "ciphersuites" keyword. |
| |
| ciphersuites <ciphersuites> |
| This setting is only available when support for OpenSSL was built in and |
| OpenSSL 1.1.1 or later was used to build HAProxy. It sets the string describing |
| the list of cipher algorithms ("cipher suite") that are negotiated during the |
| TLSv1.3 handshake. The format of the string is defined in "man 1 ciphers" from |
| OpenSSL man pages under the "ciphersuites" section. For cipher configuration |
| for TLSv1.2 and earlier, please check the "ciphers" keyword. |
| This setting might accept TLSv1.2 ciphersuites; however, this is an |
| undocumented behavior and not recommended as it could be inconsistent or buggy. |
| The default TLSv1.3 ciphersuites of OpenSSL are: |
| "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256" |
| |
| TLSv1.3 only supports 5 ciphersuites: |
| |
| - TLS_AES_128_GCM_SHA256 |
| - TLS_AES_256_GCM_SHA384 |
| - TLS_CHACHA20_POLY1305_SHA256 |
| - TLS_AES_128_CCM_SHA256 |
| - TLS_AES_128_CCM_8_SHA256 |
| |
| Example: |
| ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256 |
| ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 |
| |
| client-sigalgs <sigalgs> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the string describing the list of signature algorithms related to client |
| authentication that are negotiated. The format of the string is defined in |
| "man 3 SSL_CTX_set1_client_sigalgs" from the OpenSSL man pages. It is not |
| recommended to use this setting if no specific use case was identified. |
| |
| crl-file <crlfile> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a PEM file from which to load certificate revocation list used |
| to verify client's certificate. You need to provide a certificate revocation |
| list for every certificate of your certificate authority chain. |
| |
| crt <cert> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a PEM file containing both the required certificates and any |
| associated private keys. This file can be built by concatenating multiple |
| PEM files into one (e.g. cat cert.pem key.pem > combined.pem). If your CA |
| requires an intermediate certificate, this can also be concatenated into this |
| file. An intermediate certificate can also be shared in a directory via the |
| "issuers-chain-path" directive. |
| |
| If the file does not contain a private key, HAProxy will try to load |
| the key at the same path suffixed by a ".key". |
| |
| If the OpenSSL used supports Diffie-Hellman, parameters present in this file |
| are loaded. |
| |
| If a directory name is used instead of a PEM file, then all files found in |
| that directory will be loaded in alphabetic order unless their name ends |
| with '.key', '.issuer', '.ocsp' or '.sctl' (reserved extensions). Files |
| starting with a dot are also ignored. This directive may be specified multiple |
| times in order to load certificates from multiple files or directories. The |
| certificates will be presented to clients who provide a valid TLS Server Name |
| Indication field matching one of their CN or alt subjects. Wildcards are |
| supported, where a wildcard character '*' is used instead of the first |
| hostname component (e.g. *.example.org matches www.example.org but not |
| www.sub.example.org). If an empty directory is used, HAProxy will not start |
| unless the "strict-sni" keyword is used. |
| |
| If no SNI is provided by the client or if the SSL library does not support |
| TLS extensions, or if the client provides an SNI hostname which does not |
| match any certificate, then the first loaded certificate will be presented. |
| This means that when loading certificates from a directory, it is highly |
| recommended to load the default one first as a file or to ensure that it will |
| always be the first one in the directory. |
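| |
| Example: a minimal sketch (paths are illustrative) loading an explicit |
| default certificate before a directory of additional ones: |
| |
| bind :443 ssl crt /etc/haproxy/certs/default.pem crt /etc/haproxy/certs/ |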
| |
| Note that the same cert may be loaded multiple times without side effects. |
| |
| Some CAs (such as GoDaddy) offer a drop down list of server types that do not |
| include HAProxy when obtaining a certificate. If this happens, be sure to |
| choose a web server that the CA believes requires an intermediate CA (for |
| GoDaddy, selecting Apache Tomcat will get the correct bundle, but many |
| others, e.g. nginx, result in a wrong bundle that will not work for some |
| clients). |
| |
| For each PEM file, HAProxy checks for the presence of a file at the same path |
| suffixed by ".ocsp". If such a file is found, support for the TLS Certificate |
| Status Request extension (also known as "OCSP stapling") is automatically |
| enabled. The content of this file is optional. If not empty, it must contain |
| a valid OCSP Response in DER format. In order to be valid an OCSP Response |
| must comply with the following rules: it has to indicate a good status, |
| it has to be a single response for the certificate of the PEM file, and it |
| has to be valid at the moment of addition. If these rules are not respected |
| the OCSP Response is ignored and a warning is emitted. In order to identify |
| which certificate an OCSP Response applies to, the issuer's certificate is |
| necessary. If the issuer's certificate is not found in the PEM file, it will |
| be loaded from a file at the same path as the PEM file suffixed by ".issuer" |
| if it exists otherwise it will fail with an error. |
| |
| For each PEM file, HAProxy also checks for the presence of a file at the same |
| path suffixed by ".sctl". If such a file is found, support for Certificate |
| Transparency (RFC6962) TLS extension is enabled. The file must contain a |
| valid Signed Certificate Timestamp List, as described in the RFC. The file is |
| parsed to check basic syntax, but no signatures are verified. |
| |
| There are cases where it is desirable to support multiple key types, e.g. RSA |
| and ECDSA in the cipher suites offered to the clients. This allows clients |
| that support EC certificates to be able to use EC ciphers, while |
| simultaneously supporting older, RSA only clients. |
| |
| To achieve this, OpenSSL 1.1.1 is required; you can configure this behavior |
| by providing one crt entry per certificate type, or by configuring a "cert |
| bundle" as was required before HAProxy 1.8. See "ssl-load-extra-files". |
| |
| crt-ignore-err <errors> |
| This setting is only available when support for OpenSSL was built in. Sets a |
| comma separated list of errorIDs to ignore during verify at depth == 0. |
| It could be a numerical ID, or the constant name (X509_V_ERR) which is |
| available in the OpenSSL documentation: |
| https://www.openssl.org/docs/manmaster/man3/X509_STORE_CTX_get_error.html#ERROR-CODES |
| It is recommended to use the constant name as the numerical value can change |
| in new versions of OpenSSL. |
| If set to 'all', all errors are ignored. The SSL handshake is not aborted |
| when an error is ignored. |
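| |
| Example (certificate and CA file names are hypothetical): |
| |
|     bind :443 ssl crt server.pem ca-file clients-ca.pem verify required \ |
|       crt-ignore-err X509_V_ERR_CERT_HAS_EXPIRED |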
| |
| crt-list <file> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a list of PEM files with an optional ssl configuration and an |
| SNI filter per certificate, with the following format for each line : |
| |
| <crtfile> [\[<sslbindconf> ...\]] [[!]<snifilter> ...] |
| |
| sslbindconf supports the following keywords from the bind line |
| (see Section 5.1. Bind options): |
| |
| - allow-0rtt |
| - alpn |
| - ca-file |
| - ca-verify-file |
| - ciphers |
| - ciphersuites |
| - client-sigalgs |
| - crl-file |
| - curves |
| - ecdhe |
| - no-alpn |
| - no-ca-names |
| - npn |
| - sigalgs |
| - ssl-min-ver |
| - ssl-max-ver |
| - verify |
| |
| It overrides the configuration set in bind line for the certificate. |
| |
| Wildcards are supported in the SNI filter. Negative filters are also supported, |
| useful in combination with a wildcard filter to exclude a particular SNI, or |
| after the first certificate to exclude a pattern from its CN or Subject Alt |
| Name (SAN). The certificates will be presented to clients who provide a valid |
| TLS Server Name Indication field matching one of the SNI filters. If no SNI |
| filter is specified, the CN and SAN are used. This directive may be specified |
| multiple times. See the "crt" option for more information. The default |
| certificate is still needed to meet OpenSSL expectations. If it is not used, |
| the 'strict-sni' option may be used. |
| |
| Multi-cert bundling (see "ssl-load-extra-files") is supported with crt-list, |
| as long as only the base name is given in the crt-list. SNI filter will do |
| the same work on all bundled certificates. |
| |
| Empty lines as well as lines beginning with a hash ('#') will be ignored. |
| |
| The first declared certificate of a bind line is used as the default |
| certificate, either from crt or crt-list option, which HAProxy should use in |
| the TLS handshake if no other certificate matches. This certificate will also |
| be used if the provided SNI matches its CN or SAN, even if a matching SNI |
| filter is found on any crt-list. The SNI filter !* can be used after the first |
| declared certificate to not include its CN and SAN in the SNI tree, so it will |
| never match except if no other certificate matches. This way the first |
| declared certificate acts as a fallback. |
| |
| When no ALPN is set, the "bind" line's default one is used. If a "bind" line |
| has no "no-alpn", "alpn" nor "npn" set, a default value will be used |
| depending on the protocol (see "alpn" above). However if the "bind" line has |
| a different default, or explicitly disables ALPN using "no-alpn", it is |
| possible to force a specific value for a certificate. |
| |
| crt-list file example: |
| cert1.pem !* |
| # comment |
| cert2.pem [alpn h2,http/1.1] |
| certW.pem *.domain.tld !secure.domain.tld |
| certS.pem [curves X25519:P-256 ciphers ECDHE-ECDSA-AES256-GCM-SHA384] secure.domain.tld |
| |
| curves <curves> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the string describing the list of elliptic curves algorithms ("curve suite") |
| that are negotiated during the SSL/TLS handshake with ECDHE. The format of the |
| string is a colon-delimited list of curve name. |
| Example: "X25519:P-256" (without quote) |
| When "curves" is set, "ecdhe" parameter is ignored. |
| |
| defer-accept |
| Is an optional keyword which is supported only on certain Linux kernels. It |
| states that a connection will only be accepted once some data arrive on it, |
| or at worst after the first retransmit. This should be used only on protocols |
| for which the client talks first (e.g. HTTP). It can slightly improve |
| performance by ensuring that most of the request is already available when |
| the connection is accepted. On the other hand, it will not be able to detect |
| connections which don't talk. It is important to note that this option is |
| broken in all kernels up to 2.6.31, as the connection is never accepted until |
| the client talks. This can cause issues with front firewalls which would see |
| an established connection while the proxy will only see it in SYN_RECV. This |
| option is only supported on TCPv4/TCPv6 sockets and ignored by other ones. |
| |
| ecdhe <named curve> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the named curve (RFC 4492) used to generate ECDH ephemeral keys. By default, |
| used named curve is prime256v1. |
| |
| expose-fd listeners |
| This option is only usable with the stats socket. It gives your stats socket |
| the capability to pass listener FDs to another HAProxy process. |
| In master-worker mode, this is not required anymore, as the listeners will |
| be passed using the internal socketpairs between the master and the workers. |
| See also "-x" in the management guide. |
| |
| force-sslv3 |
| This option enforces use of SSLv3 only on SSL connections instantiated from |
| this listener. SSLv3 is generally less expensive than the TLS counterparts |
| for high connection rates. This option is also available on global statement |
| "ssl-default-bind-options". See also "ssl-min-ver" and "ssl-max-ver". |
| |
| force-tlsv10 |
| This option enforces use of TLSv1.0 only on SSL connections instantiated from |
| this listener. This option is also available on global statement |
| "ssl-default-bind-options". See also "ssl-min-ver" and "ssl-max-ver". |
| |
| force-tlsv11 |
| This option enforces use of TLSv1.1 only on SSL connections instantiated from |
| this listener. This option is also available on global statement |
| "ssl-default-bind-options". See also "ssl-min-ver" and "ssl-max-ver". |
| |
| force-tlsv12 |
| This option enforces use of TLSv1.2 only on SSL connections instantiated from |
| this listener. This option is also available on global statement |
| "ssl-default-bind-options". See also "ssl-min-ver" and "ssl-max-ver". |
| |
| force-tlsv13 |
| This option enforces use of TLSv1.3 only on SSL connections instantiated from |
| this listener. This option is also available on global statement |
| "ssl-default-bind-options". See also "ssl-min-ver" and "ssl-max-ver". |
| |
| generate-certificates |
| This setting is only available when support for OpenSSL was built in. It |
| enables the dynamic SSL certificates generation. A CA certificate and its |
| private key are necessary (see 'ca-sign-file'). When HAProxy is configured as |
| a transparent forward proxy, SSL requests generate errors because of a common |
| name mismatch on the certificate presented to the client. With this option |
| enabled, HAProxy will try to forge a certificate using the SNI hostname |
| indicated by the client. This is done only if no certificate matches the SNI |
| hostname (see 'crt-list'). If an error occurs, the default certificate is |
| used, unless the 'strict-sni' option is set. |
| It can also be used when HAProxy is configured as a reverse proxy to ease the |
| deployment of an architecture with many backends. |
| |
| Creating an SSL certificate is an expensive operation, so an LRU cache is |
| used to store forged certificates (see 'tune.ssl.ssl-ctx-cache-size'). It |
| increases HAProxy's memory footprint to reduce latency when the same |
| certificate is used many times. |
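| |
| Illustrative bind line (file names are hypothetical); the CA used to sign |
| the forged certificates is provided with "ca-sign-file": |
| |
|     bind :8443 ssl crt default.pem ca-sign-file ca.pem generate-certificates |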
| |
| gid <gid> |
| Sets the group of the UNIX sockets to the designated system gid. It can also |
| be set by default in the global section's "unix-bind" statement. Note that |
| some platforms simply ignore this. This setting is equivalent to the "group" |
| setting except that the group ID is used instead of its name. This setting is |
| ignored by non UNIX sockets. |
| |
| group <group> |
| Sets the group of the UNIX sockets to the designated system group. It can |
| also be set by default in the global section's "unix-bind" statement. Note |
| that some platforms simply ignore this. This setting is equivalent to the |
| "gid" setting except that the group name is used instead of its gid. This |
| setting is ignored by non UNIX sockets. |
| |
| id <id> |
| Fixes the socket ID. By default, socket IDs are automatically assigned, but |
| sometimes it is more convenient to fix them to ease monitoring. This value |
| must be strictly positive and unique within the listener/frontend. This |
| option can only be used when defining only a single socket. |
| |
| interface <interface> |
| Restricts the socket to a specific interface. When specified, only packets |
| received from that particular interface are processed by the socket. This is |
| currently only supported on Linux. The interface must be a primary system |
| interface, not an aliased interface. It is also possible to bind multiple |
| frontends to the same address if they are bound to different interfaces. Note |
| that binding to a network interface requires root privileges. This parameter |
| is only compatible with TCPv4/TCPv6 sockets. When specified, return traffic |
| uses the same interface as inbound traffic, and its associated routing table, |
| even if there are explicit routes through different interfaces configured. |
| This can prove useful to address asymmetric routing issues when the same |
| client IP addresses need to be able to reach frontends hosted on different |
| interfaces. |
| |
| level <level> |
| This setting is used with the stats sockets only to restrict the nature of |
| the commands that can be issued on the socket. It is ignored by other |
| sockets. <level> can be one of : |
| - "user" is the least privileged level; only non-sensitive stats can be |
| read, and no change is allowed. It would make sense on systems where it |
| is not easy to restrict access to the socket. |
| - "operator" is the default level and fits most common uses. All data can |
| be read, and only non-sensitive changes are permitted (e.g. clear max |
| counters). |
| - "admin" should be used with care, as everything is permitted (e.g. clear |
| all counters). |
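| |
| For example, two stats sockets with different privilege levels may be |
| declared (paths are arbitrary): |
| |
|     global |
|         stats socket /var/run/haproxy-admin.sock mode 600 level admin |
|         stats socket /var/run/haproxy-user.sock mode 666 level user |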
| |
| maxconn <maxconn> |
| Limits the sockets to this number of concurrent connections. Extraneous |
| connections will remain in the system's backlog until a connection is |
| released. If unspecified, the limit will be the same as the frontend's |
| maxconn. Note that in case of port ranges or multiple addresses, the same |
| value will be applied to each socket. This setting enables different |
| limitations on expensive sockets, for instance SSL entries which may easily |
| eat all memory. |
| |
| mode <mode> |
| Sets the octal mode used to define access permissions on the UNIX socket. It |
| can also be set by default in the global section's "unix-bind" statement. |
| Note that some platforms simply ignore this. This setting is ignored by non |
| UNIX sockets. |
| |
| mss <maxseg> |
| Sets the TCP Maximum Segment Size (MSS) value to be advertised on incoming |
| connections. This can be used to force a lower MSS for certain specific |
| ports, for instance for connections passing through a VPN. Note that this |
| relies on a kernel feature which is theoretically supported under Linux but |
| was buggy in all versions prior to 2.6.28. It may or may not work on other |
| operating systems. It may also not change the advertised value but change the |
| effective size of outgoing segments. The commonly advertised value for TCPv4 |
| over Ethernet networks is 1460 = 1500(MTU) - 40(IP+TCP). If this value is |
| positive, it will be used as the advertised MSS. If it is negative, it will |
| indicate by how much to reduce the incoming connection's advertised MSS for |
| outgoing segments. This parameter is only compatible with TCP v4/v6 sockets. |
| |
| name <name> |
| Sets an optional name for these sockets, which will be reported on the stats |
| page. |
| |
| namespace <name> |
| On Linux, it is possible to specify which network namespace a socket will |
| belong to. This directive makes it possible to explicitly bind a listener to |
| a namespace different from the default one. Please refer to your operating |
| system's documentation to find more details about network namespaces. |
| |
| nice <nice> |
| Sets the 'niceness' of connections initiated from the socket. Value must be |
| in the range -1024..1024 inclusive, and defaults to zero. Positive values |
| mean that such connections are more friendly to others and easily offer |
| their place in the scheduler. Conversely, negative values mean that |
| connections want to run with a higher priority than others. The difference |
| only happens under high loads when the system is close to saturation. |
| Negative values are appropriate for low-latency or administration services, |
| and high values are generally recommended for CPU intensive tasks such as SSL |
| processing or bulk transfers which are less sensitive to latency. For example, |
| it may make sense to use a positive value for an SMTP socket and a negative |
| one for an RDP socket. |
| |
| no-alpn |
| Disables ALPN processing (technically speaking this sets the ALPN string to |
| an empty string that will not be advertised). It permits to cancel a previous |
| occurrence of an "alpn" setting and to disable application protocol |
| negotiation. It may also be used to prevent a listener from negotiating ALPN |
| with a client on an HTTPS or QUIC listener; by default, HTTPS listeners will |
| advertise "h2,http/1.1" and QUIC listeners will advertise "h3". See also |
| "alpn" bove. Note that when using "crt-list", a certificate may override the |
| "alpn" setting and re-enable its processing. |
| |
| no-ca-names |
| This setting is only available when support for OpenSSL was built in. It |
| prevents HAProxy from sending CA names in the server hello message when |
| "ca-file" is used. Use "ca-verify-file" instead of "ca-file" with |
| "no-ca-names". |
| |
| no-sslv3 |
| This setting is only available when support for OpenSSL was built in. It |
| disables support for SSLv3 on any sockets instantiated from the listener when |
| SSL is supported. Note that SSLv2 is forced disabled in the code and cannot |
| be enabled using any configuration option. This option is also available on |
| global statement "ssl-default-bind-options". Use "ssl-min-ver" and |
| "ssl-max-ver" instead. |
| |
| no-tls-tickets |
| This setting is only available when support for OpenSSL was built in. It |
| disables the stateless session resumption (RFC 5077 TLS Ticket |
| extension) and forces the use of stateful session resumption. Stateless |
| session resumption is more expensive in CPU usage. This option is also |
| available on global statement "ssl-default-bind-options". |
| The TLS ticket mechanism is only used up to TLS 1.2. |
| Forward Secrecy is compromised with TLS tickets, unless ticket keys |
| are periodically rotated (via reload or by using "tls-ticket-keys"). |
| |
| no-tlsv10 |
| This setting is only available when support for OpenSSL was built in. It |
| disables support for TLSv1.0 on any sockets instantiated from the listener |
| when SSL is supported. Note that SSLv2 is forced disabled in the code and |
| cannot be enabled using any configuration option. This option is also |
| available on global statement "ssl-default-bind-options". Use "ssl-min-ver" |
| and "ssl-max-ver" instead. |
| |
| no-tlsv11 |
| This setting is only available when support for OpenSSL was built in. It |
| disables support for TLSv1.1 on any sockets instantiated from the listener |
| when SSL is supported. Note that SSLv2 is forced disabled in the code and |
| cannot be enabled using any configuration option. This option is also |
| available on global statement "ssl-default-bind-options". Use "ssl-min-ver" |
| and "ssl-max-ver" instead. |
| |
| no-tlsv12 |
| This setting is only available when support for OpenSSL was built in. It |
| disables support for TLSv1.2 on any sockets instantiated from the listener |
| when SSL is supported. Note that SSLv2 is forced disabled in the code and |
| cannot be enabled using any configuration option. This option is also |
| available on global statement "ssl-default-bind-options". Use "ssl-min-ver" |
| and "ssl-max-ver" instead. |
| |
| no-tlsv13 |
| This setting is only available when support for OpenSSL was built in. It |
| disables support for TLSv1.3 on any sockets instantiated from the listener |
| when SSL is supported. Note that SSLv2 is forced disabled in the code and |
| cannot be enabled using any configuration option. This option is also |
| available on global statement "ssl-default-bind-options". Use "ssl-min-ver" |
| and "ssl-max-ver" instead. |
| |
| npn <protocols> |
| This enables the NPN TLS extension and advertises the specified protocol list |
| as supported on top of NPN. The protocol list consists of a comma-delimited |
| list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). |
| This requires that the SSL library is built with support for TLS extensions |
| enabled (check with haproxy -vv). Note that the NPN extension has been |
| replaced with the ALPN extension (see the "alpn" keyword), though this one is |
| only available starting with OpenSSL 1.0.2. If HTTP/2 is desired on an older |
| version of OpenSSL, NPN might still be used as most clients still support it |
| at the time of writing this. It is possible to enable both NPN and ALPN |
| though it probably doesn't make any sense outside of testing. |
| |
| ocsp-update [ off | on ] (crt-list only) |
| Enable automatic OCSP response update when set to 'on', disable it otherwise. |
| Its value defaults to 'off'. |
| Please note that for now, this option can only be used in a crt-list line, it |
| cannot be used directly on a bind line. It lies in this "Bind options" |
| section because it is still a frontend option. This limitation was set so |
| that the option applies to only one certificate at a time. |
| If a given certificate is used in multiple crt-lists with different values of |
| the 'ocsp-update' option set, an error will be raised. Here is an example |
| configuration enabling it: |
| |
| haproxy.cfg: |
| frontend fe |
| bind :443 ssl crt-list haproxy.list |
| |
| haproxy.list: |
| server_cert.pem [ocsp-update on] foo.bar |
| |
| When the option is set to 'on', we will try to get an ocsp response whenever |
| an ocsp uri is found in the frontend's certificate. The only limitation of |
| this mode is that the certificate's issuer will have to be known in order for |
| the OCSP certid to be built. |
| Each OCSP response will be updated at least once an hour, and even more |
| frequently if a given OCSP response has an expire date earlier than this one |
| hour limit. A minimum update interval of 5 minutes will still exist in order |
| to avoid updating too often responses that have a really short expire time or |
| even no 'Next Update' at all. Because of this hard limit, please note that |
| when auto update is set to 'on', any OCSP response loaded during init will |
| not be updated for at least 5 minutes, even if its expire time |
| ends before now+5m. This should not be too much of a hassle since an OCSP |
| response must be valid when it gets loaded during init (its expire time must |
| be in the future) so it is unlikely that this response expires in such a |
| short time after init. |
| On the other hand, if a certificate has an OCSP uri specified and no OCSP |
| response, setting this option to 'on' for the given certificate will ensure |
| that the OCSP response gets fetched automatically right after init. |
| The default minimum and maximum delays (5 minutes and 1 hour respectively) |
| can be configured by the "tune.ssl.ocsp-update.mindelay" and |
| "tune.ssl.ocsp-update.maxdelay" global options. |
| |
| Whenever an OCSP response is updated by the auto update task or following a |
| call to the "update ssl ocsp-response" CLI command, a dedicated log line is |
| emitted. It follows a dedicated log-format that contains the following header |
| "%ci:%cp [%tr] %ft" and is followed by specific OCSP-related information: |
| - the path of the corresponding frontend certificate |
| - a numerical update status |
| - a textual update status |
| - the number of update failures for the given response |
| - the number of update successes for the given response |
| See "show ssl ocsp-updates" CLI command for a full list of error codes and |
| error messages. This line is emitted regardless of the success or failure of |
| the concerned OCSP response update. |
| The OCSP request/response is sent and received through an http_client |
| instance that has the dontlog-normal option set and that uses the regular |
| HTTP log format in case of error (unreachable OCSP responder for instance). |
| If such an error occurs, an extra log line following the regular HTTP |
| log-format will be emitted alongside the "regular" OCSP one (which will |
| likely have "HTTP error" as its text status). |
| Here are two examples of such log lines, with a successful OCSP update log |
| line first and then an example of an HTTP error with the two different lines |
| (lines were split and the URL was shortened for readability): |
| <134>Mar 6 11:16:53 haproxy[14872]: -:- [06/Mar/2023:11:16:52.808] \ |
| <OCSP-UPDATE> /path_to_cert/foo.pem 1 "Update successful" 0 1 |
| |
| <134>Mar 6 11:18:55 haproxy[14872]: -:- [06/Mar/2023:11:18:54.207] \ |
| <OCSP-UPDATE> /path_to_cert/bar.pem 2 "HTTP error" 1 0 |
| <134>Mar 6 11:18:55 haproxy[14872]: -:- [06/Mar/2023:11:18:52.200] \ |
| <OCSP-UPDATE> -/- 2/0/-1/-1/3009 503 217 - - SC-- 0/0/0/0/3 0/0 {} \ |
| "GET http://127.0.0.1:12345/MEMwQT HTTP/1.1" |
| |
| Troubleshooting: |
| A common error that can happen with Let's Encrypt certificates is if the DNS |
| resolution provides an IPv6 address and your system does not have a valid |
| outgoing IPv6 route. In such a case, you can either create the appropriate |
| route or set the "httpclient.resolvers.prefer ipv4" option in the global |
| section. |
| In case of "OCSP response check failure" error, you might want to check that |
| the issuer certificate that you provided is valid. |
| |
| prefer-client-ciphers |
| Use the client's preference when selecting the cipher suite, by default |
| the server's preference is enforced. This option is also available on |
| global statement "ssl-default-bind-options". |
| Note that with OpenSSL >= 1.1.1 ChaCha20-Poly1305 is reprioritized anyway |
| (without setting this option), if a ChaCha20-Poly1305 cipher is at the top of |
| the client cipher list. |
| |
| proto <name> |
| Forces the multiplexer's protocol to use for the incoming connections. It |
| must be compatible with the mode of the frontend (TCP or HTTP). It must also |
| be usable on the frontend side. The list of available protocols is reported |
| in haproxy -vv. The protocol properties are reported : the mode (TCP/HTTP), |
| the side (FE/BE), the mux name and its flags. |
| |
| Some protocols are subject to the head-of-line blocking on server side |
| (flag=HOL_RISK). Finally some protocols don't support upgrades (flag=NO_UPG). |
| The HTX compatibility is also reported (flag=HTX). |
| |
| Here are the protocols that may be used as argument to a "proto" directive on |
| a bind line : |
| |
| h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG |
| h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG |
| none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG |
| |
| The idea behind this option is to bypass the selection of the best |
| multiplexer's protocol for all connections instantiated from this listening |
| socket. For instance, it is possible to force HTTP/2 over clear TCP by |
| specifying "proto h2" on the bind line. |
| |
| quic-cc-algo { cubic | newreno } |
| This is a QUIC specific setting to select the congestion control algorithm |
| for any connection attempts to the configured QUIC listeners. They are similar |
| to those used by TCP. |
| |
| Default value: cubic |
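| |
| Illustrative QUIC listener (certificate name is hypothetical) selecting the |
| "newreno" algorithm: |
| |
|     bind quic4@:443 ssl crt site.pem alpn h3 quic-cc-algo newreno |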
| |
| quic-force-retry |
| This is a QUIC specific setting which forces the use of the QUIC Retry feature |
| for all the connection attempts to the configured QUIC listeners. It consists |
| in verifying that peers are able to receive packets at the transport address |
| they used to initiate a new connection, by sending them a Retry packet which |
| contains a token. This token must be sent back to the Retry packet sender, |
| the latter being the only one able to validate the token. Note that QUIC |
| Retry will always be used even if a Retry threshold was set (see |
| "tune.quic.retry-threshold" setting). |
| |
| This setting requires the cluster secret to be set or else an error will be |
| reported on startup (see "cluster-secret"). |
| |
| See https://www.rfc-editor.org/rfc/rfc9000.html#section-8.1.2 for more |
| information about QUIC retry. |
| |
| severity-output <format> |
| This setting is used with the stats sockets only to configure severity |
| level output prepended to informational feedback messages. Severity |
| level of messages can range between 0 and 7, conforming to syslog |
| rfc5424. Valid and successful socket commands requesting data |
| (i.e. "show map", "get acl foo" etc.) will never have a severity level |
| prepended. It is ignored by other sockets. <format> can be one of : |
| - "none" (default) no severity level is prepended to feedback messages. |
| - "number" severity level is prepended as a number. |
| - "string" severity level is prepended as a string following the |
| rfc5424 convention. |
| |
| shards <number> | by-thread | by-group |
| In multi-threaded mode, on operating systems supporting multiple listeners on |
| the same IP:port, this will automatically create this number of identical |
| listeners for the same line, each bound to a fair share of the threads |
| attached to this listener. This can sometimes be useful when |
| using very large thread counts where the in-kernel locking on a single socket |
| starts to cause a significant overhead. In this case the incoming traffic is |
| distributed over multiple sockets and the contention is reduced. Note that |
| doing this can easily increase the CPU usage by making more threads work a |
| little bit. |
| |
| If the number of shards is higher than the number of available threads, it |
| will automatically be trimmed to the number of threads (i.e. one shard per |
| thread). The special "by-thread" value also creates as many shards as there |
| are threads on the "bind" line. Since the system will evenly distribute the |
| incoming traffic between all these shards, it is important that this number |
| is an integral divisor of the number of threads. Alternately, the other |
| special value "by-group" will create one shard per thread group. This can |
| be useful when dealing with many threads and not wanting to create too many |
| sockets. The load distribution will be a bit less optimal but the contention |
| (especially in the system) will still be lower than with a single socket. |
| |
| On operating systems that do not support multiple sockets bound to the same |
| address, "by-thread" and "by-group" will automatically fall back to a single |
| shard. For "by-group" this is done without any warning since it doesn't |
| change anything for a single group, and will result in sockets being |
| duplicated for each group anyway. However, for "by-thread", a diagnostic |
| warning will be emitted if this happens since the resulting number of |
| listeners will not be the expected one. |
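| |
| Example enabling one shard per thread (certificate name is hypothetical): |
| |
|     bind :443 ssl crt site.pem shards by-thread |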
| |
| sigalgs <sigalgs> |
| This setting is only available when support for OpenSSL was built in. It sets |
| the string describing the list of signature algorithms that are negotiated |
| during the TLSv1.2 and TLSv1.3 handshake. The format of the string is defined |
| in "man 3 SSL_CTX_set1_sigalgs" from the OpenSSL man pages. It is not |
| recommended to use this setting unless compatibility with a middlebox is |
| required. |
| |
| ssl |
| This setting is only available when support for OpenSSL was built in. It |
| enables SSL deciphering on connections instantiated from this listener. A |
| certificate is necessary (see "crt" above). All contents in the buffers will |
| appear in clear text, so that ACLs and HTTP processing will only have access |
| to deciphered contents. SSLv3 is disabled by default, use "ssl-min-ver SSLv3" |
| to enable it. |
| |
| ssl-max-ver [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] |
| This option enforces use of <version> or lower on SSL connections instantiated |
| from this listener. Using this setting without "ssl-min-ver" can be |
| ambiguous because the default ssl-min-ver value could change in future HAProxy |
| versions. This option is also available on global statement |
| "ssl-default-bind-options". See also "ssl-min-ver". |
| |
| ssl-min-ver [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] |
| This option enforces use of <version> or upper on SSL connections |
| instantiated from this listener. The default value is "TLSv1.2". This option |
| is also available on global statement "ssl-default-bind-options". |
| See also "ssl-max-ver". |
| |
| strict-sni |
| This setting is only available when support for OpenSSL was built in. The |
| SSL/TLS negotiation is allowed only if the client provides an SNI which |
| matches a certificate. The default certificate is not used. This option also |
| makes it possible to start without any certificate on a bind line, so an |
| empty directory can be used and filled later from the stats socket. |
| See the "crt" option for more information. See "add ssl crt-list" command in |
| the management guide. |
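| |
| Example starting with an empty certificate directory (path is hypothetical), |
| to be filled later over the CLI: |
| |
|     bind :443 ssl strict-sni crt /etc/haproxy/certs/ |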
| |
| tcp-ut <delay> |
| Sets the TCP User Timeout for all incoming connections instantiated from this |
| listening socket. This option is available on Linux since version 2.6.37. It |
| allows HAProxy to configure a timeout for sockets which contain data not |
| receiving an acknowledgment for the configured delay. This is especially |
| useful on long-lived connections experiencing long idle periods such as |
| remote terminals or database connection pools, where the client and server |
| timeouts must remain high to allow a long period of idle, but where it is |
| important to detect that the client has disappeared in order to release all |
| resources associated with its connection (and the server's session). The |
| argument is a delay expressed in milliseconds by default. This only works |
| for regular TCP connections, and is ignored for other protocols. |
| |
| tfo |
| Is an optional keyword which is supported only on Linux kernels >= 3.7. It |
| enables TCP Fast Open on the listening socket, which means that clients which |
| support this feature will be able to send a request and receive a response |
| during the 3-way handshake from the second connection onwards, thus saving one |
| round-trip after the first connection. This only makes sense with protocols |
| that use high connection rates and where each round trip matters. This can |
| possibly cause issues with many firewalls which do not accept data on SYN |
| packets, so this option should only be enabled once well tested. This option |
| is only supported on TCPv4/TCPv6 sockets and ignored by other ones. You may |
| need to build HAProxy with USE_TFO=1 if your libc doesn't define |
| TCP_FASTOPEN. |
| |
| thread [<thread-group>/]<thread-set>[,...] |
| This restricts the list of threads on which this listener is allowed to run. |
| It does not enforce any of them but eliminates those which do not match. It |
| limits the threads allowed to process incoming connections for this listener. |
| |
| There are two numbering schemes. By default, thread numbers are absolute in |
| the process, ranging from 1 to the value specified in global.nbthread. |
| It is also possible to designate a thread number using its relative number |
| inside its thread group, by specifying the thread group number first, then a |
| slash ('/') and the relative thread number(s). In this case thread numbers |
| also start at 1 and end at 32 or 64 depending on the platform. When absolute |
| thread numbers are specified, they will be automatically translated to |
| relative numbers once thread groups are known. Usually, absolute numbers are |
| preferred for simple configurations, and relative ones are preferred for |
| complex configurations where CPU arrangement matters for performance. |
| |
| After the optional thread group number, the "thread-set" specification must |
| use the following format: |
| |
| "all" | "odd" | "even" | [number][-[number]] |
| |
| As their names imply, "all" validates all threads within the set (either all |
| of the group's when a group is specified, or all of the process' threads), |
| "odd" validates all odd-numberred threads (every other thread starting at 1) |
| either for the process or the group, and "even" validates all even-numberred |
| threads (every other thread starting at 2). If instead thread number ranges |
| are used, then all threads included in the range from the first to the last |
| thread number are validated. The numbers are either relative to the group |
| or absolute depending on the presence of a thread group number. If the first |
| thread number is omitted, "1" is used, representing either the first thread |
| of the group or the first thread of the process. If the last thread number is |
| omitted, either the last thread number of the group (32 or 64) is used, or |
| the last thread number of the process (global.nbthread). |
| |
| These ranges may be repeated and delimited by a comma, so that non-contiguous |
| thread sets can be specified, and the group, if present, must be specified |
| again for each new range. Note that it is not permitted to mix group-relative |
| and absolute specifications because the whole "bind" line must use either |
| an absolute notation or a relative one, as those not set will be resolved at |
| the end of the parsing. |
| |
| It is important to know that each listener described by a "bind" line creates |
| at least one socket represented by at least one file descriptor. Since file |
| descriptors cannot span multiple thread groups, if a "bind" line specifies a |
| thread range that covers more than one group, several file descriptors will |
| automatically be created so that there is at least one per group. Technically |
| speaking they all refer to the same socket in the kernel, but they will get a |
| distinct identifier in haproxy and will even have a dedicated stats entry if |
| "option socket-stats" is used. |
| |
| The main purpose is to have multiple bind lines sharing the same IP:port but |
| not the same thread in a listener, so that the system can distribute the |
| incoming connections into multiple queues, bypassing haproxy's internal queue |
| load balancing. Currently, Linux 3.9 and above is known to support this. |
| See also the "shards" keyword above that automates duplication of "bind" |
| lines and their assignment to multiple groups of threads. |
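| |
| Illustration (certificate name is hypothetical): two "bind" lines sharing |
| the same port, each restricted to a group-relative thread range: |
| |
|     frontend fe |
|         bind :443 ssl crt site.pem thread 1/1-4 |
|         bind :443 ssl crt site.pem thread 1/5-8 |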
| |
| tls-ticket-keys <keyfile> |
| Sets the TLS ticket keys file to load the keys from. The keys need to be 48 |
| or 80 bytes long, depending on whether aes128 or aes256 is used, base64-encoded |
| with one line per key (ex. openssl rand 80 | openssl base64 -A | xargs echo). |
| The first key determines the key length used for next keys: you can't mix |
| aes128 and aes256 keys. Number of keys is specified by the TLS_TICKETS_NO |
| build option (default 3) and at least as many keys need to be present in |
| the file. The last TLS_TICKETS_NO keys will be used for decryption and the |
| penultimate one for encryption. This enables easy key rotation by just |
| appending a new key to the file and reloading the process. Keys must be |
| periodically rotated (ex. every 12h) or Perfect Forward Secrecy is |
| compromised. It is also a good idea to keep the keys off any permanent |
| storage such as hard drives (hint: use tmpfs and don't swap those files). |
| Lifetime hint can be changed using tune.ssl.timeout. |
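| |
| Example referencing a key file built as described above (path is |
| hypothetical): |
| |
|     bind :443 ssl crt site.pem tls-ticket-keys /run/haproxy/ticket.keys |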
| |
| transparent |
| Is an optional keyword which is supported only on certain Linux kernels. It |
| indicates that the addresses will be bound even if they do not belong to the |
| local machine, and that packets targeting any of these addresses will be |
| intercepted just as if the addresses were locally configured. This normally |
| requires that IP forwarding is enabled. Caution! Do not use this with the |
| default address '*', as it would redirect any traffic for the specified port. |
| This keyword is available only when HAProxy is built with USE_LINUX_TPROXY=1. |
| This parameter is only compatible with TCPv4 and TCPv6 sockets, depending on |
| kernel version. Some distribution kernels include backports of the feature, |
| so check for support with your vendor. |
| |
| uid <uid> |
| Sets the owner of the UNIX sockets to the designated system uid. It can also |
| be set by default in the global section's "unix-bind" statement. Note that |
| some platforms simply ignore this. This setting is equivalent to the "user" |
| setting except that the user numeric ID is used instead of its name. This |
| setting is ignored by non UNIX sockets. |
| |
| user <user> |
| Sets the owner of the UNIX sockets to the designated system user. It can also |
| be set by default in the global section's "unix-bind" statement. Note that |
| some platforms simply ignore this. This setting is equivalent to the "uid" |
| setting except that the user name is used instead of its uid. This setting is |
| ignored by non UNIX sockets. |
| |
| v4v6 |
| Is an optional keyword which is supported only on most recent systems |
| including Linux kernels >= 2.4.21. It is used to bind a socket to both IPv4 |
| and IPv6 when it uses the default address. Doing so is sometimes necessary |
| on systems which bind to IPv6 only by default. It has no effect on non-IPv6 |
| sockets, and is overridden by the "v6only" option. |
| |
| v6only |
| Is an optional keyword which is supported only on most recent systems |
| including Linux kernels >= 2.4.21. It is used to bind a socket to IPv6 only |
| when it uses the default address. Doing so is sometimes preferred to doing it |
| system-wide as it is per-listener. It has no effect on non-IPv6 sockets and |
| has precedence over the "v4v6" option. |
| |
| verify [none|optional|required] |
| This setting is only available when support for OpenSSL was built in. If set |
| to 'none', client certificate is not requested. This is the default. In other |
| cases, a client certificate is requested. If the client does not provide a |
| certificate after the request and if 'verify' is set to 'required', then the |
| handshake is aborted, while it would have succeeded if set to 'optional'. The |
| certificate provided by the client is always verified using CAs from |
| 'ca-file' and optional CRLs from 'crl-file'. On verify failure the handshake |
| is aborted, regardless of the 'verify' option, unless the error code exactly |
| matches one of those listed with 'ca-ignore-err' or 'crt-ignore-err'. |
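| |
| Example requesting and requiring a client certificate (file names are |
| hypothetical): |
| |
|     bind :443 ssl crt server.pem ca-file clients-ca.pem verify required |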
| |
| 5.2. Server and default-server options |
| -------------------------------------- |
| |
| The "server" and "default-server" keywords support a certain number of settings |
| which are all passed as arguments on the server line. The order in which those |
| arguments appear does not count, and they are all optional. Some of those |
| settings are single words (booleans) while others expect one or several values |
| after them. In this case, the values must immediately follow the setting name. |
| Except default-server, all those settings must be specified after the server's |
| address if they are used: |
| |
| server <name> <address>[:port] [settings ...] |
| default-server [settings ...] |
| |
| Note that all these settings are supported both by "server" and "default-server" |
| keywords, except "id" which is only supported by "server". |
| |
| The currently supported settings are the following ones. |
| |
| addr <ipv4|ipv6> |
| Using the "addr" parameter, it becomes possible to use a different IP address |
| to send health-checks or to probe the agent-check. On some servers, it may be |
| desirable to dedicate an IP address to a specific component able to perform |
| complex tests which are more suitable to health-checks than the application. |
| This parameter is ignored if the "check" parameter is not set. See also the |
| "port" parameter. |
| |
| agent-check |
| Enable an auxiliary agent check which is run independently of a regular |
| health check. An agent health check is performed by making a TCP connection |
| to the port set by the "agent-port" parameter and reading an ASCII string |
| terminated by the first '\r' or '\n' met. The string is made of a series of |
| words delimited by spaces, tabs or commas in any order, each consisting of : |
| |
| - An ASCII representation of a positive integer percentage, e.g. "75%". |
| Values in this format will set the weight proportional to the initial |
| weight of a server as configured when HAProxy starts. Note that a zero |
| weight is reported on the stats page as "DRAIN" since it has the same |
| effect on the server (it's removed from the LB farm). |
| |
| - The string "maxconn:" followed by an integer (no space between). Values |
| in this format will set the maxconn of a server. The maximum number of |
| connections advertised needs to be multiplied by the number of load |
| balancers and different backends that use this health check to get the |
| total number of connections the server might receive. Example: maxconn:30 |
| |
| - The word "ready". This will turn the server's administrative state to the |
| READY mode, thus canceling any DRAIN or MAINT state. |
| |
| - The word "drain". This will turn the server's administrative state to the |
| DRAIN mode, thus it will not accept any new connections other than those |
| that are accepted via persistence. |
| |
| - The word "maint". This will turn the server's administrative state to the |
| MAINT mode, thus it will not accept any new connections at all, and health |
| checks will be stopped. |
| |
| - The words "down", "fail", or "stopped", optionally followed by a |
| description string after a sharp ('#'). All of these mark the server's |
| operating state as DOWN, but since the word itself is reported on the stats |
| page, the difference allows an administrator to know if the situation was |
| expected or not : the service may intentionally be stopped, may appear up |
| but fail some validity tests, or may be seen as down (e.g. missing process, |
| or port not responding). |
| |
| - The word "up" sets back the server's operating state as UP if health checks |
| also report that the service is accessible. |
| |
| Parameters which are not advertised by the agent are not changed. For |
| example, an agent might be designed to monitor CPU usage and only report a |
| relative weight and never interact with the operating status. Similarly, an |
| agent could be designed as an end-user interface with 3 radio buttons |
| allowing an administrator to change only the administrative state. However, |
| it is important to consider that only the agent may revert its own actions, |
| so if a server is set to DRAIN mode or to DOWN state using the agent, the |
| agent must implement the other equivalent actions to bring the service into |
| operations again. |
| |
| Failure to connect to the agent is not considered an error as connectivity |
| is tested by the regular health check which is enabled by the "check" |
| parameter. Warning though, it is not a good idea to stop an agent after it |
| reports "down", since only an agent reporting "up" will be able to turn the |
| server up again. Note that the CLI on the Unix stats socket is also able to |
| force an agent's result in order to work around a bogus agent if needed. |
| |
| Requires the "agent-port" parameter to be set. See also the "agent-inter" |
| and "no-agent-check" parameters. |
| |
| agent-send <string> |
| If this option is specified, HAProxy will send the given string (verbatim) |
| to the agent server upon connection. You could, for example, encode |
| the backend name into this string, which would enable your agent to send |
| different responses based on the backend. Make sure to include a '\n' if |
| you want to terminate your request with a newline. |
| |
| agent-inter <delay> |
| The "agent-inter" parameter sets the interval between two agent checks |
| to <delay> milliseconds. If left unspecified, the delay defaults to 2000 ms. |
| |
| Just as with every other time-based parameter, it may be entered in any |
| other explicit unit among { us, ms, s, m, h, d }. The "agent-inter" |
| parameter also serves as a timeout for agent checks if "timeout check" is |
| not set. In order to reduce "resonance" effects when multiple servers are |
| hosted on the same hardware, the agent and health checks of all servers |
| are started with a small time offset between them. It is also possible to |
| add some random noise in the agent and health checks interval using the |
| global "spread-checks" keyword. This makes sense for instance when a lot |
| of backends use the same servers. |
| |
| See also the "agent-check" and "agent-port" parameters. |
| |
| agent-addr <addr> |
| The "agent-addr" parameter sets address for agent check. |
| |
| You can offload agent-check to another target, so you can make single place |
| managing status and weights of servers defined in HAProxy in case you can't |
| make self-aware and self-managing services. You can specify both IP or |
| hostname, it will be resolved. |
| |
| agent-port <port> |
| The "agent-port" parameter sets the TCP port used for agent checks. |
| |
| See also the "agent-check" and "agent-inter" parameters. |
| |
| allow-0rtt |
| Allow sending early data to the server when using TLS 1.3. |
| Note that early data will be sent only if the client used early data, or |
| if the backend uses "retry-on" with the "0rtt-rejected" keyword. |
| |
| alpn <protocols> |
| This enables the TLS ALPN extension and advertises the specified protocol |
| list as supported on top of ALPN. The protocol list consists of a comma- |
| delimited list of protocol names, for instance: "http/1.1,http/1.0" (without |
| quotes). This requires that the SSL library is built with support for TLS |
| extensions enabled (check with haproxy -vv). The ALPN extension replaces the |
| initial NPN extension. ALPN is required to connect to HTTP/2 servers. |
| Versions of OpenSSL prior to 1.0.2 didn't support ALPN and only supported the |
| now obsolete NPN extension. |
| If both HTTP/2 and HTTP/1.1 are expected to be supported, both versions can |
| be advertised, in order of preference, like below : |
| |
| server 127.0.0.1:443 ssl crt pub.pem alpn h2,http/1.1 |
| |
| See also "ws" to use an alternative ALPN for websocket streams. |
| |
| backup |
| When "backup" is present on a server line, the server is only used in load |
| balancing when all other non-backup servers are unavailable. Requests coming |
| with a persistence cookie referencing the server will always be served |
| though. By default, only the first operational backup server is used, unless |
| the "allbackups" option is set in the backend. See also the "no-backup" and |
| "allbackups" options. |
| |
| ca-file <cafile> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a PEM file from which to load CA certificates used to verify |
| server's certificate. It is possible to load a directory containing multiple |
| CAs; in this case HAProxy will try to load every ".pem", ".crt", ".cer", and |
| ".crl" available in the directory; files starting with a dot are ignored. |
| |
| In order to use the trusted CAs of your system, the "@system-ca" parameter |
| could be used in place of the cafile. The location of this directory could be |
| overwritten by setting the SSL_CERT_DIR environment variable. |
| |
| check |
| This option enables health checks on a server: |
| - when not set, no health checking is performed, and the server is always |
| considered available. |
| - when set and no other check method is configured, the server is considered |
| available when a connection can be established at the highest configured |
| transport layer. This means TCP by default, or SSL/TLS when "ssl" or |
| "check-ssl" are set, both possibly combined with connection prefixes such |
| as a PROXY protocol header when "send-proxy" or "check-send-proxy" are |
| set. This behavior is slightly different for dynamic servers, read the |
| following paragraphs for more details. |
| - when set and an application-level health check is defined, the |
| application-level exchanges are performed on top of the configured |
| transport layer and the server is considered available if all of the |
| exchanges succeed. |
| |
| By default, health checks are performed on the same address and port as |
| configured on the server, using the same encapsulation parameters (SSL/TLS, |
| proxy-protocol header, etc... ). It is possible to change the destination |
| address using "addr" and the port using "port". When done, it is assumed the |
| server isn't checked on the service port, and configured encapsulation |
| parameters are not reused. One must explicitly set "check-send-proxy" to send |
| connection headers, "check-ssl" to use SSL/TLS. |
| |
| Note that the implicit configuration of ssl and PROXY protocol is not |
| performed for dynamic servers. In this case, it is required to explicitly |
| use "check-ssl" and "check-send-proxy" when wanted, even if the check port is |
| not overridden. |
| |
| When "sni" or "alpn" are set on the server line, their value is not used for |
| health checks and one must use "check-sni" or "check-alpn". |
| |
| The default source address for health check traffic is the same as the one |
| defined in the backend. It can be changed with the "source" keyword. |
| |
| The interval between checks can be set using the "inter" keyword, and the |
| "rise" and "fall" keywords can be used to define how many successful or |
| failed health checks are required to flag a server available or not |
| available. |
| |
| Optional application-level health checks can be configured with "option |
| httpchk", "option mysql-check" "option smtpchk", "option pgsql-check", |
| "option ldap-check", or "option redis-check". |
| |
| Example: |
| # simple tcp check |
| backend foo |
| server s1 192.168.0.1:80 check |
| # this does a tcp connect + tls handshake |
| backend foo |
| server s1 192.168.0.1:443 ssl check |
| # simple tcp check is enough for check success |
| backend foo |
| option tcp-check |
| tcp-check connect |
| server s1 192.168.0.1:443 ssl check |
| |
| check-send-proxy |
| This option forces emission of a PROXY protocol line with outgoing health |
| checks, regardless of whether the server uses send-proxy or not for the |
| normal traffic. By default, the PROXY protocol is enabled for health checks |
| if it is already enabled for normal traffic and if no "port" nor "addr" |
| directive is present. However, if such a directive is present, the |
| "check-send-proxy" option needs to be used to force the use of the |
| protocol. See also the "send-proxy" option for more information. |
| |
| check-alpn <protocols> |
| Defines which protocols to advertise with ALPN. The protocol list consists of |
| a comma-delimited list of protocol names, for instance: "http/1.1,http/1.0" |
| (without quotes). If it is not set, the server ALPN is used. |
| |
| check-proto <name> |
| Forces the multiplexer's protocol to use for the server's health-check |
| connections. It must be compatible with the health-check type (TCP or |
| HTTP). It must also be usable on the backend side. The list of available |
| protocols is reported in haproxy -vv. The protocol properties are |
| reported : the mode (TCP/HTTP), the side (FE/BE), the mux name and its flags. |
| |
| Some protocols are subject to the head-of-line blocking on server side |
| (flag=HOL_RISK). Finally some protocols don't support upgrades (flag=NO_UPG). |
| The HTX compatibility is also reported (flag=HTX). |
| |
| Here are the protocols that may be used as argument to a "check-proto" |
| directive on a server line: |
| |
| h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG |
| fcgi : mode=HTTP side=BE mux=FCGI flags=HTX|HOL_RISK|NO_UPG |
| h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG |
| none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG |
| |
| The idea behind this option is to bypass the selection of the best |
| multiplexer's protocol for health-check connections established to this |
| server. If not defined, the server's "proto" setting will be used, if set. |
| |
| check-sni <sni> |
| This option allows you to specify the SNI to be used when doing health checks |
| over SSL. It is only possible to use a string to set <sni>. If you want to |
| set an SNI for proxied traffic, see "sni". |
| |
| check-ssl |
| This option forces encryption of all health checks over SSL, regardless of |
| whether the server uses SSL or not for the normal traffic. This is generally |
| used when an explicit "port" or "addr" directive is specified and SSL health |
| checks are not inherited. It is important to understand that this option |
| inserts an SSL transport layer below the checks, so that a simple TCP connect |
| check becomes an SSL connect, which replaces the old ssl-hello-chk. The most |
| common use is to send HTTPS checks by combining "httpchk" with SSL checks. |
| All SSL settings are common to health checks and traffic (e.g. ciphers). |
| See the "ssl" option for more information and "no-check-ssl" to disable |
| this option. |
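| |
| Example (address and check port are arbitrary): the server receives plain |
| HTTP traffic, but its health check endpoint on another port requires TLS: |
| |
|     backend app |
|       server srv1 10.0.0.1:80 check port 8443 check-ssl verify none |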
| |
| check-via-socks4 |
| This option enables outgoing health checks using the upstream socks4 proxy. |
| By default, the health checks won't go through the socks tunnel even if it |
| was enabled for normal traffic. |
| |
| ciphers <ciphers> |
| This setting is only available when support for OpenSSL was built in. This |
| option sets the string describing the list of cipher algorithms that is |
| negotiated during the SSL/TLS handshake with the server. The format of the |
| string is defined in "man 1 ciphers" from OpenSSL man pages. For background |
| information and recommendations see e.g. |
| (https://wiki.mozilla.org/Security/Server_Side_TLS) and |
| (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 |
| cipher configuration, please check the "ciphersuites" keyword. |
| |
| ciphersuites <ciphersuites> |
| This setting is only available when support for OpenSSL was built in and |
| OpenSSL 1.1.1 or later was used to build HAProxy. This option sets the string |
| describing the list of cipher algorithms that is negotiated during the TLS |
| 1.3 handshake with the server. The format of the string is defined in |
| "man 1 ciphers" from OpenSSL man pages under the "ciphersuites" section. |
| For cipher configuration for TLSv1.2 and earlier, please check the "ciphers" |
| keyword. |
| |
| cookie <value> |
| The "cookie" parameter sets the cookie value assigned to the server to |
| <value>. This value will be checked in incoming requests, and the first |
| operational server possessing the same value will be selected. In return, in |
| cookie insertion or rewrite modes, this value will be assigned to the cookie |
| sent to the client. There is nothing wrong in having several servers sharing |
| the same cookie value, and it is in fact somewhat common between normal and |
| backup servers. See also the "cookie" keyword in backend section. |
| |
| crl-file <crlfile> |
| This setting is only available when support for OpenSSL was built in. It |
| designates a PEM file from which to load the certificate revocation list
| used to verify the server's certificate.
| |
| crt <cert> |
| This setting is only available when support for OpenSSL was built in. |
| It designates a PEM file from which to load both a certificate and the |
| associated private key. This file can be built by concatenating both PEM |
| files into one. This certificate will be sent if the server sends a client
| certificate request.
| |
| If the file does not contain a private key, HAProxy will try to load the key |
| at the same path suffixed by a ".key" (provided the "ssl-load-extra-files" |
| option is set accordingly). |
| |
| disabled |
| The "disabled" keyword starts the server in the "disabled" state. That means |
| that it is marked down in maintenance mode, and no connection other than the |
| ones allowed by persist mode will reach it. It is very well suited to set up
| new servers, because normal traffic will never reach them, while it is still |
| possible to test the service by making use of the force-persist mechanism. |
| See also "enabled" setting. |
| |
| enabled |
| This option may be used as 'server' setting to reset any 'disabled' |
| setting which would have been inherited from 'default-server' directive as |
| default value. |
| It may also be used as 'default-server' setting to reset any previous |
| 'default-server' 'disabled' setting. |
| |
| error-limit <count> |
| If health observing is enabled, the "error-limit" parameter specifies the
| number of consecutive errors that triggers the event selected by the
| "on-error" option. By default it is set to 10 consecutive errors.
|
| See also the "check", "observe" and "on-error" keywords.
| |
| fall <count> |
| The "fall" parameter states that a server will be considered as dead after |
| <count> consecutive unsuccessful health checks. This value defaults to 3 if |
| unspecified. See also the "check", "inter" and "rise" parameters. |
| |
| force-sslv3 |
| This option enforces use of SSLv3 only when SSL is used to communicate with |
| the server. SSLv3 is generally less expensive than the TLS counterparts for |
| high connection rates. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-min-ver" and ssl-max-ver". |
| |
| force-tlsv10 |
| This option enforces use of TLSv1.0 only when SSL is used to communicate with |
| the server. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-min-ver" and ssl-max-ver". |
| |
| force-tlsv11 |
| This option enforces use of TLSv1.1 only when SSL is used to communicate with |
| the server. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-min-ver" and ssl-max-ver". |
| |
| force-tlsv12 |
| This option enforces use of TLSv1.2 only when SSL is used to communicate with |
| the server. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-min-ver" and ssl-max-ver". |
| |
| force-tlsv13 |
| This option enforces use of TLSv1.3 only when SSL is used to communicate with |
| the server. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-min-ver" and ssl-max-ver". |
| |
| id <value> |
| Set a persistent ID for the server. This ID must be positive and unique for |
| the proxy. An unused ID will automatically be assigned if unset. The first |
| assigned value will be 1. This ID is currently only returned in statistics. |
| |
| init-addr {last | libc | none | <ip>},[...]* |
| Indicate in what order the server's address should be resolved upon startup |
| if it uses an FQDN. Attempts are made to resolve the address by applying in |
| turn each of the methods mentioned in the comma-delimited list. The first |
| method which succeeds is used. If the end of the list is reached without |
| finding a working method, an error is thrown. Method "last" suggests to pick |
| the address which appears in the state file (see "server-state-file"). Method |
| "libc" uses the libc's internal resolver (gethostbyname() or getaddrinfo() |
| depending on the operating system and build options). Method "none" |
| specifically indicates that the server should start without any valid IP |
| address in a down state. It can be useful to ignore some DNS issues upon |
| startup, waiting for the situation to get fixed later. Finally, an IP address |
| (IPv4 or IPv6) may be provided. It can be the currently known address of the |
| server (e.g. filled by a configuration generator), or the address of a dummy |
| server used to catch old sessions and present them with a decent error |
| message for example. When the "first" load balancing algorithm is used, this |
| IP address could point to a fake server used to trigger the creation of new |
| instances on the fly. This option defaults to "last,libc" indicating that the |
| previous address found in the state file (if any) is used first, otherwise |
| the libc's resolver is used. This ensures continued compatibility with the |
| historic behavior. |
| |
| Example: |
| defaults |
| # never fail on address resolution |
| default-server init-addr last,libc,none |
| |
| inter <delay> |
| fastinter <delay> |
| downinter <delay> |
| The "inter" parameter sets the interval between two consecutive health checks |
| to <delay> milliseconds. If left unspecified, the delay defaults to 2000 ms. |
| It is also possible to use "fastinter" and "downinter" to optimize delays |
| between checks depending on the server state : |
| |
| Server state | Interval used |
| ----------------------------------------+---------------------------------- |
| UP 100% (non-transitional) | "inter" |
| ----------------------------------------+---------------------------------- |
| Transitionally UP (going down "fall"), | "fastinter" if set, |
| Transitionally DOWN (going up "rise"), | "inter" otherwise. |
| or yet unchecked. | |
| ----------------------------------------+---------------------------------- |
| DOWN 100% (non-transitional) | "downinter" if set, |
| | "inter" otherwise. |
| ----------------------------------------+---------------------------------- |
| |
| Just as with every other time-based parameter, they can be entered in any |
| other explicit unit among { us, ms, s, m, h, d }. The "inter" parameter also |
| serves as a timeout for health checks sent to servers if "timeout check" is |
| not set. In order to reduce "resonance" effects when multiple servers are |
| hosted on the same hardware, the agent and health checks of all servers |
| are started with a small time offset between them. It is also possible to |
| add some random noise in the agent and health checks interval using the |
| global "spread-checks" keyword. This makes sense for instance when a lot |
| of backends use the same servers. |
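|
| For example, the following illustrative line (address is a placeholder)
| checks the server every 5 seconds, tightening to 1 second during state
| transitions and relaxing to 10 seconds while the server is down:
|
| server srv1 10.0.0.1:80 check inter 5s fastinter 1s downinter 10s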
| |
| log-proto <logproto> |
| The "log-proto" specifies the protocol used to forward event messages to |
| a server configured in a ring section. Possible values are "legacy" |
| and "octet-count" corresponding respectively to "Non-transparent-framing" |
| and "Octet counting" in rfc6587. "legacy" is the default. |
| |
| maxconn <maxconn> |
| The "maxconn" parameter specifies the maximal number of concurrent |
| connections that will be sent to this server. If the number of incoming |
| concurrent connections goes higher than this value, they will be queued, |
| waiting for a slot to be released. This parameter is very important as it can |
| save fragile servers from going down under extreme loads. If a "minconn" |
| parameter is specified, the limit becomes dynamic. The default value is "0" |
| which means unlimited. See also the "minconn" and "maxqueue" parameters, and |
| the backend's "fullconn" keyword. |
| |
| In HTTP mode this parameter limits the number of concurrent requests instead |
| of the number of connections. Multiple requests might be multiplexed over a |
| single TCP connection to the server. As an example if you specify a maxconn |
| of 50 you might see between 1 and 50 actual server connections, but no more |
| than 50 concurrent requests. |
| |
| maxqueue <maxqueue> |
| The "maxqueue" parameter specifies the maximal number of connections which |
| will wait in the queue for this server. If this limit is reached, next |
| requests will be redispatched to other servers instead of indefinitely |
| waiting to be served. This will break persistence but may allow people to |
| quickly re-log in when the server they try to connect to is dying. Some load |
| balancing algorithms such as leastconn take this into account and accept to |
| add requests into a server's queue up to this value if it is explicitly set |
| to a value greater than zero, which often allows to better smooth the load |
| when dealing with single-digit maxconn values. The default value is "0" which |
| means the queue is unlimited. See also the "maxconn" and "minconn" parameters |
| and "balance leastconn". |
| |
| max-reuse <count> |
| The "max-reuse" argument indicates the HTTP connection processors that they |
| should not reuse a server connection more than this number of times to send |
| new requests. Permitted values are -1 (the default), which disables this |
| limit, or any positive value. Value zero will effectively disable keep-alive. |
| This is only used to work around certain server bugs which cause them to leak |
| resources over time. The argument is not necessarily respected by the lower |
| layers as there might be technical limitations making it impossible to |
| enforce. At least HTTP/2 connections to servers will respect it. |
| |
| minconn <minconn> |
| When the "minconn" parameter is set, the maxconn limit becomes a dynamic |
| limit following the backend's load. The server will always accept at least |
| <minconn> connections, never more than <maxconn>, and the limit will be on |
| the ramp between both values when the backend has less than <fullconn> |
| concurrent connections. This makes it possible to limit the load on the |
| server during normal loads, but push it further for important loads without |
| overloading the server during exceptional loads. See also the "maxconn" |
| and "maxqueue" parameters, as well as the "fullconn" backend keyword. |
| |
| namespace <name> |
| On Linux, it is possible to specify which network namespace a socket will |
| belong to. This directive makes it possible to explicitly bind a server to |
| a namespace different from the default one. Please refer to your operating |
| system's documentation to find more details about network namespaces. |
| |
| no-agent-check |
| This option may be used as "server" setting to reset any "agent-check" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "agent-check" setting. |
| |
| no-backup |
| This option may be used as "server" setting to reset any "backup" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "backup" setting. |
| |
| no-check |
| This option may be used as "server" setting to reset any "check" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "check" setting. |
| |
| no-check-ssl |
| This option may be used as "server" setting to reset any "check-ssl" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "check-ssl" setting. |
| |
| no-send-proxy |
| This option may be used as "server" setting to reset any "send-proxy" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "send-proxy" setting. |
| |
| no-send-proxy-v2 |
| This option may be used as "server" setting to reset any "send-proxy-v2" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "send-proxy-v2" setting. |
| |
| no-send-proxy-v2-ssl |
| This option may be used as "server" setting to reset any "send-proxy-v2-ssl" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "send-proxy-v2-ssl" setting. |
| |
| no-send-proxy-v2-ssl-cn |
| This option may be used as "server" setting to reset any "send-proxy-v2-ssl-cn" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "send-proxy-v2-ssl-cn" setting. |
| |
| no-ssl |
| This option may be used as "server" setting to reset any "ssl" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "ssl" setting. |
| |
| Note that using the "default-server ssl" setting together with "no-ssl" on a
| server will still initialize the SSL connection, so that it can later be
| enabled through the runtime API: see the "set server" commands in the
| management doc.
| |
| no-ssl-reuse |
| This option disables SSL session reuse when SSL is used to communicate with |
| the server. It will force the server to perform a full handshake for every |
| new connection. It's probably only useful for benchmarking, troubleshooting, |
| and for paranoid users. |
| |
| no-sslv3 |
| This option disables support for SSLv3 when SSL is used to communicate with |
| the server. Note that SSLv2 is disabled in the code and cannot be enabled |
| using any configuration option. Use "ssl-min-ver" and "ssl-max-ver" instead. |
| |
| Supported in default-server: No |
| |
| no-tls-tickets |
| This setting is only available when support for OpenSSL was built in. It |
| disables the stateless session resumption (RFC 5077 TLS Ticket |
| extension) and forces the use of stateful session resumption. Stateless
| session resumption is more expensive in CPU usage for servers. This option |
| is also available on global statement "ssl-default-server-options". |
| The TLS ticket mechanism is only used up to TLS 1.2. |
| Forward Secrecy is compromised with TLS tickets, unless ticket keys |
| are periodically rotated (via reload or by using "tls-ticket-keys"). |
| See also "tls-tickets". |
| |
| no-tlsv10 |
| This option disables support for TLSv1.0 when SSL is used to communicate with |
| the server. Note that SSLv2 is disabled in the code and cannot be enabled |
| using any configuration option. TLSv1 is more expensive than SSLv3 so it |
| often makes sense to disable it when communicating with local servers. This |
| option is also available on global statement "ssl-default-server-options". |
| Use "ssl-min-ver" and "ssl-max-ver" instead. |
| |
| Supported in default-server: No |
| |
| no-tlsv11 |
| This option disables support for TLSv1.1 when SSL is used to communicate with |
| the server. Note that SSLv2 is disabled in the code and cannot be enabled |
| using any configuration option. TLSv1 is more expensive than SSLv3 so it |
| often makes sense to disable it when communicating with local servers. This |
| option is also available on global statement "ssl-default-server-options". |
| Use "ssl-min-ver" and "ssl-max-ver" instead. |
| |
| Supported in default-server: No |
| |
| no-tlsv12 |
| This option disables support for TLSv1.2 when SSL is used to communicate with |
| the server. Note that SSLv2 is disabled in the code and cannot be enabled |
| using any configuration option. TLSv1 is more expensive than SSLv3 so it |
| often makes sense to disable it when communicating with local servers. This |
| option is also available on global statement "ssl-default-server-options". |
| Use "ssl-min-ver" and "ssl-max-ver" instead. |
| |
| Supported in default-server: No |
| |
| no-tlsv13 |
| This option disables support for TLSv1.3 when SSL is used to communicate with |
| the server. Note that SSLv2 is disabled in the code and cannot be enabled |
| using any configuration option. TLSv1 is more expensive than SSLv3 so it |
| often makes sense to disable it when communicating with local servers. This |
| option is also available on global statement "ssl-default-server-options". |
| Use "ssl-min-ver" and "ssl-max-ver" instead. |
| |
| Supported in default-server: No |
| |
| no-verifyhost |
| This option may be used as "server" setting to reset any "verifyhost" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "verifyhost" setting. |
| |
| no-tfo |
| This option may be used as "server" setting to reset any "tfo" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "tfo" setting. |
| |
| non-stick |
| Never add connections allocated to this server to a stick-table.
| This may be used in conjunction with backup to ensure that |
| stick-table persistence is disabled for backup servers. |
| |
| npn <protocols> |
| This enables the NPN TLS extension and advertises the specified protocol list |
| as supported on top of NPN. The protocol list consists in a comma-delimited |
| list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). |
| This requires that the SSL library is built with support for TLS extensions |
| enabled (check with haproxy -vv). Note that the NPN extension has been |
| replaced with the ALPN extension (see the "alpn" keyword), though this one is |
| only available starting with OpenSSL 1.0.2. |
| |
| observe <mode> |
| This option enables health adjusting based on observing communication with |
| the server. By default this functionality is disabled and enabling it also
| requires health checks to be enabled. There are two supported modes:
| "layer4" and "layer7". In layer4 mode, only successful/unsuccessful tcp
| connections are significant. In layer7, which is only allowed for http
| proxies, responses received from the server are verified, such as valid or
| wrong http codes, unparsable headers, timeouts, etc. Valid status codes
| include 100 to 499, 501 and 505.
| |
| See also the "check", "on-error" and "error-limit". |
| |
| on-error <mode> |
| Select what should happen when enough consecutive errors are detected. |
| Currently, four modes are available: |
| - fastinter: force fastinter |
| - fail-check: simulate a failed check, also forces fastinter (default) |
| - sudden-death: simulate a pre-fatal failed health check, one more failed |
| check will mark a server down, forces fastinter |
| - mark-down: mark the server immediately down and force fastinter |
| |
| See also the "check", "observe" and "error-limit". |
| |
| on-marked-down <action> |
| Modify what occurs when a server is marked down. |
| Currently one action is available: |
| - shutdown-sessions: Shutdown peer sessions. When this setting is enabled, |
| all connections to the server are immediately terminated when the server |
| goes down. It might be used if the health check detects more complex cases |
| than a simple connection status, and long timeouts would cause the service |
| to remain unresponsive for too long a time. For instance, a health check |
| might detect that a database is stuck and that there's no chance to reuse |
| existing connections anymore. Connections killed this way are logged with |
| a 'D' termination code (for "Down"). |
| |
| Actions are disabled by default.
| |
| on-marked-up <action> |
| Modify what occurs when a server is marked up. |
| Currently one action is available: |
| - shutdown-backup-sessions: Shutdown sessions on all backup servers. This is |
| done only if the server is not in backup state and if it is not disabled |
| (it must have an effective weight > 0). This can be used sometimes to force |
| an active server to take all the traffic back after recovery when dealing |
| with long sessions (e.g. LDAP, SQL, ...). Doing this can cause more trouble |
| than it tries to solve (e.g. incomplete transactions), so use this feature |
| with extreme care. Sessions killed because a server comes up are logged |
| with an 'U' termination code (for "Up"). |
| |
| Actions are disabled by default.
| |
| pool-low-conn <max> |
| Set a low threshold on the number of idling connections for a server, below |
| which a thread will not try to steal a connection from another thread. This |
| can be useful to improve CPU usage patterns in scenarios involving many very |
| fast servers, in order to ensure all threads will keep a few idle connections |
| all the time instead of letting them accumulate over one thread and migrating |
| them from thread to thread. Typical values of twice the number of threads |
| seem to show very good performance already with sub-millisecond response |
| times. The default is zero, indicating that any idle connection can be used |
| at any time. It is the recommended setting for normal use. This only applies |
| to connections that can be shared according to the same principles as those |
| applying to "http-reuse". In case connection sharing between threads would |
| be disabled via "tune.idle-pool.shared", it can become very important to use |
| this setting to make sure each thread always has a few connections, or the |
| connection reuse rate will decrease as thread count increases. |
| |
| pool-max-conn <max> |
| Set the maximum number of idling connections for a server. -1 means unlimited |
| connections, 0 means no idle connections. The default is -1. When idle |
| connections are enabled, orphaned idle connections which do not belong to any |
| client session anymore are moved to a dedicated pool so that they remain |
| usable by future clients. This only applies to connections that can be shared |
| according to the same principles as those applying to "http-reuse". |
| |
| pool-purge-delay <delay> |
| Sets the delay to start purging idle connections. Each <delay> interval, half |
| of the idle connections are closed. 0 means we don't keep any idle connection. |
| The default is 5s. |
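|
| As an illustration (address and values are placeholders), the following
| line keeps at most 100 idle connections to the server and purges half of
| them every 10 seconds:
|
| server srv1 10.0.0.1:80 pool-max-conn 100 pool-purge-delay 10s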
| |
| port <port> |
| Using the "port" parameter, it becomes possible to use a different port to |
| send health-checks or to probe the agent-check. On some servers, it may be |
| desirable to dedicate a port to a specific component able to perform complex |
| tests which are more suitable to health-checks than the application. It is |
| common to run a simple script in inetd for instance. This parameter is |
| ignored if the "check" parameter is not set. See also the "addr" parameter. |
| |
| proto <name> |
| Forces the multiplexer's protocol to use for the outgoing connections to this |
| server. It must be compatible with the mode of the backend (TCP or HTTP). It |
| must also be usable on the backend side. The list of available protocols is |
| reported in "haproxy -vv". The protocol properties are also reported: the
| mode (TCP/HTTP), the side (FE/BE), the mux name and its flags.
| |
| Some protocols are subject to the head-of-line blocking on server side |
| (flag=HOL_RISK). Finally some protocols don't support upgrades (flag=NO_UPG). |
| The HTX compatibility is also reported (flag=HTX). |
| |
| Here are the protocols that may be used as argument to a "proto" directive on |
| a server line : |
| |
| h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG |
| fcgi : mode=HTTP side=BE mux=FCGI flags=HTX|HOL_RISK|NO_UPG |
| h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG |
| none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG |
| |
| The idea behind this option is to bypass the automatic selection of the best
| multiplexer's protocol for all connections established to this server.
| |
| See also "ws" to use an alternative protocol for websocket streams. |
| |
| redir <prefix> |
| The "redir" parameter enables the redirection mode for all GET and HEAD |
| requests addressing this server. This means that instead of having HAProxy |
| forward the request to the server, it will send an "HTTP 302" response with |
| the "Location" header composed of this prefix immediately followed by the |
| requested URI beginning at the leading '/' of the path component. That means |
| that no trailing slash should be used after <prefix>. All invalid requests |
| will be rejected, and all non-GET or HEAD requests will be normally served by |
| the server. Note that since the response is completely forged, no header |
| mangling nor cookie insertion is possible in the response. However, cookies in |
| requests are still analyzed, making this solution completely usable to direct |
| users to a remote location in case of local disaster. Main use consists in |
| increasing bandwidth for static servers by having the clients directly |
| connect to them. Note: never use a relative location here, it would cause a |
| loop between the client and HAProxy! |
| |
| Example : server srv1 192.168.1.1:80 redir http://image1.mydomain.com check |
| |
| rise <count> |
| The "rise" parameter states that a server will be considered as operational |
| after <count> consecutive successful health checks. This value defaults to 2 |
| if unspecified. See also the "check", "inter" and "fall" parameters. |
| |
| resolve-opts <option>,<option>,... |
| Comma separated list of options to apply to DNS resolution linked to this |
| server. |
| |
| Available options: |
| |
| * allow-dup-ip |
| By default, HAProxy prevents IP address duplication in a backend when DNS |
| resolution at runtime is in operation. |
| That said, for some cases, it makes sense that two servers (in the same |
| backend, being resolved by the same FQDN) have the same IP address. |
| For such case, simply enable this option. |
| This is the opposite of prevent-dup-ip. |
| |
| * ignore-weight |
| Ignore any weight that is set within an SRV record. This is useful when |
| you would like to control the weights using an alternate method, such as |
| using an "agent-check" or through the runtime api. |
| |
| * prevent-dup-ip |
| Ensure HAProxy's default behavior is enforced on a server: prevent re-using |
| an IP address already set to a server in the same backend and sharing the |
| same fqdn. |
| This is the opposite of allow-dup-ip. |
| |
| Example: |
| backend b_myapp |
| default-server init-addr none resolvers dns |
| server s1 myapp.example.com:80 check resolve-opts allow-dup-ip |
| server s2 myapp.example.com:81 check resolve-opts allow-dup-ip |
| |
| With the option allow-dup-ip set: |
| * if the nameserver returns a single IP address, then both servers will use |
| it |
| * If the nameserver returns 2 IP addresses, then each server will pick up a |
| different address |
| |
| Default value: not set |
| |
| resolve-prefer <family> |
| When DNS resolution is enabled for a server and multiple IP addresses from |
| different families are returned, HAProxy will prefer using an IP address |
| from the family mentioned in the "resolve-prefer" parameter. |
| Available families: "ipv4" and "ipv6" |
| |
| Default value: ipv6 |
| |
| Example: |
| |
| server s1 app1.domain.com:80 resolvers mydns resolve-prefer ipv6 |
| |
| resolve-net <network>[,<network[,...]] |
| This option prioritizes the choice of an IP address matching a network. This
| is useful with clouds to prefer a local IP. In some cases, a cloud high
| availability service can be announced with many IP addresses on many
| different datacenters. The latency between datacenters is not negligible, so
| this option makes it possible to prefer a local datacenter. If no address
| matches the configured network, another address is selected.
| |
| Example: |
| |
| server s1 app1.domain.com:80 resolvers mydns resolve-net 10.0.0.0/8 |
| |
| resolvers <id> |
| Points to an existing "resolvers" section to resolve the current server's
| hostname.
| |
| Example: |
| |
| server s1 app1.domain.com:80 check resolvers mydns |
| |
| See also section 5.3.
| |
| send-proxy |
| The "send-proxy" parameter enforces use of the PROXY protocol over any |
| connection established to this server. The PROXY protocol informs the other |
| end about the layer 3/4 addresses of the incoming connection, so that it can |
| know the client's address or the public address it accessed to, whatever the |
| upper layer protocol. For connections accepted by an "accept-proxy" or |
| "accept-netscaler-cip" listener, the advertised address will be used. Only |
| TCPv4 and TCPv6 address families are supported. Other families, such as
| Unix sockets, will report an UNKNOWN family. Servers using this option can
| fully be chained to another instance of HAProxy listening with an |
| "accept-proxy" setting. This setting must not be used if the server isn't |
| aware of the protocol. When health checks are sent to the server, the PROXY |
| protocol is automatically used when this option is set, unless there is an |
| explicit "port" or "addr" directive, in which case an explicit |
| "check-send-proxy" directive would also be needed to use the PROXY protocol. |
| See also the "no-send-proxy" option of this section and "accept-proxy" and |
| "accept-netscaler-cip" option of the "bind" keyword. |
| |
| send-proxy-v2 |
| The "send-proxy-v2" parameter enforces use of the PROXY protocol version 2 |
| over any connection established to this server. The PROXY protocol informs |
| the other end about the layer 3/4 addresses of the incoming connection, so |
| that it can know the client's address or the public address it accessed to, |
| whatever the upper layer protocol. It also sends the ALPN information if an
| ALPN protocol has been negotiated. This setting must not be used if the
| server isn't aware of this version of the protocol. See also the
| "no-send-proxy-v2" option of this section and the "send-proxy" option of the
| "bind" keyword.
| |
| proxy-v2-options <option>[,<option>]* |
| The "proxy-v2-options" parameter add options to send in PROXY protocol |
| version 2 when "send-proxy-v2" is used. Options available are: |
| |
| - ssl : See also "send-proxy-v2-ssl". |
| - cert-cn : See also "send-proxy-v2-ssl-cn". |
| - ssl-cipher: Name of the used cipher. |
| - cert-sig : Signature algorithm of the used certificate. |
| - cert-key : Key algorithm of the used certificate |
| - authority : Host name value passed by the client (only SNI from a TLS |
| connection is supported). |
| - crc32c : Checksum of the PROXYv2 header. |
| - unique-id : Send a unique ID generated using the frontend's |
| "unique-id-format" within the PROXYv2 header. |
| This unique-id is primarily meant for "mode tcp". It can |
| lead to unexpected results in "mode http", because the |
| generated unique ID is also used for the first HTTP request |
| within a Keep-Alive connection. |
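|
| As an illustration (address is a placeholder), the following line adds the
| SSL information and the certificate Common Name to the PROXYv2 header:
|
| server srv1 10.0.0.1:80 send-proxy-v2 proxy-v2-options ssl,cert-cn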
| |
| send-proxy-v2-ssl |
| The "send-proxy-v2-ssl" parameter enforces use of the PROXY protocol version |
| 2 over any connection established to this server. The PROXY protocol informs |
| the other end about the layer 3/4 addresses of the incoming connection, so |
| that it can know the client's address or the public address it accessed to, |
| whatever the upper layer protocol. In addition, the SSL information extension |
| of the PROXY protocol is added to the PROXY protocol header. This setting |
| must not be used if the server isn't aware of this version of the protocol. |
| See also the "no-send-proxy-v2-ssl" option of this section and the |
| "send-proxy-v2" option of the "bind" keyword. |
| |
| send-proxy-v2-ssl-cn |
| The "send-proxy-v2-ssl" parameter enforces use of the PROXY protocol version |
| 2 over any connection established to this server. The PROXY protocol informs |
| the other end about the layer 3/4 addresses of the incoming connection, so |
| that it can know the client's address or the public address it accessed to, |
| whatever the upper layer protocol. In addition, the SSL information extension |
| of the PROXY protocol, along along with the Common Name from the subject of |
| the client certificate (if any), is added to the PROXY protocol header. This |
| setting must not be used if the server isn't aware of this version of the |
| protocol. See also the "no-send-proxy-v2-ssl-cn" option of this section and |
| the "send-proxy-v2" option of the "bind" keyword. |
| |
| shard <shard> |
| This parameter is used only in the context of stick-table synchronisation
| with the peers protocol. The "shard" parameter identifies the peers which
| will receive all the stick-table updates for keys with this shard as
| distribution hash. The accepted values are 0 up to the "shards" parameter
| value specified in the "peers" section. The value 0 is the default, meaning
| that the peer will receive all the key updates. Values greater than "shards"
| will be ignored. This is also the case for any value provided to the local
| peer.
| |
| Example : |
| |
| peers mypeers |
| shards 3 |
| peer A 127.0.0.1:40001 # local peer without shard value (0 internally) |
| peer B 127.0.0.1:40002 shard 1 |
| peer C 127.0.0.1:40003 shard 2 |
| peer D 127.0.0.1:40004 shard 3 |
| |
| |
| slowstart <start_time_in_ms> |
| The "slowstart" parameter for a server accepts a value in milliseconds which |
| indicates after how long a server which has just come back up will run at |
| full speed. Just as with every other time-based parameter, it can be entered |
| in any other explicit unit among { us, ms, s, m, h, d }. The speed grows |
| linearly from 0 to 100% during this time. The limitation applies to two |
| parameters : |
| |
| - maxconn: the number of connections accepted by the server will grow from 1 |
| to 100% of the usual dynamic limit defined by (minconn,maxconn,fullconn). |
| |
| - weight: when the backend uses a dynamic weighted algorithm, the weight |
| grows linearly from 1 to 100%. In this case, the weight is updated at every |
| health-check. For this reason, it is important that the "inter" parameter |
| is smaller than the "slowstart", in order to maximize the number of steps. |
| |
| The slowstart never applies when HAProxy starts, otherwise it would cause |
| trouble to running servers. It only applies when a server has been previously |
| seen as failed. |
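|
| For example (address is a placeholder), the server below progressively
| ramps up over 30 seconds after each recovery:
|
| server srv1 10.0.0.1:80 check inter 2s slowstart 30s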
| |
| sni <expression> |
| The "sni" parameter evaluates the sample fetch expression, converts it to a |
| string and uses the result as the host name sent in the SNI TLS extension to |
| the server. A typical use case is to send the SNI received from the client in |
| a bridged TCP/SSL scenario, using the "ssl_fc_sni" sample fetch for the |
| expression. THIS MUST NOT BE USED FOR HTTPS, where req.hdr(host) should be
| used instead, since SNI in HTTPS must always match the Host field and
| clients are allowed to use different host names over the same connection. If
| "verify required" is set (which is the recommended setting), the resulting |
| name will also be matched against the server certificate's names. See the |
| "verify" directive for more details. If you want to set a SNI for health |
| checks, see the "check-sni" directive for more details. |
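|
| For instance, in a bridged TCP/SSL setup, the SNI received from the client
| may be forwarded to the server (illustrative; in practice this should be
| combined with "verify required" and a "ca-file"):
|
| server srv1 10.0.0.1:443 ssl sni ssl_fc_sni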
| |
| source <addr>[:<pl>[-<ph>]] [usesrc { <addr2>[:<port2>] | client | clientip } ] |
| source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr_ip(<hdr>[,<occ>]) } ] |
| source <addr>[:<pl>[-<ph>]] [interface <name>] ... |
| The "source" parameter sets the source address which will be used when |
| connecting to the server. It follows the exact same parameters and principle |
| as the backend "source" keyword, except that it only applies to the server |
| referencing it. Please consult the "source" keyword for details. |
| |
| Additionally, the "source" statement on a server line allows one to specify a |
| source port range by indicating the lower and higher bounds delimited by a |
| dash ('-'). Some operating systems might require a valid IP address when a |
| source port range is specified. It is permitted to have the same IP/range for |
| several servers. Doing so makes it possible to bypass the maximum of 64k |
| total concurrent connections. The limit will then reach 64k connections per |
| server. |
| |
| Since Linux 4.2/libc 2.23 IP_BIND_ADDRESS_NO_PORT is set for connections |
| specifying the source address without port(s). |
| |
| ssl |
| This option enables SSL ciphering on outgoing connections to the server. It |
| is critical to verify server certificates using "verify" when using SSL to |
| connect to servers, otherwise the communication is prone to trivial
| man-in-the-middle attacks rendering SSL useless. When this option is used,
| health checks are automatically sent in SSL too unless there is a "port" or
| an "addr" directive indicating the check should be sent to a different
| location. See "no-ssl" to disable the "ssl" option and "check-ssl" to force
| SSL health checks.
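|
| As an illustration (address and CA path are placeholders), an encrypted and
| verified connection to the server may look like:
|
| server srv1 10.0.0.1:443 ssl verify required ca-file /etc/ssl/ca.pem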
| |
| ssl-max-ver [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] |
| This option enforces use of <version> or lower when SSL is used to communicate |
| with the server. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-min-ver". |
| |
| ssl-min-ver [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] |
| This option enforces use of <version> or upper when SSL is used to communicate |
| with the server. This option is also available on global statement |
| "ssl-default-server-options". See also "ssl-max-ver". |
| |
| ssl-reuse |
| This option may be used as "server" setting to reset any "no-ssl-reuse" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "no-ssl-reuse" setting. |
| |
| stick |
| This option may be used as "server" setting to reset any "non-stick" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "non-stick" setting. |
| |
| socks4 <addr>:<port> |
| This option enables upstream socks4 tunnel for outgoing connections to the |
| server. Using this option won't force the health check to go via socks4 by |
| default. You will have to use the keyword "check-via-socks4" to enable it. |
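|
| For example (addresses are placeholders), traffic and health checks below
| both go through a local socks4 proxy:
|
| server srv1 10.0.0.1:80 check socks4 127.0.0.1:1080 check-via-socks4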
| |
| tcp-ut <delay> |
| Sets the TCP User Timeout for all outgoing connections to this server. This |
| option is available on Linux since version 2.6.37. It allows HAProxy to |
| configure a timeout for sockets which contain data not receiving an |
| acknowledgment for the configured delay. This is especially useful on |
| long-lived connections experiencing long idle periods such as remote |
| terminals or database connection pools, where the client and server timeouts |
| must remain high to allow a long period of idle, but where it is important to |
| detect that the server has disappeared in order to release all resources |
| associated with its connection (and the client's session). One typical use |
| case is also to force dead server connections to die when health checks are |
| too slow or during a soft reload since health checks are then disabled. The |
| argument is a delay expressed in milliseconds by default. This only works for |
| regular TCP connections, and is ignored for other protocols. |
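|
| For example (address is a placeholder), the following line aborts a
| connection whose data remain unacknowledged for 20 seconds:
|
| server srv1 10.0.0.1:5432 tcp-ut 20s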
| |
| tfo |
| This option enables using TCP fast open when connecting to servers, on |
| systems that support it (currently only the Linux kernel >= 4.11). |
| See the "tfo" bind option for more information about TCP fast open. |
| Please note that when using tfo, you should also use the "conn-failure", |
| "empty-response" and "response-timeout" keywords for "retry-on", or HAProxy |
| won't be able to retry the connection on failure. See also "no-tfo". |
| |
| track [<backend>/]<server> |
| This option enables the ability to set the current state of the server by
| tracking another one. It is possible to track a server which itself tracks
| another server, provided that at the end of the chain, a server has health
| checks enabled. If <backend> is omitted the current one is used. If
| disable-on-404 is used, it has to be enabled on both proxies.
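|
| As an illustration (backend and server names are placeholders), the second
| server below inherits its state from the checked server in another backend:
|
| backend static_pool
| server srv1 10.0.0.1:80 check
|
| backend dynamic_pool
| server srv1 10.0.0.1:8080 track static_pool/srv1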
| |
| tls-tickets |
| This option may be used as "server" setting to reset any "no-tls-tickets" |
| setting which would have been inherited from "default-server" directive as |
| default value. |
| The TLS ticket mechanism is only used up to TLS 1.2. |
| Forward Secrecy is compromised with TLS tickets, unless ticket keys |
| are periodically rotated (via reload or by using "tls-ticket-keys"). |
| It may also be used as "default-server" setting to reset any previous |
| "default-server" "no-tls-tickets" setting. |
| |
| verify [none|required] |
| This setting is only available when support for OpenSSL was built in. If set |
| to 'none', the server certificate is not verified. Otherwise, the
| certificate provided by the server is verified using CAs from 'ca-file' and |
| optional CRLs from 'crl-file' after having checked that the names provided in |
| the certificate's subject and subjectAlternateNames attributes match either |
| the name passed using the "sni" directive, or if not provided, the static |
| host name passed using the "verifyhost" directive. When no name is found, the |
| certificate's names are ignored. For this reason, without SNI it's important |
| to use "verifyhost". On verification failure the handshake is aborted. It is |
| critically important to verify server certificates when using SSL to connect |
| to servers, otherwise the communication is prone to trivial man-in-the-middle |
| attacks rendering SSL totally useless. Unless "ssl_server_verify" appears in |
| the global section, "verify" is set to "required" by default. |
| |
| verifyhost <hostname> |
| This setting is only available when support for OpenSSL was built in, and |
| only takes effect if 'verify required' is also specified. This directive sets |
| a default static hostname to check the server's certificate against when no |
| SNI was used to connect to the server. If SNI is not used, this is the only |
| way to enable hostname verification. This static hostname, when set, will |
| also be used for health checks (which cannot provide an SNI value). If none |
| of the hostnames in the certificate match the specified hostname, the |
| handshake is aborted. The hostnames in the server-provided certificate may |
| include wildcards. See also "verify", "sni" and "no-verifyhost" options. |
| |
| weight <weight> |
| The "weight" parameter is used to adjust the server's weight relative to |
| other servers. All servers will receive a load proportional to their weight |
| relative to the sum of all weights, so the higher the weight, the higher the |
| load. The default weight is 1, and the maximal value is 256. A value of 0 |
| means the server will not participate in load-balancing but will still accept |
| persistent connections. If this parameter is used to distribute the load |
| according to server's capacity, it is recommended to start with values which |
| can both grow and shrink, for instance between 10 and 100 to leave enough |
| room above and below for later adjustments. |
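|
| For example (names and addresses are placeholders), "big" below receives
| roughly four times more load than "small":
|
| server big 10.0.0.1:80 weight 100
| server small 10.0.0.2:80 weight 25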
| |
| ws { auto | h1 | h2 } |
| This option allows configuring the protocol used when relaying websocket
| streams. This is most notably useful when using an HTTP/2 backend without
| support for H2 websockets as defined in RFC 8441.
| |
| The default mode is "auto". This will reuse the same protocol as the main |
| one. The only difference is when using ALPN. In this case, it can try to |
| downgrade the ALPN to "http/1.1" only for websocket streams if the configured |
| server ALPN contains it. |
| |
| The value "h1" is used to force HTTP/1.1 for websockets streams, through ALPN |
| if SSL ALPN is activated for the server. Similarly, "h2" can be used to |
| force HTTP/2.0 websockets. Use this value with care : the server must support |
| RFC8441 or an error will be reported by haproxy when relaying websockets. |
| |
| Note that NPN is not taken into account as its usage has been deprecated in |
| favor of the ALPN extension. |
| |
| See also "alpn" and "proto". |
| |
| |
| 5.3. Server IP address resolution using DNS |
| ------------------------------------------- |
| |
| HAProxy allows using a host name on the server line to retrieve its IP
| address using name servers. By default, HAProxy resolves the name when
| parsing the configuration file, at startup, and caches the result for the
| process's life. This is not sufficient in some cases, such as in Amazon
| where a server's IP can change after a reboot, or where an ELB Virtual IP
| can change based on the current workload.
| This chapter describes how HAProxy can be configured to process a server's
| name resolution at run time.
| Whether run time server name resolution has been enabled or not, HAProxy
| will carry on doing the first resolution when parsing the configuration.
| |
| |
| 5.3.1. Global overview |
| ---------------------- |
| |
| As we've seen in introduction, name resolution in HAProxy occurs at two |
| different steps of the process life: |
| |
| 1. when starting up, HAProxy parses the server line definition and matches a |
| host name. It uses libc functions to get the host name resolved. This |
| resolution relies on the /etc/resolv.conf file.
| |
| 2. at run time, HAProxy periodically performs name resolution for servers
| requiring DNS resolution.
| |
| A few other events can trigger a name resolution at run time: |
| - when a server's health check ends up in a connection timeout: this may be |
| because the server has a new IP address. So we need to trigger a name |
| resolution to know this new IP. |
| |
| When using resolvers, the server name can either be a hostname, or a SRV label. |
| HAProxy considers anything that starts with an underscore as a SRV label. If a |
| SRV label is specified, then the corresponding SRV records will be retrieved |
| from the DNS server, and the provided hostnames will be used. The SRV label |
| will be checked periodically, and if any servers are added or removed,
| HAProxy will automatically add or remove them as well.
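|
| For example (names are placeholders and a "mydns" resolvers section is
| assumed), SRV records are typically used together with the
| "server-template" directive:
|
| backend b_app
| server-template srv 1-5 _http._tcp.example.com resolvers mydns check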
| |
| A few important things to notice:
| - all the name servers are queried at the same time. HAProxy will process
| the first valid response.
| |
| - a resolution is considered as invalid (NX, timeout, refused) when all the
| name servers return an error.
| |
| |
| 5.3.2. The resolvers section |
| ---------------------------- |
| |
| This section is dedicated to host information related to name resolution in |
| HAProxy. There can be as many resolvers sections as needed. Each section can
| contain many name servers. |
| |
| At startup, HAProxy tries to generate a resolvers section named "default", if |
| no section was named this way in the configuration. This section is used by |
| default by the httpclient and uses the parse-resolv-conf keyword. If HAProxy |
| fails to automatically generate this section, no error or warning is emitted.
| |
| When multiple name servers are configured in a resolvers section, then HAProxy |
| uses the first valid response. In case of invalid responses, only the last
| one is treated. The purpose is to give a slow server the chance to deliver a
| valid answer after a fast but faulty or outdated server.
| |
| When each server returns a different error type, then only the last error is |
| used by HAProxy. The following processing is applied on this error: |
| |
| 1. HAProxy retries the same DNS query with a new query type. The A queries
| are switched to AAAA or the opposite. SRV queries are not concerned here.
| Timeout errors are also excluded.
| |
| 2. When the fallback on the query type was done (or not applicable), HAProxy |
| retries the original DNS query, with the preferred query type. |
| |
| 3. HAProxy retries previous steps <resolve_retries> times. If no valid |
| response is received after that, it stops the DNS resolution and reports |
| the error. |
| |
| For example, with 2 name servers configured in a resolvers section, the |
| following scenarios are possible: |
| |
| - First response is valid and is applied directly, second response is |
| ignored |
| |
| - First response is invalid and second one is valid, then second response is |
| applied |
| |
| - First response is a NX domain and second one a truncated response, then |
| HAProxy retries the query with a new type |
| |
| - First response is a NX domain and second one is a timeout, then HAProxy |
| retries the query with a new type |
| |
| - Query timed out for both name servers, then HAProxy retries it with the |
| same query type |
| |
| As a DNS server may not answer all the IPs in one DNS request, HAProxy keeps
| a cache of previous answers. An answer will be considered obsolete after
| <hold obsolete> seconds without the IP being returned.
| |
| |
| resolvers <resolvers id> |
| Creates a new name server list labeled <resolvers id> |
| |
| A resolvers section accepts the following parameters:
| |
| accepted_payload_size <nb> |
| Defines the maximum payload size accepted by HAProxy and announced to all the |
| name servers configured in this resolvers section. |
| <nb> is in bytes. If not set, HAProxy announces 512. (minimal value defined |
| by RFC 6891) |
| |
| Note: the maximum allowed value is 65535. Recommended value for UDP is |
| 4096 and it is not recommended to exceed 8192 except if you are sure |
| that your system and network can handle this (over 65507 makes no sense |
| since it is the maximum UDP payload size). If you are using only TCP
| nameservers to handle huge DNS responses, you should put this value |
| to the max: 65535. |
| |
| nameserver <name> <address>[:port] [param*] |
| Used to configure a nameserver. The <name> of the nameserver should be
| unique. By default the <address> is considered of type datagram. This means
| that if an IPv4 or IPv6 address is configured without special address
| prefixes (paragraph 11.) the UDP protocol will be used. If a stream protocol
| address prefix is used, the nameserver will be considered as a stream server
| (TCP for instance) and the "server" parameters found in paragraph 5.2 which
| are relevant for DNS resolving will be considered. Note: currently, in TCP
| mode, 4 queries are pipelined on the same connection. A batch of idle
| connections is removed every 5 seconds. "maxconn" can be configured to limit
| the amount of those concurrent connections and TLS should also be usable if
| the server supports it.
| |
| parse-resolv-conf |
| Adds all nameservers found in /etc/resolv.conf to this resolvers nameservers |
| list. Ordered as if each nameserver in /etc/resolv.conf was individually |
| placed in the resolvers section in place of this directive. |
| |
| hold <status> <period> |
| Upon receiving the DNS response <status>, determines whether a server's state |
| should change from UP to DOWN. To make that determination, it checks whether |
| any valid status has been received during the past <period> in order to |
| counteract the just received invalid status. |
| |
| <status> : last name resolution status. |
| nx After receiving an NXDOMAIN status, check for any valid |
| status during the concluding period. |
| |
| refused After receiving a REFUSED status, check for any valid |
| status during the concluding period. |
| |
| timeout After the "timeout retry" has struck, check for any |
| valid status during the concluding period. |
| |
| other After receiving any other invalid status, check for any |
| valid status during the concluding period. |
| |
| valid Applies only to "http-request do-resolve" and |
| "tcp-request content do-resolve" actions. It defines the |
| period for which the server will maintain a valid response |
| before triggering another resolution. It does not affect |
| dynamic resolution of servers. |
| |
| obsolete Defines how long to wait before removing obsolete DNS |
| records after an updated answer record is received. It |
| applies to SRV records. |
| |
| <period> : Amount of time into the past during which a valid response must |
| have been received. It follows the HAProxy time format and is in |
| milliseconds by default. |
| |
| For a server that relies on dynamic DNS resolution to determine its IP |
| address, receiving an invalid DNS response, such as NXDOMAIN, will lead to |
| changing the server's state from UP to DOWN. The hold directives define how |
| far into the past to look for a valid response. If a valid response has been |
| received within <period>, the just received invalid status will be ignored. |
| |
| Unless a valid response has been received during the concluding period, the
| server will be marked as DOWN. For example, if "hold nx 30s" is set and the |
| last received DNS response was NXDOMAIN, the server will be marked DOWN |
| unless a valid response has been received during the last 30 seconds. |
| |
| A server in the DOWN state will be marked UP immediately upon receiving a |
| valid status from the DNS server. |
| |
| A separate behavior exists for "hold valid" and "hold obsolete". |
| |
| resolve_retries <nb> |
| Defines the number <nb> of queries to send to resolve a server name before |
| giving up. |
| Default value: 3 |
| |
| A retry occurs on name server timeout or when the full sequence of DNS query |
| type failover is over and we need to start up from the default ANY query |
| type. |
| |
| timeout <event> <time> |
| Defines timeouts related to name resolution |
| <event> : the event on which the <time> timeout period applies to. |
| events available are: |
| - resolve : default time to trigger name resolutions when no |
| other time applied. |
| Default value: 1s |
| - retry : time between two DNS queries, when no valid response |
| has been received.
| Default value: 1s |
| <time> : time related to the event. It follows the HAProxy time format. |
| <time> is expressed in milliseconds. |
| |
| Example: |
| |
| resolvers mydns |
| nameserver dns1 10.0.0.1:53 |
| nameserver dns2 10.0.0.2:53 |
| nameserver dns3 tcp@10.0.0.3:53 |
| parse-resolv-conf |
| resolve_retries 3 |
| timeout resolve 1s |
| timeout retry 1s |
| hold other 30s |
| hold refused 30s |
| hold nx 30s |
| hold timeout 30s |
| hold valid 10s |
| hold obsolete 30s |
| |
| |
| 6. Cache |
| --------- |
| |
| HAProxy provides a cache, which was designed to cache small objects
| (favicon, css...). This is a minimalist low-maintenance cache which runs in |
| RAM. |
| |
| The cache is based on a memory area shared between all threads, and split in 1kB |
| blocks. |
| |
| If an object is not used anymore, it can be deleted to store a new object |
| independently of its expiration date. The oldest objects are deleted first |
| when we try to allocate a new one. |
| |
| The cache uses a hash of the host header and the URI as the key. |
| |
| It's possible to view the status of a cache using the Unix socket command
| "show cache". Consult section 9.3 "Unix Socket commands" of the Management
| Guide for more details.
| |
| When an object is delivered from the cache, the server name in the log is |
| replaced by "<CACHE>". |
| |
| |
| 6.1. Limitation |
| ---------------- |
| |
| The cache won't store and won't deliver objects in these cases: |
| |
| - If the response is not a 200 |
| - If the response contains a Vary header and either the process-vary option is |
| disabled, or a currently unmanaged header is specified in the Vary value (only |
| accept-encoding and referer are managed for now) |
| - If the Content-Length + the headers size is greater than "max-object-size" |
| - If the response is not cacheable |
| - If the response does not have an explicit expiration time (s-maxage or |
| max-age Cache-Control directives or Expires header) or a validator (ETag |
| or Last-Modified headers) |
| - If the process-vary option is enabled and there are already |
| max-secondary-entries entries with the same primary key as the current |
| response |
| - If the process-vary option is enabled and the response has an unknown |
| encoding (not mentioned in |
| https://www.iana.org/assignments/http-parameters/http-parameters.xhtml) |
| while varying on the accept-encoding client header |
| |
| - If the request is not a GET |
| - If the HTTP version of the request is smaller than 1.1 |
| - If the request contains an Authorization header |
| |
| |
| 6.2. Setup |
| ----------- |
| |
| To set up a cache, you must define a cache section and use it in a proxy with |
| the corresponding http-request and http-response actions. |
| |
| |
| 6.2.1. Cache section |
| --------------------- |
| |
| cache <name> |
| Declare a cache section and allocate a shared cache memory named <name>. The |
| size of the cache is mandatory. |
| |
| total-max-size <megabytes> |
| Define the size in RAM of the cache in megabytes. This size is split in |
| blocks of 1kB which are used by the cache entries. Its maximum value is 4095. |
| |
| max-object-size <bytes> |
| Define the maximum size of the objects to be cached. Must not be greater |
| than half of "total-max-size". If not set, it defaults to a 256th of the |
| cache size. Objects larger than "max-object-size" will not be cached. |
| |
| max-age <seconds> |
| Define the maximum expiration duration. The expiration is set as the lowest |
| value between the s-maxage or max-age (in this order) directive in the |
| Cache-Control response header and this value. The default value is 60 |
| seconds, which means that you can't cache an object more than 60 seconds by |
| default. |
| |
| process-vary <on/off> |
| Enable or disable the processing of the Vary header. When disabled, a response |
| containing such a header will never be cached. When enabled, we need to calculate |
| a preliminary hash for a subset of request headers on all the incoming requests |
| (which might come with a CPU cost) which will be used to build a secondary key |
| for a given request (see RFC 7234#4.1). The default value is off (disabled). |
| |
| max-secondary-entries <number> |
| Define the maximum number of simultaneous secondary entries with the same primary |
| key in the cache. This needs the vary support to be enabled. Its default |
| value is 10 and it must be a strictly positive integer. |
| |
| |
| 6.2.2. Proxy section |
| --------------------- |
| |
| http-request cache-use <name> [ { if | unless } <condition> ] |
| Try to deliver a cached object from the cache <name>. This directive is also |
| mandatory to store objects in the cache, as it calculates the cache hash. If |
| you want to use a condition for both storage and delivery, it is a good idea |
| to put it after this one. |
| |
| http-response cache-store <name> [ { if | unless } <condition> ] |
| Store an http-response within the cache. The storage of the response headers |
| is done at this step, which means you can use other http-response actions |
| to modify headers before or after the storage of the response. This action |
| is responsible for the setup of the cache storage filter. |
| |
| |
| Example: |
| |
| backend bck1 |
| mode http |
| |
| http-request cache-use foobar |
| http-response cache-store foobar |
| server srv1 127.0.0.1:80 |
| |
| cache foobar |
| total-max-size 4 |
| max-age 240 |
| |
| |
| 7. Using ACLs and fetching samples |
| ---------------------------------- |
| |
| HAProxy is capable of extracting data from request or response streams, from |
| client or server information, from tables, environmental information etc... |
| The action of extracting such data is called fetching a sample. Once retrieved, |
| these samples may be used for various purposes such as a key to a stick-table, |
| but the most common usage consists in matching them against predefined |
| constant data called patterns. |
| |
| |
| 7.1. ACL basics |
| --------------- |
| |
| The use of Access Control Lists (ACL) provides a flexible solution to perform |
| content switching and generally to take decisions based on content extracted |
| from the request, the response or any environmental status. The principle is |
| simple : |
| |
| - extract a data sample from a stream, table or the environment |
| - optionally apply some format conversion to the extracted sample |
| - apply one or multiple pattern matching methods on this sample |
| - perform actions only when a pattern matches the sample |
| |
| The actions generally consist in blocking a request, selecting a backend, or |
| adding a header. |
| |
| In order to define a test, the "acl" keyword is used. The syntax is : |
| |
| acl <aclname> <criterion> [flags] [operator] [<value>] ... |
| |
| This creates a new ACL <aclname> or completes an existing one with new tests. |
| Those tests apply to the portion of request/response specified in <criterion> |
| and may be adjusted with optional flags [flags]. Some criteria also support |
| an operator which may be specified before the set of values. Optionally some |
| conversion operators may be applied to the sample, and they will be specified |
| as a comma-delimited list of keywords just after the first keyword. The values |
| are of the type supported by the criterion, and are separated by spaces. |
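| |
| For example (the ACL names, header and paths below are purely illustrative), |
| a criterion, a converter, flags and an operator could be combined as follows : |
| |
| acl short_body req.hdr_val(content-length) le 1024 |
| acl static_path path,lower -m beg /static/ /images/ |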
| |
| ACL names must be formed from upper and lower case letters, digits, '-' (dash), |
| '_' (underscore) , '.' (dot) and ':' (colon). ACL names are case-sensitive, |
| which means that "my_acl" and "My_Acl" are two different ACLs. |
| |
| There is no enforced limit to the number of ACLs. The unused ones do not affect |
| performance, they just consume a small amount of memory. |
| |
| The criterion generally is the name of a sample fetch method, or one of its ACL |
| specific declinations. The default test method is implied by the output type of |
| this sample fetch method. The ACL declinations can describe alternate matching |
| methods of a same sample fetch method. The sample fetch methods are the only |
| ones supporting a conversion. |
| |
| Sample fetch methods return data which can be of the following types : |
| - boolean |
| - integer (signed or unsigned) |
| - IPv4 or IPv6 address |
| - string |
| - data block |
| |
| Converters transform any of these data into any of these. For example, some |
| converters might convert a string to a lower-case string while other ones |
| would turn a string to an IPv4 address, or apply a netmask to an IP address. |
| The resulting sample is of the type of the last converter applied to the list, |
| which defaults to the type of the sample fetch method. |
| |
| Each sample or converter returns data of a specific type, specified with its |
| keyword in this documentation. When an ACL is declared using a standard sample |
| fetch method, certain types automatically imply a default matching method, |
| as summarized in the table below : |
| |
| +---------------------+-----------------+ |
| | Sample or converter | Default | |
| | output type | matching method | |
| +---------------------+-----------------+ |
| | boolean | bool | |
| +---------------------+-----------------+ |
| | integer | int | |
| +---------------------+-----------------+ |
| | ip | ip | |
| +---------------------+-----------------+ |
| | string | str | |
| +---------------------+-----------------+ |
| | binary | none, use "-m" | |
| +---------------------+-----------------+ |
| |
| Note that in order to match binary samples, it is mandatory to specify a |
| matching method, see below. |
| |
| The ACL engine can match these types against patterns of the following types : |
| - boolean |
| - integer or integer range |
| - IP address / network |
| - string (exact, substring, suffix, prefix, subdir, domain) |
| - regular expression |
| - hex block |
| |
| The following ACL flags are currently supported : |
| |
| -i : ignore case during matching of all subsequent patterns. |
| -f : load patterns from a file. |
| -m : use a specific pattern matching method |
| -n : forbid the DNS resolutions |
| -M : load the file pointed by -f like a map file. |
| -u : force the unique id of the ACL |
| -- : force end of flags. Useful when a string looks like one of the flags. |
| |
| The "-f" flag is followed by the name of a file from which all lines will be |
| read as individual values. It is even possible to pass multiple "-f" arguments |
| if the patterns are to be loaded from multiple files. Empty lines as well as |
| lines beginning with a sharp ('#') will be ignored. All leading spaces and tabs |
| will be stripped. If it is absolutely necessary to insert a valid pattern |
| beginning with a sharp, just prefix it with a space so that it is not taken for |
| a comment. Depending on the data type and match method, HAProxy may load the |
| lines into a binary tree, allowing very fast lookups. This is true for IPv4 and |
| exact string matching. In this case, duplicates will automatically be removed. |
| |
| The "-M" flag allows an ACL to use a map file. If this flag is set, the file is |
| parsed as two column file. The first column contains the patterns used by the |
| ACL, and the second column contain the samples. The sample can be used later by |
| a map. This can be useful in some rare cases where an ACL would just be used to |
| check for the existence of a pattern in a map before a mapping is applied. |
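| |
| As a hypothetical sketch (the file name, host names and backends below are |
| invented), such a file could be shared between an ACL and a map : |
| |
| # hosts.map contains two columns, e.g. "www.example.com be_www" |
| acl host_known req.hdr(host),lower -M -f hosts.map |
| use_backend %[req.hdr(host),lower,map(hosts.map,be_default)] if host_known |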
| |
| The "-u" flag forces the unique id of the ACL. This unique id is used with the |
| socket interface to identify the ACL and dynamically change its values. Note |
| that a file is always identified by its name even if an id is set. |
| |
| Also, note that the "-i" flag applies to subsequent entries and not to entries |
| loaded from files preceding it. For instance : |
| |
| acl valid-ua hdr(user-agent) -f exact-ua.lst -i -f generic-ua.lst test |
| |
| In this example, each line of "exact-ua.lst" will be exactly matched against |
| the "user-agent" header of the request. Then each line of "generic-ua.lst" |
| will be case-insensitively matched. Then the word "test" will be |
| case-insensitively matched as well. |
| |
| The "-m" flag is used to select a specific pattern matching method on the input |
| sample. All ACL-specific criteria imply a pattern matching method and generally |
| do not need this flag. However, this flag is useful with generic sample fetch |
| methods to describe how they're going to be matched against the patterns. This |
| is required for sample fetches which return a data type for which there is no |
| obvious matching method (e.g. string or binary). When "-m" is specified and |
| followed by a pattern matching method name, this method is used instead of the |
| default one for the criterion. This makes it possible to match contents in ways |
| that were not initially planned, or with sample fetch methods which return a |
| string. The matching method also affects the way the patterns are parsed. |
| |
| The "-n" flag forbids the dns resolutions. It is used with the load of ip files. |
| By default, if the parser cannot parse ip address it considers that the parsed |
| string is maybe a domain name and try dns resolution. The flag "-n" disable this |
| resolution. It is useful for detecting malformed ip lists. Note that if the DNS |
| server is not reachable, the HAProxy configuration parsing may last many minutes |
| waiting for the timeout. During this time no error messages are displayed. The |
| flag "-n" disable this behavior. Note also that during the runtime, this |
| function is disabled for the dynamic acl modifications. |
| |
| There are some restrictions however. Not all methods can be used with all |
| sample fetch methods. Also, if "-m" is used in conjunction with "-f", it must |
| be placed first. The pattern matching method must be one of the following : |
| |
| - "found" : only check if the requested sample could be found in the stream, |
| but do not compare it against any pattern. It is recommended not |
| to pass any pattern to avoid confusion. This matching method is |
| particularly useful to detect presence of certain contents such |
| as headers, cookies, etc... even if they are empty and without |
| comparing them to anything nor counting them. |
| |
| - "bool" : check the value as a boolean. It can only be applied to fetches |
| which return a boolean or integer value, and takes no pattern. |
| Value zero or false does not match, all other values do match. |
| |
| - "int" : match the value as an integer. It can be used with integer and |
| boolean samples. Boolean false is integer 0, true is integer 1. |
| |
| - "ip" : match the value as an IPv4 or IPv6 address. It is compatible |
| with IP address samples only, so it is implied and never needed. |
| |
| - "bin" : match the contents against a hexadecimal string representing a |
| binary sequence. This may be used with binary or string samples. |
| |
| - "len" : match the sample's length as an integer. This may be used with |
| binary or string samples. |
| |
| - "str" : exact match : match the contents against a string. This may be |
| used with binary or string samples. |
| |
| - "sub" : substring match : check that the contents contain at least one of |
| the provided string patterns. This may be used with binary or |
| string samples. |
| |
| - "reg" : regex match : match the contents against a list of regular |
| expressions. This may be used with binary or string samples. |
| |
| - "beg" : prefix match : check that the contents begin like the provided |
| string patterns. This may be used with binary or string samples. |
| |
| - "end" : suffix match : check that the contents end like the provided |
| string patterns. This may be used with binary or string samples. |
| |
| - "dir" : subdir match : check that a slash-delimited portion of the |
| contents exactly matches one of the provided string patterns. |
| This may be used with binary or string samples. |
| |
| - "dom" : domain match : check that a dot-delimited portion of the contents |
| exactly match one of the provided string patterns. This may be |
| used with binary or string samples. |
| |
| For example, to quickly detect the presence of cookie "JSESSIONID" in an HTTP |
| request, it is possible to do : |
| |
| acl jsess_present req.cook(JSESSIONID) -m found |
| |
| In order to apply a regular expression on the first 500 bytes of data in the |
| buffer, one would use the following acl : |
| |
| acl script_tag req.payload(0,500) -m reg -i <script> |
| |
| On systems where the regex library is much slower when using "-i", it is |
| possible to convert the sample to lowercase before matching, like this : |
| |
| acl script_tag req.payload(0,500),lower -m reg <script> |
| |
| All ACL-specific criteria imply a default matching method. Most often, these |
| criteria are composed by concatenating the name of the original sample fetch |
| method and the matching method. For example, "hdr_beg" applies the "beg" match |
| to samples retrieved using the "hdr" fetch method. This matching method is only |
| usable when the keyword is used alone, without any converter. In case any such |
| converter were to be applied after such an ACL keyword, the default matching |
| method from the ACL keyword is simply ignored since what will matter for the |
| matching is the output type of the last converter. Since all ACL-specific |
| criteria rely on a sample fetch method, it is always possible instead to use |
| the original sample fetch method and the explicit matching method using "-m". |
| |
| If an alternate match is specified using "-m" on an ACL-specific criterion, |
| the matching method is simply applied to the underlying sample fetch method. |
| For example, all ACLs below are exactly equivalent : |
| |
| acl short_form hdr_beg(host) www. |
| acl alternate1 hdr_beg(host) -m beg www. |
| acl alternate2 hdr_dom(host) -m beg www. |
| acl alternate3 hdr(host) -m beg www. |
| |
| |
| The table below summarizes the compatibility matrix between sample or converter |
| types and the pattern types to match against. It indicates for each compatible |
| combination the name of the matching method to be used, surrounded with angle |
| brackets ">" and "<" when the method is the default one and will work by |
| default without "-m". |
| |
| +-------------------------------------------------+ |
| | Input sample type | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | pattern type | boolean | integer | ip | string | binary | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | none (presence only) | found | found | found | found | found | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | none (boolean value) |> bool <| bool | | bool | | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | integer (value) | int |> int <| int | int | | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | integer (length) | len | len | len | len | len | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | IP address | | |> ip <| ip | ip | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | exact string | str | str | str |> str <| str | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | prefix | beg | beg | beg | beg | beg | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | suffix | end | end | end | end | end | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | substring | sub | sub | sub | sub | sub | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | subdir | dir | dir | dir | dir | dir | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | domain | dom | dom | dom | dom | dom | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | regex | reg | reg | reg | reg | reg | |
| +----------------------+---------+---------+---------+---------+---------+ |
| | hex block | | | | bin | bin | |
| +----------------------+---------+---------+---------+---------+---------+ |
| |
| |
| 7.1.1. Matching booleans |
| ------------------------ |
| |
| In order to match a boolean, no value is needed and all values are ignored. |
| Boolean matching is used by default for all fetch methods of type "boolean". |
| When boolean matching is used, the fetched value is returned as-is, which means |
| that a boolean "true" will always match and a boolean "false" will never match. |
| |
| Boolean matching may also be enforced using "-m bool" on fetch methods which |
| return an integer value. Then, integer value 0 is converted to the boolean |
| "false" and all other values are converted to "true". |
| |
| |
| 7.1.2. Matching integers |
| ------------------------ |
| |
| Integer matching applies by default to integer fetch methods. It can also be |
| enforced on boolean fetches using "-m int". In this case, "false" is converted |
| to the integer 0, and "true" is converted to the integer 1. |
| |
| Integer matching also supports integer ranges and operators. Note that integer |
| matching only applies to positive values. A range is a value expressed with a |
| lower and an upper bound separated with a colon, both of which may be omitted. |
| |
| For instance, "1024:65535" is a valid range to represent a range of |
| unprivileged ports, and "1024:" would also work. "0:1023" is a valid |
| representation of privileged ports, and ":1023" would also work. |
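| |
| Expressed as ACLs (the names are arbitrary), such ranges could look like : |
| |
| acl high_port dst_port 1024:65535 |
| acl low_port dst_port :1023 |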
| |
| As a special case, some ACL functions support decimal numbers which are in fact |
| two integers separated by a dot. This is used with some version checks for |
| instance. All integer properties apply to those decimal numbers, including |
| ranges and operators. |
| |
| For an easier usage, comparison operators are also supported. Note that using |
| operators with ranges does not make much sense and is strongly discouraged. |
| Similarly, it does not make much sense to perform order comparisons with a set |
| of values. |
| |
| Available operators for integer matching are : |
| |
| eq : true if the tested value equals at least one value |
| ge : true if the tested value is greater than or equal to at least one value |
| gt : true if the tested value is greater than at least one value |
| le : true if the tested value is less than or equal to at least one value |
| lt : true if the tested value is less than at least one value |
| |
| For instance, the following ACL matches any negative Content-Length header : |
| |
| acl negative-length req.hdr_val(content-length) lt 0 |
| |
| This one matches SSL versions between 3.0 and 3.1 (inclusive) : |
| |
| acl sslv3 req.ssl_ver 3:3.1 |
| |
| |
| 7.1.3. Matching strings |
| ----------------------- |
| |
| String matching applies to string or binary fetch methods, and exists in 6 |
| different forms : |
| |
| - exact match (-m str) : the extracted string must exactly match the |
| patterns; |
| |
| - substring match (-m sub) : the patterns are looked up inside the |
| extracted string, and the ACL matches if any of them is found inside; |
| |
| - prefix match (-m beg) : the patterns are compared with the beginning of |
| the extracted string, and the ACL matches if any of them matches. |
| |
| - suffix match (-m end) : the patterns are compared with the end of the |
| extracted string, and the ACL matches if any of them matches. |
| |
| - subdir match (-m dir) : the patterns are looked up anywhere inside the |
| extracted string, delimited with slashes ("/"), the beginning or the end |
| of the string. The ACL matches if any of them matches. As such, the string |
| "/images/png/logo/32x32.png", would match "/images", "/images/png", |
| "images/png", "/png/logo", "logo/32x32.png" or "32x32.png" but not "png" |
| nor "32x32". |
| |
| - domain match (-m dom) : the patterns are looked up anywhere inside the |
| extracted string, delimited with dots ("."), colons (":"), slashes ("/"), |
| question marks ("?"), the beginning or the end of the string. This is made |
| to be used with URLs. Leading and trailing delimiters in the pattern are |
| ignored. The ACL matches if any of them matches. As such, in the example |
| string "http://www1.dc-eu.example.com:80/blah", the patterns "http", |
| "www1", ".www1", "dc-eu", "example", "com", "80", "dc-eu.example", |
| "blah", ":www1:", "dc-eu.example:80" would match, but not "eu" nor "dc". |
| Using it to match domain suffixes for filtering or routing is generally |
| not a good idea, as the routing could easily be fooled by prepending the |
| matching prefix in front of another domain for example. |
| |
| String matching applies to verbatim strings as they are passed, with the |
| exception of the backslash ("\") which makes it possible to escape some |
| characters such as the space. If the "-i" flag is passed before the first |
| string, then the matching will be performed ignoring the case. In order |
| to match the string "-i", either set it second, or pass the "--" flag |
| before the first string. Same applies of course to match the string "--". |
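| |
| As a brief illustration (the ACL and header names are arbitrary) : |
| |
| # case-insensitive prefix match on the Host header |
| acl is_www hdr_beg(host) -i www. |
| # match the literal string "-i" by ending the flags with "--" |
| acl opt_i req.hdr(x-option) -m str -- -i |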
| |
| Do not use string matches for binary fetches which might contain null bytes |
| (0x00), as the comparison stops at the occurrence of the first null byte. |
| Instead, convert the binary fetch to a hex string with the hex converter first. |
| |
| Example: |
| # matches if the string <tag> is present in the binary sample |
| acl tag_found req.payload(0,0),hex -m sub 3C7461673E |
| |
| |
| 7.1.4. Matching regular expressions (regexes) |
| --------------------------------------------- |
| |
| Just like with string matching, regex matching applies to verbatim strings as |
| they are passed, with the exception of the backslash ("\") which makes it |
| possible to escape some characters such as the space. If the "-i" flag is |
| passed before the first regex, then the matching will be performed ignoring |
| the case. In order to match the string "-i", either set it second, or pass |
| the "--" flag before the first string. Same principle applies of course to |
| match the string "--". |
| |
| |
| 7.1.5. Matching arbitrary data blocks |
| ------------------------------------- |
| |
| It is possible to match some extracted samples against a binary block which may |
| not safely be represented as a string. For this, the patterns must be passed as |
| a series of hexadecimal digits in an even number, when the match method is set |
| to binary. Each sequence of two digits will represent a byte. The hexadecimal |
| digits may be used upper or lower case. |
| |
| Example : |
| # match "Hello\n" in the input stream (\x48 \x65 \x6c \x6c \x6f \x0a) |
| acl hello req.payload(0,6) -m bin 48656c6c6f0a |
| |
| |
| 7.1.6. Matching IPv4 and IPv6 addresses |
| --------------------------------------- |
| |
| IPv4 addresses values can be specified either as plain addresses or with a |
| netmask appended, in which case the IPv4 address matches whenever it is |
| within the network. Plain addresses may also be replaced with a resolvable |
| host name, but this practice is generally discouraged as it makes it more |
| difficult to read and debug configurations. If hostnames are used, you should |
| at least ensure that they are present in /etc/hosts so that the configuration |
| does not depend on any random DNS match at the moment the configuration is |
| parsed. |
| |
| The dotted IPv4 address notation is supported in both the regular and the |
| abbreviated form with all-0-octets omitted: |
| |
| +------------------+------------------+------------------+ |
| | Example 1 | Example 2 | Example 3 | |
| +------------------+------------------+------------------+ |
| | 192.168.0.1 | 10.0.0.12 | 127.0.0.1 | |
| | 192.168.1 | 10.12 | 127.1 | |
| | 192.168.0.1/22 | 10.0.0.12/8 | 127.0.0.1/8 | |
| | 192.168.1/22 | 10.12/8 | 127.1/8 | |
| +------------------+------------------+------------------+ |
| |
| Notice that this is different from RFC 4632 CIDR address notation in which |
| 192.168.42/24 would be equivalent to 192.168.42.0/24. |
| |
| IPv6 may be entered in their usual form, with or without a netmask appended. |
| Only bit counts are accepted for IPv6 netmasks. In order to avoid any risk of |
| trouble with randomly resolved IP addresses, host names are never allowed in |
| IPv6 patterns. |
| |
| HAProxy is also able to match IPv4 addresses with IPv6 addresses in the |
| following situations : |
| - tested address is IPv4, pattern address is IPv4, the match applies |
| in IPv4 using the supplied mask if any. |
| - tested address is IPv6, pattern address is IPv6, the match applies |
| in IPv6 using the supplied mask if any. |
| - tested address is IPv6, pattern address is IPv4, the match applies in IPv4 |
| using the pattern's mask if the IPv6 address matches with 2002:IPV4::, |
| ::IPV4 or ::ffff:IPV4, otherwise it fails. |
| - tested address is IPv4, pattern address is IPv6, the IPv4 address is first |
| converted to IPv6 by prefixing ::ffff: in front of it, then the match is |
| applied in IPv6 using the supplied IPv6 mask. |
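| |
| As a short illustration (the network values are arbitrary), IPv4 and IPv6 |
| patterns may thus be mixed in a single ACL : |
| |
| acl internal_src src 10.0.0.0/8 192.168.0.0/16 fd00::/8 |
| http-request deny unless internal_src |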
| |
| |
| 7.2. Using ACLs to form conditions |
| ---------------------------------- |
| |
| Some actions are only performed upon a valid condition. A condition is a |
| combination of ACLs with operators. 3 operators are supported : |
| |
| - AND (implicit) |
| - OR (explicit with the "or" keyword or the "||" operator) |
| - Negation with the exclamation mark ("!") |
| |
| A condition is formed as a disjunctive form: |
| |
| [!]acl1 [!]acl2 ... [!]acln { or [!]acl1 [!]acl2 ... [!]acln } ... |
| |
| Such conditions are generally used after an "if" or "unless" statement, |
| indicating when the condition will trigger the action. |
| |
| For instance, to block HTTP requests to the "*" URL with methods other than |
| "OPTIONS", as well as POST requests without content-length, and GET or HEAD |
| requests with a content-length greater than 0, and finally every request which |
| is not either GET/HEAD/POST/OPTIONS ! |
| |
| acl missing_cl req.hdr_cnt(Content-length) eq 0 |
| http-request deny if HTTP_URL_STAR !METH_OPTIONS || METH_POST missing_cl |
| http-request deny if METH_GET HTTP_CONTENT |
| http-request deny unless METH_GET or METH_POST or METH_OPTIONS |
| |
| To select a different backend for requests to static contents on the "www" site |
| and to every request on the "img", "video", "download" and "ftp" hosts : |
| |
| acl url_static path_beg /static /images /img /css |
| acl url_static path_end .gif .png .jpg .css .js |
| acl host_www hdr_beg(host) -i www |
| acl host_static hdr_beg(host) -i img. video. download. ftp. |
| |
| # now use backend "static" for all static-only hosts, and for static URLs |
| # of host "www". Use backend "www" for the rest. |
| use_backend static if host_static or host_www url_static |
| use_backend www if host_www |
| |
| It is also possible to form rules using "anonymous ACLs". Those are unnamed ACL |
| expressions that are built on the fly without needing to be declared. They must |
| be enclosed between braces, with a space before and after each brace (because |
| the braces must be seen as independent words). Example : |
| |
| The following rule : |
| |
| acl missing_cl req.hdr_cnt(Content-length) eq 0 |
| http-request deny if METH_POST missing_cl |
| |
| Can also be written that way : |
| |
| http-request deny if METH_POST { req.hdr_cnt(Content-length) eq 0 } |
| |
| It is generally not recommended to use this construct because it's a lot easier |
| to leave errors in the configuration when written that way. However, for very |
| simple rules matching only one source IP address for instance, it can make more |
| sense to use them than to declare ACLs with random names. Another example of |
| good use is the following : |
| |
| With named ACLs : |
| |
| acl site_dead nbsrv(dynamic) lt 2 |
| acl site_dead nbsrv(static) lt 2 |
| monitor fail if site_dead |
| |
| With anonymous ACLs : |
| |
| monitor fail if { nbsrv(dynamic) lt 2 } || { nbsrv(static) lt 2 } |
| |
| See section 4.2 for detailed help on the "http-request deny" and "use_backend" |
| keywords. |
| |
| |
| 7.3. Fetching samples |
| --------------------- |
| |
| Historically, sample fetch methods were only used to retrieve data to match |
| against patterns using ACLs. With the arrival of stick-tables, a new class of |
| sample fetch methods was created, most often sharing the same syntax as their |
| ACL counterpart. These sample fetch methods are also known as "fetches". As |
| of now, ACLs and fetches have converged. All ACL fetch methods have been made |
| available as fetch methods, and ACLs may use any sample fetch method as well. |
| |
| This section details all available sample fetch methods and their output type. |
| Some sample fetch methods have deprecated aliases that are used to maintain |
| compatibility with existing configurations. They are then explicitly marked as |
| deprecated and should not be used in new setups. |
| |
| The ACL derivatives are also indicated when available, with their respective |
| matching methods. These ones all have a well defined default pattern matching |
| method, so it is never necessary (though allowed) to pass the "-m" option to |
| indicate how the sample will be matched using ACLs. |
| |
| As indicated in the sample type versus matching compatibility matrix above, |
| when using a generic sample fetch method in an ACL, the "-m" option is |
| mandatory unless the sample type is one of boolean, integer, IPv4 or IPv6. When |
| the same keyword exists as an ACL keyword and as a standard fetch method, the |
| ACL engine will automatically pick the ACL-only one by default. |
| |
| Some of these keywords support one or multiple mandatory arguments, and one or |
| multiple optional arguments. These arguments are strongly typed and are checked |
| when the configuration is parsed so that there is no risk of running with an |
| incorrect argument (e.g. an unresolved backend name). Fetch function arguments |
| are passed between parenthesis and are delimited by commas. When an argument |
| is optional, it will be indicated below between square brackets ('[ ]'). When |
| all arguments are optional, the parenthesis may be omitted. |
| |
| Thus, the syntax of a standard sample fetch method is one of the following : |
| - name |
| - name(arg1) |
| - name(arg1,arg2) |
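| |
| For instance (the header names are only illustrative), each form may appear |
| inside a sample expression such as : |
| |
| http-request set-header X-Src %[src] |
| http-request set-header X-Host %[req.hdr(host)] |
| http-request set-header X-Client %[req.hdr_ip(x-forwarded-for,-1)] |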
| |
| |
| 7.3.1. Converters |
| ----------------- |
| |
| Sample fetch methods may be combined with transformations to be applied on top |
| of the fetched sample (also called "converters"). These combinations form what |
| is called "sample expressions" and the result is a "sample". Initially this |
| was only supported by "stick on" and "stick store-request" directives but this |
| has now been extended to all places where samples may be used (ACLs, |
| log-format, unique-id-format, add-header, ...). |
| |
| These transformations are enumerated as a series of specific keywords after the |
| sample fetch method. These keywords may equally be appended immediately after |
| the fetch keyword's argument, delimited by a comma. These keywords can also |
| support some arguments (e.g. a netmask) which must be passed in parenthesis. |
| |
| A certain category of converters are bitwise and arithmetic operators which |
| support performing basic operations on integers. Some bitwise operations are |
| supported (and, or, xor, cpl) and some arithmetic operations are supported |
| (add, sub, mul, div, mod, neg). Some comparators are provided (odd, even, not, |
| bool) which make it possible to report a match without having to write an ACL. |
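| |
| As a small illustration (the header and variable names are arbitrary) : |
| |
| # report whether the destination port is odd, without declaring an ACL |
| http-request set-header X-Odd-Port %[dst_port,odd] |
| # convert a Content-Length expressed in bytes into kilobytes |
| http-request set-var(txn.kbytes) req.hdr_val(content-length),div(1024) |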
| |
| The currently available list of transformation keywords includes : |
| |
| 51d.single(<prop>[,<prop>*]) |
| Returns values for the properties requested as a string, where values are |
| separated by the delimiter specified with "51degrees-property-separator". |
| The device is identified using the User-Agent header passed to the |
| converter. The function can be passed up to five property names, and if a |
| property name can't be found, the value "NoData" is returned. |
| |
| Example : |
| # Here the header "X-51D-DeviceTypeMobileTablet" is added to the request, |
| # containing values for the three properties requested by using the |
| # User-Agent passed to the converter. |
| frontend http-in |
| bind *:8081 |
| default_backend servers |
| http-request set-header X-51D-DeviceTypeMobileTablet \ |
| %[req.fhdr(User-Agent),51d.single(DeviceType,IsMobile,IsTablet)] |
| |
| rfc7239_is_valid |
| Returns true if the input header is an RFC 7239 compliant header value and |
| false otherwise. |
| |
| Example: |
| acl valid req.hdr(forwarded),rfc7239_is_valid |
| #input: "for=127.0.0.1;proto=http" |
| # output: TRUE |
| #input: "proto=custom" |
| # output: FALSE |
| |
| rfc7239_field(<field>) |
| Extracts a single field/parameter from RFC 7239 compliant header value input. |
| |
| Supported fields are: |
| - proto: either 'http' or 'https' |
| - host: http compliant host |
| - for: RFC7239 node |
| - by: RFC7239 node |
| |
| More info here: |
| https://www.rfc-editor.org/rfc/rfc7239.html#section-6 |
| |
| Example: |
| # extract host field from forwarded header and store it in req.fhost var |
| http-request set-var(req.fhost) req.hdr(forwarded),rfc7239_field(host) |
| #input: "proto=https;host=\"haproxy.org:80\"" |
| # output: "haproxy.org:80" |
| |
| # extract for field from forwarded header and store it in req.ffor var |
| http-request set-var(req.ffor) req.hdr(forwarded),rfc7239_field(for) |
| #input: "proto=https;host=\"haproxy.org:80\";for=\"127.0.0.1:9999\"" |
| # output: "127.0.0.1:9999" |
| |
| rfc7239_n2nn |
| Converts RFC7239 node (provided by 'for' or 'by' 7239 header fields) |
| into its corresponding nodename final form: |
| - ipv4 address |
| - ipv6 address |
| - 'unknown' |
| - '_obfs' identifier |
| |
| Example: |
| # extract 'for' field from forwarded header, extract nodename from |
| # resulting node identifier and store the result in req.fnn |
| http-request set-var(req.fnn) req.hdr(forwarded),rfc7239_field(for),rfc7239_n2nn |
| #input: "127.0.0.1:9999" |
| # output: 127.0.0.1 (ipv4) |
| #input: "[ab:cd:ff:ff:ff:ff:ff:ff]:9998" |
| # output: ab:cd:ff:ff:ff:ff:ff:ff (ipv6) |
| #input: "_name:_port" |
| # output: "_name" (string) |
| |
| rfc7239_n2np |
| Converts RFC7239 node (provided by 'for' or 'by' 7239 header fields) |
| into its corresponding nodeport final form: |
| - unsigned integer |
| - '_obfs' identifier |
| |
| Example: |
| # extract 'by' field from forwarded header, extract node port from |
| # resulting node identifier and store the result in req.fnp |
| http-request set-var(req.fnp) req.hdr(forwarded),rfc7239_field(by),rfc7239_n2np |
| #input: "127.0.0.1:9999" |
| # output: 9999 (integer) |
| #input: "[ab:cd:ff:ff:ff:ff:ff:ff]:9998" |
| # output: 9998 (integer) |
| #input: "_name:_port" |
| # output: "_port" (string) |
| |
| add(<value>) |
| Adds <value> to the input value of type signed integer, and returns the |
| result as a signed integer. <value> can be a numeric value or a variable |
| name. The name of the variable starts with an indication about its scope. The |
| scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and response) |
| "req" : the variable is shared only during request processing |
| "res" : the variable is shared only during response processing |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
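| |
| Example (the variable names are hypothetical) : |
| # add the offset stored in txn.offset to the Content-Length value |
| http-request set-var(txn.total) req.hdr_val(content-length),add(txn.offset) |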
| |
| add_item(<delim>,[<var>][,<suff>]) |
| Concatenates a minimum of 2 and up to 3 fields after the current sample which |
| is then turned into a string. The first one, <delim>, is a constant string, |
| that will be appended immediately after the existing sample if an existing |
| sample is not empty and either the <var> or the <suff> is not empty. The |
| second one, <var>, is a variable name. The variable will be looked up, its |
| contents converted to a string, and it will be appended immediately after |
| the <delim> part. If the variable is not found, nothing is appended. It is |
| optional and may optionally be followed by a constant string <suff>, however |
| if <var> is omitted, then <suff> is mandatory. This converter is similar to |
| the concat converter and can be used to build new variables made of a |
| succession of other variables, but the main difference is that it checks |
| whether adding a delimiter makes sense, which would not be the case if e.g. |
| the current sample is empty. That situation would require 2 separate rules |
| using the concat converter, where the first rule would have to check if the |
| current sample string is empty before adding a delimiter. If commas or closing |
| parenthesis are needed as delimiters, they must be protected by quotes or |
| backslashes, themselves protected so that they are not stripped by the first |
| level parser (please see section 2.2 for quoting and escaping). See examples |
| below. |
| |
| Example: |
| http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1,"(site1)") if src,in_table(site1)' |
| http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score2,"(site2)") if src,in_table(site2)' |
| http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score3,"(site3)") if src,in_table(site3)' |
| http-request set-header x-tagged %[var(req.tagged)] |
| |
| http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1),add_item(",",req.score2)' |
| http-request set-var(req.tagged) 'var(req.tagged),add_item(",",,(site1))' if src,in_table(site1) |
| |
| aes_gcm_dec(<bits>,<nonce>,<key>,<aead_tag>) |
| Decrypts the raw byte input using the AES128-GCM, AES192-GCM or |
| AES256-GCM algorithm, depending on the <bits> parameter. All other parameters |
| need to be base64 encoded and the returned result is in raw byte format. |
| If the <aead_tag> validation fails, the converter doesn't return any data. |
| The <nonce>, <key> and <aead_tag> can either be strings or variables. This |
| converter requires at least OpenSSL 1.0.1. |
| |
| Example: |
| http-response set-header X-Decrypted-Text %[var(txn.enc),\ |
| aes_gcm_dec(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==,txn.aead_tag)] |
| |
| and(<value>) |
| Performs a bitwise "AND" between <value> and the input value of type signed |
| integer, and returns the result as a signed integer. <value> can be a |
| numeric value or a variable name. The name of the variable starts with an |
| indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and response) |
| "req" : the variable is shared only during request processing |
| "res" : the variable is shared only during response processing |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| b64dec |
| Converts (decodes) a base64 encoded input string to its binary |
| representation. It performs the inverse operation of base64(). |
| For base64url("URL and Filename Safe Alphabet" (RFC 4648)) variant |
| see "ub64dec". |
| |
| base64 |
| Converts a binary input sample to a base64 string. It is used to log or |
| transfer binary content in a way that can be reliably transferred (e.g. |
| an SSL ID can be copied in a header). For base64url("URL and Filename |
| Safe Alphabet" (RFC 4648)) variant see "ub64enc". |
| |
| be2dec(<separator>,<chunk_size>,[<truncate>]) |
| Converts big-endian binary input sample to a string containing an unsigned |
| integer number per <chunk_size> input bytes. <separator> is put every |
| <chunk_size> binary input bytes if specified. The <truncate> flag indicates |
| whether binary input is truncated at <chunk_size> boundaries. <chunk_size> |
| maximum value is limited by the size of long long int (8 bytes). |
| |
| Example: |
| bin(01020304050607),be2dec(:,2) # 258:772:1286:7 |
| bin(01020304050607),be2dec(-,2,1) # 258-772-1286 |
| bin(01020304050607),be2dec(,2,1) # 2587721286 |
| bin(7f000001),be2dec(.,1) # 127.0.0.1 |
| |
| be2hex([<separator>],[<chunk_size>],[<truncate>]) |
| Converts big-endian binary input sample to a hex string containing two hex |
| digits per input byte. It is used to log or transfer hex dumps of some |
| binary input data in a way that can be reliably transferred (e.g. an SSL ID |
| can be copied in a header). <separator> is put every <chunk_size> binary |
| input bytes if specified. The <truncate> flag indicates whether binary input |
| is truncated at <chunk_size> boundaries. |
| |
| Example: |
| bin(01020304050607),be2hex # 01020304050607 |
| bin(01020304050607),be2hex(:,2) # 0102:0304:0506:07 |
| bin(01020304050607),be2hex(--,2,1) # 0102--0304--0506 |
| bin(0102030405060708),be2hex(,3,1) # 010203040506 |
| |
| bool |
| Returns a boolean TRUE if the input value of type signed integer is |
| non-null, otherwise returns FALSE. Used in conjunction with and(), it can be |
| used to report true/false for bit testing on input values (e.g. verify the |
| presence of a flag). |
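| |
| Example (the variable name is hypothetical) : |
| # true when bit 2 (value 4) is set in the flags stored in txn.flags |
| acl flag4_set var(txn.flags),and(4),bool |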
| |
| bytes(<offset>[,<length>]) |
| Extracts some bytes from an input binary sample. The result is a binary |
| sample starting at an offset (in bytes) of the original sample and |
| optionally truncated at the given length. |
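| |
| A brief illustration (following the notation used in the examples above) : |
| bin(01020304),bytes(1,2),hex # 0203 |
| bin(01020304),bytes(2),hex # 0304 |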
| |
| concat([<start>],[<var>],[<end>]) |
| Concatenates up to 3 fields after the current sample which is then turned to |
| a string. The first one, <start>, is a constant string, that will be appended |
| immediately after the existing sample. It may be omitted if not used. The |
| second one, <var>, is a variable name. The variable will be looked up, its |
| contents converted to a string, and it will be appended immediately after the |
| <first> part. If the variable is not found, nothing is appended. It may be |
| omitted as well. The third field, <end> is a constant string that will be |
| appended after the variable. It may also be omitted. Together, these elements |
| allow to concatenate variables with delimiters to an existing set of |
| variables. This can be used to build new variables made of a succession of |
| other variables, such as colon-delimited values. If commas or closing |
| parenthesis are needed as delimiters, they must be protected by quotes or |
| backslashes, themselves protected so that they are not stripped by the first |
| level parser. This is often used to build composite variables from other |
| ones, but sometimes using a format string with multiple fields may be more |
| convenient. See examples below. |
| |
| Example: |
| tcp-request session set-var(sess.ip) src |
| tcp-request session set-var(sess.port) src_port |
| tcp-request session set-var(sess.dn) ssl_c_s_dn |
| tcp-request session set-var(txn.sig) str(),concat(<ip=,sess.ip,>),concat(<dn=,sess.dn,>) |
| tcp-request session set-var(txn.ipport) "str(),concat('addr=(',sess.ip),concat(',',sess.port,')')" |
| tcp-request session set-var-fmt(txn.ipport) "addr=(%[sess.ip],%[sess.port])" ## does the same |
| http-request set-header x-hap-sig %[var(txn.sig)] |
| |
| cpl |
| Takes the input value of type signed integer, applies a ones-complement |
| (flips all bits) and returns the result as a signed integer. |
| |
| crc32([<avalanche>]) |
| Hashes a binary input sample into an unsigned 32-bit quantity using the CRC32 |
| hash function. Optionally, it is possible to apply a full avalanche hash |
| function to the output if the optional <avalanche> argument equals 1. This |
| converter uses the same functions as used by the various hash-based load |
| balancing algorithms, so it will provide exactly the same results. It is |
| provided for compatibility with other software which want a CRC32 to be |
| computed on some input keys, so it follows the most common implementation as |
| found in Ethernet, Gzip, PNG, etc... It is slower than the other algorithms |
| but may provide a better or at least less predictable distribution. It must |
| not be used for security purposes as a 32-bit hash is trivial to break. See |
| also "djb2", "sdbm", "wt6", "crc32c" and the "hash-type" directive. |
| |
| crc32c([<avalanche>]) |
| Hashes a binary input sample into an unsigned 32-bit quantity using the CRC32C |
| hash function. Optionally, it is possible to apply a full avalanche hash |
| function to the output if the optional <avalanche> argument equals 1. This |
| converter uses the same functions as described in RFC4960, Appendix B [8]. |
| It is provided for compatibility with other software which want a CRC32C to be |
| computed on some input keys. It is slower than the other algorithms and it must |
| not be used for security purposes as a 32-bit hash is trivial to break. See |
| also "djb2", "sdbm", "wt6", "crc32" and the "hash-type" directive. |
| |
| cut_crlf |
| Cuts the string representation of the input sample on the first carriage |
| return ('\r') or newline ('\n') character found. Only the string length is |
| updated. |
| |
| da-csv-conv(<prop>[,<prop>*]) |
| Asks the DeviceAtlas converter to identify the User Agent string passed on |
| input, and to emit a string made of the concatenation of the properties |
| enumerated in argument, delimited by the separator defined by the global |
| keyword "deviceatlas-property-separator", or by default the pipe character |
| ('|'). There's a limit of 12 different properties imposed by the HAProxy |
| configuration language. |
| |
| Example: |
| frontend www |
| bind *:8881 |
| default_backend servers |
| http-request set-header X-DeviceAtlas-Data %[req.fhdr(User-Agent),da-csv(primaryHardwareType,osName,osVersion,browserName,browserVersion,browserRenderingEngine)] |
| |
| debug([<prefix>][,<destination>]) |
| This converter is used as debug tool. It takes a capture of the input sample |
| and sends it to event sink <destination>, which may designate a ring buffer |
| such as "buf0", as well as "stdout", or "stderr". Available sinks may be |
| checked at run time by issuing "show events" on the CLI. When not specified, |
| the output will be "buf0", which may be consulted via the CLI's "show events" |
| command. An optional prefix <prefix> may be passed to help distinguish |
| outputs from multiple expressions. It will then appear before the colon in |
| the output message. The input sample is passed as-is on the output, so that |
| it is safe to insert the debug converter anywhere in a chain, even with non- |
| printable sample types. |
| |
| Example: |
| tcp-request connection track-sc0 src,debug(track-sc) |
| |
| digest(<algorithm>) |
| Converts a binary input sample to a message digest. The result is a binary |
| sample. The <algorithm> must be an OpenSSL message digest name (e.g. sha256). |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
| |
| div(<value>) |
| Divides the input value of type signed integer by <value>, and returns the |
| result as a signed integer. If <value> is null, the largest unsigned |
| integer is returned (typically 2^63-1). <value> can be a numeric value or a |
| variable name. The name of the variable starts with an indication about its |
| scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and response) |
| "req" : the variable is shared only during request processing |
| "res" : the variable is shared only during response processing |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| djb2([<avalanche>]) |
| Hashes a binary input sample into an unsigned 32-bit quantity using the DJB2 |
| hash function. Optionally, it is possible to apply a full avalanche hash |
| function to the output if the optional <avalanche> argument equals 1. This |
| converter uses the same functions as used by the various hash-based load |
| balancing algorithms, so it will provide exactly the same results. It is |
| mostly intended for debugging, but can be used as a stick-table entry to |
| collect rough statistics. It must not be used for security purposes as a |
| 32-bit hash is trivial to break. See also "crc32", "sdbm", "wt6", "crc32c", |
| and the "hash-type" directive. |
| |
| even |
| Returns a boolean TRUE if the input value of type signed integer is even, |
| otherwise returns FALSE. It is functionally equivalent to "not,and(1),bool". |
| |
| field(<index>,<delimiters>[,<count>]) |
| Extracts the substring at the given index counting from the beginning |
| (positive index) or from the end (negative index) considering given delimiters |
| from an input string. Indexes start at 1 or -1 and delimiters are a string |
| formatted list of chars. Optionally you can specify <count> of fields to |
| extract (default: 1). Value of 0 indicates extraction of all remaining |
| fields. |
| |
| Example : |
| str(f1_f2_f3__f5),field(4,_) # <empty> |
| str(f1_f2_f3__f5),field(5,_) # f5 |
| str(f1_f2_f3__f5),field(2,_,0) # f2_f3__f5 |
| str(f1_f2_f3__f5),field(2,_,2) # f2_f3 |
| str(f1_f2_f3__f5),field(-2,_,3) # f2_f3_ |
| str(f1_f2_f3__f5),field(-3,_,0) # f1_f2_f3 |
| |
| fix_is_valid |
| Parses a binary payload and performs sanity checks regarding FIX (Financial |
| Information eXchange): |
| |
| - checks that all tag IDs and values are not empty and that the tag IDs are |
| numeric |
| - checks the BeginString tag is the first tag with a valid FIX version |
| - checks the BodyLength tag is the second one with the right body length |
| - checks the MsgType tag is the third tag. |
| - checks that last tag in the message is the CheckSum tag with a valid |
| checksum |
| |
| Due to current HAProxy design, only the first message sent by the client and |
| the server can be parsed. |
| |
| This converter returns a boolean, true if the payload contains a valid FIX |
| message, false if not. |
| |
| See also the fix_tag_value converter. |
| |
| Example: |
| tcp-request inspect-delay 10s |
| tcp-request content reject unless { req.payload(0,0),fix_is_valid } |
| |
| fix_tag_value(<tag>) |
| Parses a FIX (Financial Information eXchange) message and extracts the value |
| from the tag <tag>. <tag> can be a string or an integer pointing to the |
| desired tag. Any integer value is accepted, but only the following strings |
| are translated into their integer equivalent: BeginString, BodyLength, |
| MsgType, SenderCompID, TargetCompID, CheckSum. More tag names can be easily |
| added. |
| |
| Due to current HAProxy design, only the first message sent by the client and |
| the server can be parsed. No message validation is performed by this |
| converter. It is highly recommended to validate the message first using |
| fix_is_valid converter. |
| |
| See also the fix_is_valid converter. |
| |
| Example: |
| tcp-request inspect-delay 10s |
| tcp-request content reject unless { req.payload(0,0),fix_is_valid } |
| # MsgType tag ID is 35, so both lines below will return the same content |
| tcp-request content set-var(txn.foo) req.payload(0,0),fix_tag_value(35) |
| tcp-request content set-var(txn.bar) req.payload(0,0),fix_tag_value(MsgType) |
| |
| hex |
| Converts a binary input sample to a hex string containing two hex digits per |
| input byte. It is used to log or transfer hex dumps of some binary input data |
| in a way that can be reliably transferred (e.g. an SSL ID can be copied in a |
| header). |
| |
| hex2i |
| Converts a hex string containing two hex digits per input byte to an |
| integer. If the input value cannot be converted, then zero is returned. |
| |
| htonl |
| Converts the input integer value to its 32-bit binary representation in the |
| network byte order. Because sample fetches carry signed 64-bit integers, when |
| this converter is used, the input integer value is first cast to an |
| unsigned 32-bit integer. |
| |
| hmac(<algorithm>,<key>) |
| Converts a binary input sample to a message authentication code with the given |
| key. The result is a binary sample. The <algorithm> must be one of the |
| registered OpenSSL message digest names (e.g. sha256). The <key> parameter must |
| be base64 encoded and can either be a string or a variable. |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
| |
| host_only |
| Converts a string which contains a Host header value and removes its port. |
| The input must respect the format of the host header value |
| (rfc9110#section-7.2). It supports inputs such as: hostname, |
| hostname:80, 127.0.0.1, 127.0.0.1:80, [::1], [::1]:80. |
| |
| This converter also sets the string in lowercase. |
| |
| See also: "port_only" converter which will return the port. |
| |
| http_date([<offset>],[<unit>]) |
| Converts an integer supposed to contain a date since epoch to a string |
| representing this date in a format suitable for use in HTTP header fields. If |
| an offset value is specified, then it is added to the date before the |
| conversion is operated. This is particularly useful to emit Date header fields, |
| Expires values in responses when combined with a positive offset, or |
| Last-Modified values when the offset is negative. |
| If a unit value is specified, then consider the timestamp as either |
| "s" for seconds (default behavior), "ms" for milliseconds, or "us" for |
| microseconds since epoch. Offset is assumed to have the same unit as |
| input timestamp. |
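| |
| Example (the one-hour offset is arbitrary) : |
| # emit an Expires header set one hour in the future |
| http-response set-header Expires %[date(3600),http_date] |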
| |
| iif(<true>,<false>) |
| Returns the <true> string if the input value is true. Returns the <false> |
| string otherwise. |
| |
| Example: |
| http-request set-header x-forwarded-proto %[ssl_fc,iif(https,http)] |
| |
| in_table(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, a boolean false |
| is returned. Otherwise a boolean true is returned. This can be used to verify |
| the presence of a certain key in a table tracking some elements (e.g. whether |
| or not a source IP address or an Authorization header was already seen). |
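| |
| Example (the table name "abusers" is hypothetical) : |
| # deny clients whose source address is already present in table "abusers" |
| acl known_abuser src,in_table(abusers) |
| http-request deny if known_abuser |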
| |
| ipmask(<mask4>,[<mask6>]) |
| Apply a mask to an IP address, and use the result for lookups and storage. |
| This can be used to make all hosts within a certain mask to share the same |
| table entries and as such use the same server. The mask4 can be passed in |
| dotted form (e.g. 255.255.255.0) or in CIDR form (e.g. 24). The mask6 can |
| be passed in quadruplet form (e.g. ffff:ffff::) or in CIDR form (e.g. 64). |
| If no mask6 is given IPv6 addresses will fail to convert for backwards |
| compatibility reasons. |
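| |
| Example (the mask values and variable name are illustrative) : |
| # store the client's /24 (IPv4) or /64 (IPv6) network in a variable |
| http-request set-var(txn.client_net) src,ipmask(24,64) |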
| |
| json([<input-code>]) |
| Escapes the input string and produces an ASCII output string ready to use as a |
| JSON string. The converter tries to decode the input string according to the |
| <input-code> parameter. It can be "ascii", "utf8", "utf8s", "utf8p" or |
| "utf8ps". The "ascii" decoder never fails. The "utf8" decoder detects 3 types |
| of errors: |
| - bad UTF-8 sequence (lone continuation byte, bad number of continuation |
| bytes, ...) |
| - invalid range (the decoded value is within a UTF-8 prohibited range), |
| - code overlong (the value is encoded with more bytes than necessary). |
| |
| The UTF-8 JSON encoding can produce a "too long value" error when the UTF-8 |
| character is greater than 0xffff because the JSON string escape specification |
| only authorizes 4 hex digits for the value encoding. The UTF-8 decoder exists |
| in 4 variants designated by a combination of two suffix letters : "p" for |
| "permissive" and "s" for "silently ignore". The behaviors of the decoders |
| are : |
| - "ascii" : never fails; |
| - "utf8" : fails on any detected errors; |
| - "utf8s" : never fails, but removes characters corresponding to errors; |
| - "utf8p" : accepts and fixes the overlong errors, but fails on any other |
| error; |
| - "utf8ps" : never fails, accepts and fixes the overlong errors, but removes |
| characters corresponding to the other errors. |
| |
| This converter is particularly useful for building properly escaped JSON for |
| logging to servers which consume JSON-formatted traffic logs. |
| |
| Example: |
| capture request header Host len 15 |
| capture request header user-agent len 150 |
| log-format '{"ip":"%[src]","user-agent":"%[capture.req.hdr(1),json(utf8s)]"}' |
| |
| Input request from client 127.0.0.1: |
| GET / HTTP/1.0 |
| User-Agent: Very "Ugly" UA 1/2 |
| |
| Output log: |
| {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"} |
| |
| json_query(<json_path>,[<output_type>]) |
| The json_query converter supports the JSON types string, boolean and |
| number. Floating point numbers will be returned as a string. By |
| specifying the output_type 'int' the value will be converted to an |
| integer. If conversion is not possible the json_query converter fails. |
| |
| <json_path> must be a valid JSON Path string as defined in |
| https://datatracker.ietf.org/doc/draft-ietf-jsonpath-base/ |
| |
| Example: |
| # get an integer value from the request body |
| # "{"integer":4}" => 5 |
| http-request set-var(txn.pay_int) req.body,json_query('$.integer','int'),add(1) |
| |
| # get a key with '.' in the name |
| # {"my.key":"myvalue"} => myvalue |
| http-request set-var(txn.pay_mykey) req.body,json_query('$.my\\.key') |
| |
| # {"boolean-false":false} => 0 |
| http-request set-var(txn.pay_boolean_false) req.body,json_query('$.boolean-false') |
| |
| # get the value of the key 'iss' from a JWT Bearer token |
| http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss') |
| |
| jwt_header_query([<json_path>],[<output_type>]) |
| When given a JSON Web Token (JWT) in input, either returns the decoded header |
| part of the token (the first base64-url encoded part of the JWT) if no |
| parameter is given, or performs a json_query on the decoded header part of |
| the token. See "json_query" converter for details about the accepted |
| json_path and output_type parameters. |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
| |
| jwt_payload_query([<json_path>],[<output_type>]) |
| When given a JSON Web Token (JWT) in input, either returns the decoded |
| payload part of the token (the second base64-url encoded part of the JWT) if |
| no parameter is given, or performs a json_query on the decoded payload part |
| of the token. See "json_query" converter for details about the accepted |
| json_path and output_type parameters. |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
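| |
| Example (a sketch; it assumes a variable "txn.bearer" already holds the raw |
| token, as in the jwt_verify example below): |
| # Extract the "exp" claim of the token as an integer |
| http-request set-var(txn.jwt_exp) var(txn.bearer),jwt_payload_query('$.exp','int') |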
| |
| jwt_verify(<alg>,<key>) |
| Performs a signature verification for the JSON Web Token (JWT) given in input |
| by using the <alg> algorithm and the <key> parameter, which should either |
| hold a secret or a path to a public certificate. Returns 1 in case of |
| verification success, 0 in case of verification error and a strictly negative |
| value for any other error. Because of all those non-null error return values, |
| the result of this converter should never be converted to a boolean. See |
| below for a full list of the possible return values. |
| |
| For now, only JWS tokens using the Compact Serialization format can be |
| processed (three dot-separated base64-url encoded strings). All the |
| algorithms mentioned in section 3.1 of RFC7518 are managed (HS, ES, RS and PS |
| with the 256, 384 or 512 key sizes, as well as the special "none" case). |
| |
| If the used algorithm is of the HMAC family, <key> should be the secret used |
| in the HMAC signature calculation. Otherwise, <key> should be the path to the |
| public certificate that can be used to validate the token's signature. All |
| the certificates that might be used to verify JWTs must be known during init |
| in order to be added into a dedicated certificate cache so that no disk |
| access is required during runtime. For this reason, any used certificate must |
| be mentioned explicitly at least once in a jwt_verify call. Passing an |
| intermediate variable as second parameter is then not advised. |
| |
| This converter only verifies the signature of the token and does not perform |
| a full JWT validation as specified in section 7.2 of RFC7519. For instance, |
| we do not ensure that the header and payload contents are fully valid JSON |
| once decoded, and no checks are performed regarding their respective |
| contents. |
| |
| The possible return values are the following : |
| |
| +----+----------------------------------------------------------------------+ |
| | ID | message | |
| +----+----------------------------------------------------------------------+ |
| | 0 | "Verification failure" | |
| | 1 | "Verification success" | |
| | -1 | "Unknown algorithm (not mentioned in RFC7518)" | |
| | -2 | "Unmanaged algorithm" | |
| | -3 | "Invalid token" | |
| | -4 | "Out of memory" | |
| | -5 | "Unknown certificate" | |
| +----+----------------------------------------------------------------------+ |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
| |
| Example: |
| # Get a JWT from the authorization header, extract the "alg" field of its |
| # JOSE header and use a public certificate to verify a signature |
| http-request set-var(txn.bearer) http_auth_bearer |
| http-request set-var(txn.jwt_alg) var(txn.bearer),jwt_header_query('$.alg') |
| http-request deny unless { var(txn.jwt_alg) -m str "RS256" } |
| http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,"/path/to/crt.pem") 1 } |
| |
| language(<value>[,<default>]) |
| Returns the value with the highest q-factor from a list as extracted from the |
| "accept-language" header using "req.fhdr". Values with no q-factor have a |
| q-factor of 1. Values with a q-factor of 0 are dropped. Only values which |
| belong to the semicolon-delimited list given in <value> will be considered. |
| The argument <value> syntax is "lang[;lang[;lang[;...]]]". If no value matches the |
| given list and a default value is provided, it is returned. Note that language |
| names may have a variant after a dash ('-'). If this variant is present in the |
| list, it will be matched, but if it is not, only the base language is checked. |
| The match is case-sensitive, and the output string is always one of those |
| provided in arguments. The ordering of arguments is meaningless, only the |
| ordering of the values in the request counts, as the first value among |
| multiple sharing the same q-factor is used. |
| |
| Example : |
| |
| # this configuration switches to the backend matching a |
| # given language based on the request : |
| |
| acl es req.fhdr(accept-language),language(es;fr;en) -m str es |
| acl fr req.fhdr(accept-language),language(es;fr;en) -m str fr |
| acl en req.fhdr(accept-language),language(es;fr;en) -m str en |
| use_backend spanish if es |
| use_backend french if fr |
| use_backend english if en |
| default_backend choose_your_language |
| |
| length |
| Get the length of the string. This can only be placed after a string |
| sample fetch function or after a transformation keyword returning a string |
| type. The result is of type integer. |
| |
| lower |
| Convert a string sample to lower case. This can only be placed after a string |
| sample fetch function or after a transformation keyword returning a string |
| type. The result is of type string. |
| |
| ltime(<format>[,<offset>]) |
| Converts an integer supposed to contain a date since epoch to a string |
| representing this date in local time using a format defined by the <format> |
| string using strftime(3). The purpose is to allow any date format to be used |
| in logs. An optional <offset> in seconds may be applied to the input date |
| (positive or negative). See the strftime() man page for the format supported |
| by your operating system. See also the utime converter. |
| |
| Example : |
| |
| # Emit two columns, one with the local time and another with ip:port |
| # e.g. 20140710162350 127.0.0.1:57325 |
| log-format %[date,ltime(%Y%m%d%H%M%S)]\ %ci:%cp |
| |
| ltrim(<chars>) |
| Skips any characters from <chars> from the beginning of the string |
| representation of the input sample. |
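| |
| Example (illustrative of the stripping behavior): |
| str(..hello..),ltrim(.) # "hello.." |
| str(//img/logo.png),ltrim(/) # "img/logo.png" |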
| |
| map(<map_file>[,<default_value>]) |
| map_<match_type>(<map_file>[,<default_value>]) |
| map_<match_type>_<output_type>(<map_file>[,<default_value>]) |
| Search the input value from <map_file> using the <match_type> matching method, |
| and return the associated value converted to the type <output_type>. If the |
| input value cannot be found in the <map_file>, the converter returns the |
| <default_value>. If the <default_value> is not set, the converter fails and |
| acts as if no input value could be fetched. If the <match_type> is not set, it |
| defaults to "str". Likewise, if the <output_type> is not set, it defaults to |
| "str". For convenience, the "map" keyword is an alias for "map_str" and maps a |
| string to another string. |
| |
| It is important to avoid overlapping between the keys : IP addresses and |
| strings are stored in trees, so the first of the finest match will be used. |
| Other keys are stored in lists, so the first matching occurrence will be used. |
| |
| The following array contains the list of all map functions available sorted by |
| input type, match type and output type. |
| |
| input type | match method | output type str | output type int | output type ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | str | map_str | map_str_int | map_str_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | beg | map_beg | map_beg_int | map_beg_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | sub | map_sub | map_sub_int | map_sub_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | dir | map_dir | map_dir_int | map_dir_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | dom | map_dom | map_dom_int | map_dom_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | end | map_end | map_end_int | map_end_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | reg | map_reg | map_reg_int | map_reg_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| str | reg | map_regm | map_reg_int | map_reg_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| int | int | map_int | map_int_int | map_int_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| ip | ip | map_ip | map_ip_int | map_ip_ip |
| -----------+--------------+-----------------+-----------------+--------------- |
| |
| The special map called "map_regm" expects matching zones (capture groups) in |
| the regular expression and modifies the output by replacing back references |
| (like "\1") with the corresponding match text. |
| |
| The file contains one key + value per line. Lines which start with '#' are |
| ignored, just like empty lines. Leading tabs and spaces are stripped. The key |
| is then the first "word" (series of non-space/tabs characters), and the value |
| is what follows this series of space/tab till the end of the line excluding |
| trailing spaces/tabs. |
| |
| Example : |
| |
| # this is a comment and is ignored |
| 2.22.246.0/23 United Kingdom \n |
| <-><-----------><--><------------><----> |
| | | | | `- trailing spaces ignored |
| | | | `---------- value |
| | | `-------------------- middle spaces ignored |
| | `---------------------------- key |
| `------------------------------------ leading spaces ignored |
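| |
| A possible usage (a sketch only; the map file path, its contents and the |
| backend names are arbitrary): |
| # /etc/haproxy/hosts.map maps domains to backend names, e.g.: |
| # example.org be_static |
| # api.example.org be_api |
| use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/hosts.map,be_default)] |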
| |
| mod(<value>) |
| Divides the input value of type signed integer by <value>, and returns the |
| remainder as a signed integer. If <value> is null, then zero is returned. |
| <value> can be a numeric value or a variable name. The name of the variable |
| starts with an indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and response) |
| "req" : the variable is shared only during request processing |
| "res" : the variable is shared only during response processing |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| mqtt_field_value(<packettype>,<fieldname_or_property_ID>) |
| Returns the value of <fieldname> found in the input MQTT payload of type |
| <packettype>. |
| <packettype> can be either a string (case insensitive matching) or a numeric |
| value corresponding to the type of packet we're supposed to extract data |
| from. |
| Supported strings and integers can be found here: |
| https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718021 |
| https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901022 |
| |
| <fieldname> depends on <packettype> and can be any of the values listed |
| below (note that <fieldname> matching is case insensitive). |
| <property id> can only be found in MQTT v5.0 streams. Check this table: |
| https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901029 |
| |
| - CONNECT (or 1): flags, protocol_name, protocol_version, client_identifier, |
| will_topic, will_payload, username, password, keepalive |
| OR any property ID as a numeric value (for MQTT v5.0 |
| packets only): |
| 17: Session Expiry Interval |
| 33: Receive Maximum |
| 39: Maximum Packet Size |
| 34: Topic Alias Maximum |
| 25: Request Response Information |
| 23: Request Problem Information |
| 21: Authentication Method |
| 22: Authentication Data |
| 18: Will Delay Interval |
| 1: Payload Format Indicator |
| 2: Message Expiry Interval |
| 3: Content Type |
| 8: Response Topic |
| 9: Correlation Data |
| Not supported yet: |
| 38: User Property |
| |
| - CONNACK (or 2): flags, protocol_version, reason_code |
| OR any property ID as a numeric value (for MQTT v5.0 |
| packets only): |
| 17: Session Expiry Interval |
| 33: Receive Maximum |
| 36: Maximum QoS |
| 37: Retain Available |
| 39: Maximum Packet Size |
| 18: Assigned Client Identifier |
| 34: Topic Alias Maximum |
| 31: Reason String |
| 40: Wildcard Subscription Available |
| 41: Subscription Identifiers Available |
| 42: Shared Subscription Available |
| 19: Server Keep Alive |
| 26: Response Information |
| 28: Server Reference |
| 21: Authentication Method |
| 22: Authentication Data |
| Not supported yet: |
| 38: User Property |
| |
| Due to current HAProxy design, only the first message sent by the client and |
| the server can be parsed. Thus this converter can extract data only from |
| CONNECT and CONNACK packet types. CONNECT is the first message sent by the |
| client and CONNACK is the first response sent by the server. |
| |
| Example: |
| |
| acl data_in_buffer req.len ge 4 |
| tcp-request content set-var(txn.username) \ |
| req.payload(0,0),mqtt_field_value(connect,protocol_name) \ |
| if data_in_buffer |
| # do the same as above |
| tcp-request content set-var(txn.username) \ |
| req.payload(0,0),mqtt_field_value(1,protocol_name) \ |
| if data_in_buffer |
| |
| mqtt_is_valid |
| Checks that the binary input is a valid MQTT packet. It returns a boolean. |
| |
| Due to current HAProxy design, only the first message sent by the client and |
| the server can be parsed. Thus this converter can extract data only from |
| CONNECT and CONNACK packet types. CONNECT is the first message sent by the |
| client and CONNACK is the first response sent by the server. |
| |
| Only MQTT 3.1, 3.1.1 and 5.0 are supported. |
| |
| Example: |
| |
| acl data_in_buffer req.len ge 4 |
| tcp-request content reject unless { req.payload(0,0),mqtt_is_valid } |
| |
| mul(<value>) |
| Multiplies the input value of type signed integer by <value>, and returns |
| the product as a signed integer. In case of overflow, the largest possible |
| value for the sign is returned so that the operation doesn't wrap around. |
| <value> can be a numeric value or a variable name. The name of the variable |
| starts with an indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and response) |
| "req" : the variable is shared only during request processing |
| "res" : the variable is shared only during response processing |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| nbsrv |
| Takes an input value of type string, interprets it as a backend name and |
| returns the number of usable servers in that backend. Can be used in places |
| where we want to look up a backend from a dynamic name, like a result of a |
| map lookup. |
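| |
| Example (a sketch; the map file and backend names are arbitrary): |
| # Only use the backend found in the map if it still has usable servers |
| http-request set-var(txn.be) req.hdr(host),lower,map_dom(/etc/haproxy/hosts.map) |
| use_backend %[var(txn.be)] if { var(txn.be),nbsrv gt 0 } |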
| |
| neg |
| Takes the input value of type signed integer, computes the opposite value, |
| and returns the result as a signed integer. 0 is identity. This operator |
| is provided for reversed subtracts : in order to subtract the input from a |
| constant, simply perform a "neg,add(value)". |
| |
| not |
| Returns a boolean FALSE if the input value of type signed integer is |
| non-null, otherwise returns TRUE. Used in conjunction with and(), it can be |
| used to report true/false for bit testing on input values (e.g. verify the |
| absence of a flag). |
| |
| odd |
| Returns a boolean TRUE if the input value of type signed integer is odd |
| otherwise returns FALSE. It is functionally equivalent to "and(1),bool". |
| |
| or(<value>) |
| Performs a bitwise "OR" between <value> and the input value of type signed |
| integer, and returns the result as a signed integer. <value> can be a |
| numeric value or a variable name. The name of the variable starts with an |
| indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and response) |
| "req" : the variable is shared only during request processing |
| "res" : the variable is shared only during response processing |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| param(<name>,[<delim>]) |
| This extracts the first occurrence of the parameter <name> in the input string |
| where parameters are delimited by <delim>, which defaults to "&", and the name |
| and value of the parameter are separated by a "=". If there is no "=" and |
| value before the end of the parameter segment, it is treated as equivalent to |
| a value of an empty string. |
| |
| This can be useful for extracting parameters from a query string, or possibly |
| an x-www-form-urlencoded body. In particular, `query,param(<name>)` can be used |
| as an alternative to `urlp(<name>)` which only uses "&" as a delimiter, |
| whereas "urlp" also uses "?" and ";". |
| |
| Note that this converter doesn't do anything special with url encoded |
| characters. If you want to decode the value, you can use the url_dec converter |
| on the output. If the name of the parameter in the input might contain encoded |
| characters, you'll probably want to normalize the input before calling |
| "param". This can be done using "http-request normalize-uri", in particular |
| the percent-decode-unreserved and percent-to-uppercase options. |
| |
| Example : |
| str(a=b&c=d&a=r),param(a) # b |
| str(a&b=c),param(a) # "" |
| str(a=&b&c=a),param(b) # "" |
| str(a=1;b=2;c=4),param(b,;) # 2 |
| query,param(redirect_uri),url_dec() |
| |
| port_only |
| Converts a string which contains a Host header value into an integer by |
| returning its port. |
| The input must respect the format of the host header value |
| (rfc9110#section-7.2). It supports inputs such as: hostname, hostname:80, |
| 127.0.0.1, 127.0.0.1:80, [::1], [::1]:80. |
| |
| If no port is provided in the input, it returns 0. |
| |
| See also: "host_only" converter which will return the host. |
| |
| protobuf(<field_number>,[<field_type>]) |
| This extracts, from an input binary sample holding a protocol buffers |
| message, the field designated by <field_number> (dotted notation). The field |
| is returned as a raw binary sample if <field_type> is not present, or as an |
| integer sample if it is present (see also "ungrpc" below). |
| The list of authorized types is: "int32", "int64", "uint32", |
| "uint64", "sint32", "sint64", "bool", "enum" for the "varint" wire type 0, |
| "fixed64", "sfixed64", "double" for the 64bit wire type 1, "fixed32", "sfixed32", |
| "float" for the wire type 5. Note that "string" is considered as a length-delimited |
| type, so it does not require any <field_type> argument to be extracted. |
| More information may be found here about the protocol buffers message field types: |
| https://developers.google.com/protocol-buffers/docs/encoding |
| |
| regsub(<regex>,<subst>[,<flags>]) |
| Applies a regex-based substitution to the input string. It does the same |
| operation as the well-known "sed" utility with "s/<regex>/<subst>/". By |
| default it will replace in the input string the first occurrence of the |
| largest part matching the regular expression <regex> with the substitution |
| string <subst>. It is possible to replace all occurrences instead by adding |
| the flag "g" in the third argument <flags>. It is also possible to make the |
| regex case insensitive by adding the flag "i" in <flags>. Since <flags> is a |
| string, it is made up from the concatenation of all desired flags. Thus if |
| both "i" and "g" are desired, using "gi" or "ig" will have the same effect. |
| The first use of this converter is to replace certain characters or sequence |
| of characters with other ones. |
| |
| It is highly recommended to enclose the regex part using protected quotes to |
| improve clarity and never have a closing parenthesis from the regex mixed up |
| with the parenthesis from the function. Just like in Bourne shell, the first |
| level of quotes is processed when delimiting word groups on the line, a |
| second level is usable for argument. It is recommended to use single quotes |
| outside since these ones do not try to resolve backslashes nor dollar signs. |
| |
| Examples: |
| |
| # de-duplicate "/" in header "x-path". |
| # input: x-path: /////a///b/c/xzxyz/ |
| # output: x-path: /a/b/c/xzxyz/ |
| http-request set-header x-path "%[hdr(x-path),regsub('/+','/','g')]" |
| |
| # copy query string to x-query and drop all leading '?', ';' and '&' |
| http-request set-header x-query "%[query,regsub([?;&]*,'')]" |
| |
| # capture groups and backreferences |
| # both lines do the same. |
| http-request redirect location %[url,'regsub("(foo|bar)([0-9]+)?","\2\1",i)'] |
| http-request redirect location %[url,regsub(\"(foo|bar)([0-9]+)?\",\"\2\1\",i)] |
| |
| capture-req(<id>) |
| Captures the string entry in the request slot <id> and returns the entry as |
| is. If the slot doesn't exist, the capture fails silently. |
| |
| See also: "declare capture", "http-request capture", |
| "http-response capture", "capture.req.hdr" and |
| "capture.res.hdr" (sample fetches). |
| |
| capture-res(<id>) |
| Captures the string entry in the response slot <id> and returns the entry as |
| is. If the slot doesn't exist, the capture fails silently. |
| |
| See also: "declare capture", "http-request capture", |
| "http-response capture", "capture.req.hdr" and |
| "capture.res.hdr" (sample fetches). |
| |
| rtrim(<chars>) |
| Skips any characters from <chars> from the end of the string representation |
| of the input sample. |
| |
| sdbm([<avalanche>]) |
| Hashes a binary input sample into an unsigned 32-bit quantity using the SDBM |
| hash function. Optionally, it is possible to apply a full avalanche hash |
| function to the output if the optional <avalanche> argument equals 1. This |
| converter uses the same functions as used by the various hash-based load |
| balancing algorithms, so it will provide exactly the same results. It is |
| mostly intended for debugging, but can be used as a stick-table entry to |
| collect rough statistics. It must not be used for security purposes as a |
| 32-bit hash is trivial to break. See also "crc32", "djb2", "wt6", "crc32c", |
| and the "hash-type" directive. |
| |
| secure_memcmp(<var>) |
| Compares the contents of <var> with the input value. Both values are treated |
| as a binary string. Returns a boolean indicating whether both binary strings |
| match. |
| |
| If both binary strings have the same length then the comparison will be |
| performed in constant time. |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
| |
| Example : |
| |
| http-request set-var(txn.token) hdr(token) |
| # Check whether the token sent by the client matches the secret token |
| # value, without leaking the contents using a timing attack. |
| acl token_given str(my_secret_token),secure_memcmp(txn.token) |
| |
| set-var(<var>[,<cond>...]) |
| Sets a variable with the input content and returns the content on the output |
| as-is if all of the specified conditions are true (see below for a list of |
| possible conditions). The variable keeps the value and the associated input |
| type. The name of the variable starts with an indication about its scope. The |
| scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and |
| response), |
| "req" : the variable is shared only during request processing, |
| "res" : the variable is shared only during response processing. |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| You can pass at most four conditions to the converter among the following |
| possible conditions : |
| - "ifexists"/"ifnotexists": |
| Checks if the variable already existed before the current set-var call. |
| A variable is usually created through a successful set-var call. |
| Note that variables of scope "proc" are created during configuration |
| parsing so the "ifexists" condition will always be true for them. |
| - "ifempty"/"ifnotempty": |
| Checks if the input is empty or not. |
| Scalar types are never empty so the ifempty condition will be false for |
| them regardless of the input's contents (integers, booleans, IPs ...). |
| - "ifset"/"ifnotset": |
| Checks if the variable was previously set or not, or if unset-var was |
| called on the variable. |
| A variable that does not exist yet is considered as not set. A "proc" |
| variable can exist while not being set since they are created during |
| configuration parsing. |
| - "ifgt"/"iflt": |
| Checks if the content of the variable is "greater than" or "less than" |
| the input. This check can only be performed if both the input and |
| the variable are of type integer. Otherwise, the check is considered as |
| true by default. |
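| |
| Example (a sketch; the variable and header names are arbitrary): |
| # Emit the /24 network of the client in a header and, the first time only |
| # ("ifnotset"), memorize it in a session variable. |
| http-request set-header x-src-net %[src,ipmask(24),set-var(sess.net,ifnotset)] |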
| |
| sha1 |
| Converts a binary input sample to a SHA-1 digest. The result is a binary |
| sample with length of 20 bytes. |
| |
| sha2([<bits>]) |
| Converts a binary input sample to a digest in the SHA-2 family. The result |
| is a binary sample with length of <bits>/8 bytes. |
| |
| Valid values for <bits> are 224, 256, 384, 512, each corresponding to |
| SHA-<bits>. The default value is 256. |
| |
| Please note that this converter is only available when HAProxy has been |
| compiled with USE_OPENSSL. |
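| |
| Example (a minimal sketch; the header name is arbitrary): |
| # Add a hex-encoded SHA-256 digest of the Host header to the request |
| http-request set-header x-host-digest %[req.hdr(host),sha2(256),hex,lower] |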
| |
| srv_queue |
| Takes an input value of type string, either a server name or <backend>/<server> |
| format and returns the number of queued sessions on that server. Can be used |
| in places where we want to look up queued sessions from a dynamic name, like a |
| cookie value (e.g. req.cook(SRVID),srv_queue) and then make a decision to break |
| persistence or direct a request elsewhere. |
| |
| strcmp(<var>) |
| Compares the contents of <var> with the input value of type string. Returns |
| the result as a signed integer compatible with strcmp(3): 0 if both strings |
| are identical. A value less than 0 if the left string is lexicographically |
| smaller than the right string or if the left string is shorter. A value greater |
| than 0 otherwise (left string greater than right string or the right string |
| is shorter). |
| |
| See also the secure_memcmp converter if you need to compare two binary |
| strings in constant time. |
| |
| Example : |
| |
| http-request set-var(txn.host) hdr(host) |
| # Check whether the client is attempting domain fronting. |
| acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0 |
| |
| |
| sub(<value>) |
| Subtracts <value> from the input value of type signed integer, and returns |
| the result as a signed integer. Note: in order to subtract the input from |
| a constant, simply perform a "neg,add(value)". <value> can be a numeric value |
| or a variable name. The name of the variable starts with an indication about |
| its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and |
| response), |
| "req" : the variable is shared only during request processing, |
| "res" : the variable is shared only during response processing. |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| table_bytes_in_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average client-to-server |
| bytes rate associated with the input sample in the designated table, measured |
| in amount of bytes over the period configured in the table. See also the |
| sc_bytes_in_rate sample fetch keyword. |
| |
| |
| table_bytes_out_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average server-to-client |
| bytes rate associated with the input sample in the designated table, measured |
| in amount of bytes over the period configured in the table. See also the |
| sc_bytes_out_rate sample fetch keyword. |
| |
| table_conn_cnt(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative number of incoming |
| connections associated with the input sample in the designated table. See |
| also the sc_conn_cnt sample fetch keyword. |
| |
| table_conn_cur(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the current amount of concurrent |
| tracked connections associated with the input sample in the designated table. |
| See also the sc_conn_cur sample fetch keyword. |
| |
| table_conn_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average incoming connection |
| rate associated with the input sample in the designated table. See also the |
| sc_conn_rate sample fetch keyword. |
| |
| table_expire(<table>[,<default_value>]) |
| Uses the input sample to perform a look up in the specified table. If the key |
| is not found in the table, the converter fails except if <default_value> is |
| set: this makes the converter succeed and return <default_value>. If the key |
| is found the converter returns the key expiration delay associated with the |
| input sample in the designated table. |
| See also the table_idle converter. |
| |
| table_gpt(<idx>,<table>) |
| Uses the string representation of the input sample to perform a lookup in |
| the specified table. If the key is not found in the table, boolean value zero |
| is returned. Otherwise the converter returns the current value of the general |
| purpose tag at the index <idx> of the array associated to the input sample |
| in the designated <table>. <idx> is an integer between 0 and 99. |
| If there is no GPT stored at this index, it also returns the boolean value 0. |
| This applies only to the 'gpt' array data_type (and not on the legacy 'gpt0' |
| data-type). |
| See also the sc_get_gpt sample fetch keyword. |
| |
| table_gpt0(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, boolean value zero |
| is returned. Otherwise the converter returns the current value of the first |
| general purpose tag associated with the input sample in the designated table. |
| See also the sc_get_gpt0 sample fetch keyword. |
| |
| table_gpc(<idx>,<table>) |
| Uses the string representation of the input sample to perform a lookup in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the current value of the |
| General Purpose Counter at the index <idx> of the array associated |
| to the input sample in the designated <table>. <idx> is an integer |
| between 0 and 99. |
| If there is no GPC stored at this index, it also returns the boolean value 0. |
| This applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). |
| See also the sc_get_gpc sample fetch keyword. |
| |
| table_gpc_rate(<idx>,<table>) |
| Uses the string representation of the input sample to perform a lookup in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the frequency at which the |
| General Purpose Counter at index <idx> of the array (associated to the input |
| sample in the designated stick-table <table>) was incremented over the |
| configured period. <idx> is an integer between 0 and 99. |
| If there is no gpc_rate stored at this index, it also returns the boolean |
| value 0. |
| This applies only to the 'gpc_rate' array data_type (and not to the |
| legacy 'gpc0_rate' nor 'gpc1_rate' data_types). |
| See also the sc_gpc_rate sample fetch keyword. |
| |
| table_gpc0(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the current value of the first |
| general purpose counter associated with the input sample in the designated |
| table. See also the sc_get_gpc0 sample fetch keyword. |
| |
| table_gpc0_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the frequency at which the gpc0 |
| counter was incremented over the configured period in the table, associated |
| with the input sample in the designated table. See also the sc_get_gpc0_rate |
| sample fetch keyword. |
| |
| table_gpc1(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the current value of the second |
| general purpose counter associated with the input sample in the designated |
| table. See also the sc_get_gpc1 sample fetch keyword. |
| |
| table_gpc1_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the frequency at which the gpc1 |
| counter was incremented over the configured period in the table, associated |
| with the input sample in the designated table. See also the sc_get_gpc1_rate |
| sample fetch keyword. |
| |
| table_http_err_cnt(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative number of HTTP |
| errors associated with the input sample in the designated table. See also the |
| sc_http_err_cnt sample fetch keyword. |
| |
| table_http_err_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average rate of HTTP errors |
| associated with the input sample in the designated table, measured in amount |
| of errors over the period configured in the table. See also the |
| sc_http_err_rate sample fetch |
| keyword. |
| |
| table_http_fail_cnt(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative number of HTTP |
| failures associated with the input sample in the designated table. See also |
| the sc_http_fail_cnt sample fetch keyword. |
| |
| table_http_fail_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average rate of HTTP |
| failures associated with the input sample in the designated table, measured |
| in amount of failures over the period configured in the table. See also the |
| sc_http_fail_rate sample fetch |
| keyword. |
| |
| table_http_req_cnt(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative number of HTTP |
| requests associated with the input sample in the designated table. See also |
| the sc_http_req_cnt sample fetch keyword. |
| |
| table_http_req_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average rate of HTTP |
| requests associated with the input sample in the designated table, measured |
| in amount of requests over the period configured in the table. See also the |
| sc_http_req_rate sample fetch |
| keyword. |
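| |
| Example (a sketch; the "st_req_rate" table name and the threshold are |
| arbitrary, and the table is assumed to store http_req_rate and to be fed by |
| a "track-sc" rule elsewhere): |
| http-request deny deny_status 429 if { src,table_http_req_rate(st_req_rate) gt 100 } |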
| |
| table_idle(<table>[,<default_value>]) |
| Uses the input sample to perform a look up in the specified table. If the key |
| is not found in the table, the converter fails except if <default_value> is |
| set: this makes the converter succeed and return <default_value>. If the key |
| is found the converter returns the time the key entry associated with the |
| input sample in the designated table remained idle since the last time it was |
| updated. |
| See also the table_expire converter. |
| |
| table_kbytes_in(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative amount of client- |
| to-server data associated with the input sample in the designated table, |
| measured in kilobytes. The test is currently performed on 32-bit integers, |
| which limits values to 4 terabytes. See also the sc_kbytes_in sample fetch |
| keyword. |
| |
| table_kbytes_out(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative amount of server- |
| to-client data associated with the input sample in the designated table, |
| measured in kilobytes. The test is currently performed on 32-bit integers, |
| which limits values to 4 terabytes. See also the sc_kbytes_out sample fetch |
| keyword. |
| |
| table_server_id(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the server ID associated with |
| the input sample in the designated table. A server ID is associated to a |
| sample by a "stick" rule when a connection to a server succeeds. A server ID |
| zero means that no server is associated with this key. |
| |
| table_sess_cnt(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the cumulative number of incoming |
| sessions associated with the input sample in the designated table. Note that |
| a session here refers to an incoming connection being accepted by the |
| "tcp-request connection" rulesets. See also the sc_sess_cnt sample fetch |
| keyword. |
| |
| table_sess_rate(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the average incoming session |
| rate associated with the input sample in the designated table. Note that a |
| session here refers to an incoming connection being accepted by the |
| "tcp-request connection" rulesets. See also the sc_sess_rate sample fetch |
| keyword. |
| |
| table_trackers(<table>) |
| Uses the string representation of the input sample to perform a look up in |
| the specified table. If the key is not found in the table, integer value zero |
| is returned. Otherwise the converter returns the current amount of concurrent |
| connections tracking the same key as the input sample in the designated |
| table. It differs from table_conn_cur in that it does not rely on any stored |
| information but on the table's reference count (the "use" value which is |
| returned by "show table" on the CLI). This may sometimes be more suited for |
| layer7 tracking. It can be used to tell a server how many concurrent |
| connections there are from a given address for example. See also the |
| sc_trackers sample fetch keyword. |
| |
| ub64dec |
| This converter is the base64url variant of the b64dec converter. base64url |
| encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding. |
| It is also the encoding used in JWT (JSON Web Token) standard. |
| |
| Example: |
| # Decoding a JWT payload: |
| http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec |
| |
| ub64enc |
| This converter is the base64url variant of the base64 converter. |
| |
| upper |
| Convert a string sample to upper case. This can only be placed after a string |
| sample fetch function or after a transformation keyword returning a string |
| type. The result is of type string. |
| |
| url_dec([<in_form>]) |
| Takes a URL-encoded string provided as input and returns the decoded version |
| as output. The input and the output are of type string. If the <in_form> |
| argument is set to a non-zero integer value, the input string is assumed to |
| be part of a form or query string and the '+' character will be turned into a |
| space (' '). Otherwise this will only happen after a question mark indicating |
| a query string ('?'). |
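| |
| Example (illustrative): |
| str(foo%3Dbar),url_dec # "foo=bar" |
| str(a+b%20c),url_dec(1) # "a b c" |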
| |
| url_enc([<enc_type>]) |
| Takes a string provided as input and returns the encoded version as output. |
| The input and the output are of type string. By default the encoding is |
| meant for the `query` type. No other type is supported for now, but the |
| optional argument is here for future changes. |
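| |
| Example (a sketch; the redirect target and cookie name are arbitrary): |
| # Redirect to a login page while preserving the original path |
| http-request redirect location /login?back=%[path,url_enc] if !{ req.cook(sid) -m found } |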
| |
| ungrpc(<field_number>,[<field_type>]) |
| This extracts, from an input binary sample holding a gRPC message, the |
| protocol buffers field designated by <field_number> (dotted notation). The |
| field is returned as a raw binary sample if <field_type> is not present, or |
| as an integer sample if it is present. |
| The list of authorized types is: "int32", "int64", "uint32", |
| "uint64", "sint32", "sint64", "bool", "enum" for the "varint" wire type 0, |
| "fixed64", "sfixed64", "double" for the 64bit wire type 1, "fixed32", "sfixed32", |
| "float" for the wire type 5. Note that "string" is considered as a length-delimited |
| type, so it does not require any <field_type> argument to be extracted. |
| More information may be found here about the protocol buffers message field types: |
| https://developers.google.com/protocol-buffers/docs/encoding |
| |
| Example: |
| // with such a protocol buffer .proto file content adapted from |
| // https://github.com/grpc/grpc/blob/master/examples/protos/route_guide.proto |
| |
| message Point { |
| int32 latitude = 1; |
| int32 longitude = 2; |
| } |
| |
| message PPoint { |
| Point point = 59; |
| } |
| |
| message Rectangle { |
| // One corner of the rectangle. |
| PPoint lo = 48; |
| // The other corner of the rectangle. |
| PPoint hi = 49; |
| } |
| |
| Let's say a request body is made of a "Rectangle" object value (two PPoint |
| protocol buffers messages); the four protocol buffers fields could be |
| extracted with these "ungrpc" directives: |
| |
| req.body,ungrpc(48.59.1,int32) # "latitude" of "lo" first PPoint |
| req.body,ungrpc(48.59.2,int32) # "longitude" of "lo" first PPoint |
| req.body,ungrpc(49.59.1,int32) # "latitude" of "hi" second PPoint |
| req.body,ungrpc(49.59.2,int32) # "longitude" of "hi" second PPoint |
| |
| We could also extract the intermediary 48.59 field as a binary sample as follows: |
| |
| req.body,ungrpc(48.59) |
| |
| As a gRPC message is always made of a gRPC header followed by protocol buffers |
| messages, in the previous example the "latitude" of "lo" first PPoint |
| could be extracted with these equivalent directives: |
| |
| req.body,ungrpc(48.59),protobuf(1,int32) |
| req.body,ungrpc(48),protobuf(59.1,int32) |
| req.body,ungrpc(48),protobuf(59),protobuf(1,int32) |
| |
| Note that the first converter must be "ungrpc", the remaining ones must be |
| "protobuf", and only the last one may or may not take a second argument to |
| interpret the previous binary sample. |
| |
| |
| unset-var(<var>) |
| Unsets a variable if the input content is defined. The name of the variable |
| starts with an indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and |
| response), |
| "req" : the variable is shared only during request processing, |
| "res" : the variable is shared only during response processing. |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| utime(<format>[,<offset>]) |
| Converts an integer supposed to contain a date since epoch to a string |
| representing this date in UTC time using a format defined by the <format> |
| string using strftime(3). The purpose is to allow any date format to be used |
| in logs. An optional <offset> in seconds may be applied to the input date |
| (positive or negative). See the strftime() man page for the format supported |
| by your operating system. See also the ltime converter. |
| |
| Example : |
| |
| # Emit two columns, one with the UTC time and another with ip:port |
| # e.g. 20140710162350 127.0.0.1:57325 |
| log-format %[date,utime(%Y%m%d%H%M%S)]\ %ci:%cp |
| |
| word(<index>,<delimiters>[,<count>]) |
| Extracts the nth word counting from the beginning (positive index) or from |
| the end (negative index) considering given delimiters from an input string. |
| Indexes start at 1 or -1 and delimiters are a string formatted list of chars. |
| Empty words are skipped. This means that delimiters at the start or end of |
| the input string are ignored and consecutive delimiters within the input |
| string are considered to be a single delimiter. |
| Optionally you can specify <count> of words to extract (default: 1). |
| Value of 0 indicates extraction of all remaining words. |
| |
| Example : |
| str(f1_f2_f3__f5),word(4,_) # f5 |
| str(f1_f2_f3__f5),word(5,_) # <not found> |
| str(f1_f2_f3__f5),word(2,_,0) # f2_f3__f5 |
| str(f1_f2_f3__f5),word(3,_,2) # f3__f5 |
| str(f1_f2_f3__f5),word(-2,_,3) # f1_f2_f3 |
| str(f1_f2_f3__f5),word(-3,_,0) # f1_f2 |
| str(/f1/f2/f3/f4),word(1,/) # f1 |
| str(/f1////f2/f3/f4),word(1,/) # f2 |
| |
| wt6([<avalanche>]) |
| Hashes a binary input sample into an unsigned 32-bit quantity using the WT6 |
| hash function. Optionally, it is possible to apply a full avalanche hash |
| function to the output if the optional <avalanche> argument equals 1. This |
| converter uses the same functions as used by the various hash-based load |
| balancing algorithms, so it will provide exactly the same results. It is |
| mostly intended for debugging, but can be used as a stick-table entry to |
| collect rough statistics. It must not be used for security purposes as a |
| 32-bit hash is trivial to break. See also "crc32", "djb2", "sdbm", "crc32c", |
| and the "hash-type" directive. |
| |
| xor(<value>) |
| Performs a bitwise "XOR" (exclusive OR) between <value> and the input value |
| of type signed integer, and returns the result as a signed integer. |
| <value> can be a numeric value or a variable name. The name of the variable |
| starts with an indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and |
| response), |
| "req" : the variable is shared only during request processing, |
| "res" : the variable is shared only during response processing. |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
| |
| xxh3([<seed>]) |
| Hashes a binary input sample into a signed 64-bit quantity using the XXH3 |
| 64-bit variant of the XXHash hash function. This hash supports a seed which |
| defaults to zero but a different value may be passed as the <seed> argument. |
| This hash is known to be very good and very fast so it can be used to hash |
| URLs and/or URL parameters for use as stick-table keys to collect statistics |
| with a low collision rate, though care must be taken as the algorithm is not |
| considered as cryptographically secure. |
| |
| xxh32([<seed>]) |
| Hashes a binary input sample into an unsigned 32-bit quantity using the 32-bit |
| variant of the XXHash hash function. This hash supports a seed which defaults |
| to zero but a different value may be passed as the <seed> argument. This hash |
| is known to be very good and very fast so it can be used to hash URLs and/or |
| URL parameters for use as stick-table keys to collect statistics with a low |
| collision rate, though care must be taken as the algorithm is not considered |
| as cryptographically secure. |
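| |
| Example (a sketch, in a proxy section owning a suitable stick-table; hashing |
| the URL keeps the table compact at the cost of a small collision risk): |
| stick-table type integer size 1m expire 1h store http_req_cnt |
| http-request track-sc0 url,xxh32 |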
| |
| xxh64([<seed>]) |
| Hashes a binary input sample into a signed 64-bit quantity using the 64-bit |
| variant of the XXHash hash function. This hash supports a seed which defaults |
| to zero but a different value may be passed as the <seed> argument. This hash |
| is known to be very good and very fast so it can be used to hash URLs and/or |
| URL parameters for use as stick-table keys to collect statistics with a low |
| collision rate, though care must be taken as the algorithm is not considered |
| as cryptographically secure. |
| |
| x509_v_err_str |
| Converts a numerical value to its corresponding X509_V_ERR constant name. It |
| is useful in ACLs in order to have a configuration which works with multiple |
| versions of OpenSSL, since some codes might change when changing version. |
| |
| When the corresponding constant name is not found, the numerical value is |
| output as a string. |
| |
| The list of constants provided by OpenSSL can be found at |
| https://www.openssl.org/docs/manmaster/man3/X509_STORE_CTX_get_error.html#ERROR-CODES |
| Be careful to read the page for the right version of OpenSSL. |
| |
| Example: |
| |
| bind :443 ssl crt common.pem ca-file ca-auth.crt verify optional crt-ignore-err X509_V_ERR_CERT_REVOKED,X509_V_ERR_CERT_HAS_EXPIRED |
| |
| acl cert_expired ssl_c_verify,x509_v_err_str -m str X509_V_ERR_CERT_HAS_EXPIRED |
| acl cert_revoked ssl_c_verify,x509_v_err_str -m str X509_V_ERR_CERT_REVOKED |
| acl cert_ok ssl_c_verify,x509_v_err_str -m str X509_V_OK |
| |
| http-response add-header X-SSL Ok if cert_ok |
| http-response add-header X-SSL Expired if cert_expired |
| http-response add-header X-SSL Revoked if cert_revoked |
| |
| http-response add-header X-SSL-verify %[ssl_c_verify,x509_v_err_str] |
| |
| |
| 7.3.2. Fetching samples from internal states |
| -------------------------------------------- |
| |
| A first set of sample fetch methods applies to internal information which does |
| not even relate to any client information. These ones are sometimes used with |
| "monitor fail" directives to report an internal status to external watchers. |
| The sample fetch methods described in this section are usable anywhere. |
| |
| always_false : boolean |
| Always returns the boolean "false" value. It may be used with ACLs as a |
| temporary replacement for another one when adjusting configurations. |
| |
| always_true : boolean |
| Always returns the boolean "true" value. It may be used with ACLs as a |
| temporary replacement for another one when adjusting configurations. |
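| |
| Example (a sketch; the ACL name is arbitrary): |
| # Temporarily force the monitor URI to report a failure, e.g. to drain |
| # traffic away from this instance; switch back to always_false to restore. |
| acl site_dead always_true |
| monitor fail if site_dead |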
| |
| avg_queue([<backend>]) : integer |
| Returns the total number of queued connections of the designated backend |
| divided by the number of active servers. The current backend is used if no |
| backend is specified. This is very similar to "queue" except that the size of |
| the farm is considered, in order to give a more accurate measurement of the |
| time it may take for a new connection to be processed. The main usage is with |
| ACL to return a sorry page to new users when it becomes certain they will get |
| a degraded service, or to pass to the backend servers in a header so that |
| they decide to work in degraded mode or to disable some functions to speed up |
| the processing a bit. Note that in the event there would not be any active |
| server anymore, twice the number of queued connections would be considered as |
| the measured value. This is a fair estimate, as we expect one server to get |
| back soon anyway, but we still prefer to send new traffic to another backend |
| if in better shape. See also the "queue", "be_conn", and "be_sess_rate" |
| sample fetches. |
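| |
| As an illustration, a minimal sketch (the backend, page name and threshold |
| are purely illustrative) redirecting new users to a sorry page once the |
| per-server queue grows beyond a chosen limit : |
| |
| backend dynamic |
| mode http |
| acl too_busy avg_queue gt 20 |
| http-request redirect location /busy.html if too_busy |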
| |
| be_conn([<backend>]) : integer |
| Applies to the number of currently established connections on the backend, |
| possibly including the connection being evaluated. If no backend name is |
| specified, the current one is used. But it is also possible to check another |
| backend. It can be used to use a specific farm when the nominal one is full. |
| See also the "fe_conn", "queue", "be_conn_free", and "be_sess_rate" criteria. |
| |
| be_conn_free([<backend>]) : integer |
| Returns an integer value corresponding to the number of available connections |
| across available servers in the backend. Queue slots are not included. Backup |
| servers are also not included, unless all other servers are down. If no |
| backend name is specified, the current one is used. But it is also possible |
| to check another backend. It can be used to use a specific farm when the |
| nominal one is full. See also the "be_conn", "connslots", and "srv_conn_free" |
| criteria. |
| |
| OTHER CAVEATS AND NOTES: if any server's maxconn or maxqueue is 0 (meaning |
| unlimited), then this fetch clearly does not make sense, in which case the |
| value returned will be -1. |
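| |
| For example, a hypothetical sketch (backend names are illustrative) which |
| spills new traffic over to a secondary farm once the nominal one has no |
| free connection left : |
| |
| frontend www |
| bind :80 |
| acl primary_full be_conn_free(bk_primary) lt 1 |
| use_backend bk_spillover if primary_full |
| default_backend bk_primary |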
| |
| be_sess_rate([<backend>]) : integer |
| Returns an integer value corresponding to the sessions creation rate on the |
| backend, in number of new sessions per second. This is used with ACLs to |
| switch to an alternate backend when an expensive or fragile one reaches too |
| high a session rate, or to limit abuse of service (e.g. prevent sucking of an |
| online dictionary). It can also be useful to add this element to logs using a |
| log-format directive. |
| |
| Example : |
| # Redirect to an error page if the dictionary is requested too often |
| backend dynamic |
| mode http |
| acl being_scanned be_sess_rate gt 100 |
| redirect location /denied.html if being_scanned |
| |
| bin(<hex>) : bin |
| Returns a binary chain. The input is the hexadecimal representation |
| of the string. |
| |
| bool(<bool>) : bool |
| Returns a boolean value. <bool> can be 'true', 'false', '1' or '0'. |
| 'false' and '0' are the same. 'true' and '1' are the same. |
| |
| connslots([<backend>]) : integer |
| Returns an integer value corresponding to the number of connection slots |
| still available in the backend, by totaling the maximum amount of |
| connections on all servers and the maximum queue size. This is probably only |
| used with ACLs. |
| |
| The basic idea here is to be able to measure the number of connection "slots" |
| still available (connection + queue), so that anything beyond that (intended |
| usage; see "use_backend" keyword) can be redirected to a different backend. |
| |
| 'connslots' = number of available server connection slots + number of |
| available server queue slots. |
| |
| Note that while "fe_conn" may be used, "connslots" comes in especially |
| useful when you have a case of traffic going to one single ip, splitting into |
| multiple backends (perhaps using ACLs to do name-based load balancing) and |
| you want to be able to differentiate between different backends, and their |
| available "connslots". Also, whereas "nbsrv" only measures servers that are |
| actually *down*, this fetch is more fine-grained and looks into the number of |
| available connection slots as well. See also "queue" and "avg_queue". |
| |
| OTHER CAVEATS AND NOTES: at this point in time, the code does not take care |
| of dynamic connections. Also, if any server's maxconn or maxqueue is 0, then |
| this fetch clearly does not make sense, in which case the value returned |
| will be -1. |
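| |
| As a sketch of the intended usage described above (backend names and the |
| threshold are purely illustrative), excess traffic can be sent elsewhere |
| once the remaining slots run low : |
| |
| frontend www |
| bind :80 |
| acl few_slots connslots(bk_app) lt 10 |
| use_backend bk_overflow if few_slots |
| default_backend bk_app |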
| |
| cpu_calls : integer |
| Returns the number of calls to the task processing the stream or current |
| request since it was allocated. This number is reset for each new request on |
| the same connections in case of HTTP keep-alive. This value should usually be |
| low and stable (around 2 calls for a typically simple request) but may become |
| high if some processing (compression, caching or analysis) is performed. This |
| is purely for performance monitoring purposes. |
| |
| cpu_ns_avg : integer |
| Returns the average number of nanoseconds spent in each call to the task |
| processing the stream or current request. This number is reset for each new |
| request on the same connections in case of HTTP keep-alive. This value |
| indicates the overall cost of processing the request or the connection for |
| each call. There is no good nor bad value but the time spent in a call |
| automatically causes latency for other processing (see lat_ns_avg below), |
| and may affect other connections' apparent response time. Certain operations |
| like compression, complex regex matching or heavy Lua operations may directly |
| affect this value, and having it in the logs will make it easier to spot the |
| faulty processing that needs to be fixed to recover decent performance. |
| Note: this value is exactly cpu_ns_tot divided by cpu_calls. |
| |
| cpu_ns_tot : integer |
| Returns the total number of nanoseconds spent in each call to the task |
| processing the stream or current request. This number is reset for each new |
| request on the same connections in case of HTTP keep-alive. This value |
| indicates the overall cost of processing the request or the connection for |
| each call. There is no good nor bad value but the time spent in a call |
| automatically causes latency for other processing (see lat_ns_avg below), |
| induces CPU costs on the machine, and may affect other connections' apparent |
| response time. Certain operations like compression, complex regex matching or |
| heavy Lua operations may directly affect this value, and having it in the |
| logs will make it easier to spot the faulty processing that needs to be fixed |
| to recover decent performance. The value may be artificially high due to a |
| high cpu_calls count, for example when processing many HTTP chunks, and for |
| this reason it is often preferred to log cpu_ns_avg instead. |
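| |
| For instance, a minimal log-format sketch for an HTTP proxy (the field |
| names are purely illustrative) exposing these metrics to help spot |
| expensive requests : |
| |
| log-format "%ci [%tr] %ST calls=%[cpu_calls] cpu=%[cpu_ns_avg] lat=%[lat_ns_avg]"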
| |
| date([<offset>],[<unit>]) : integer |
| Returns the current date as the epoch (number of seconds since 01/01/1970). |
| |
| If an offset value is specified, then it is added to the current date before |
| returning the value. This is particularly useful to compute relative dates, |
| as both positive and negative offsets are allowed. |
| It is useful combined with the http_date converter. |
| |
| <unit> is facultative, and can be set to "s" for seconds (default behavior), |
| "ms" for milliseconds or "us" for microseconds. |
| If unit is set, return value is an integer reflecting either seconds, |
| milliseconds or microseconds since epoch, plus offset. |
| It is useful when a time resolution of less than a second is needed. |
| |
| Example : |
| |
| # set an expires header to now+1 hour in every response |
| http-response set-header Expires %[date(3600),http_date] |
| |
| # set an expires header to now+1 hour in every response, with |
| # millisecond granularity |
| http-response set-header Expires %[date(3600000,ms),http_date(0,ms)] |
| |
| date_us : integer |
| Returns the microseconds part of the date (the "second" part is returned by |
| the date sample). This sample is coherent with the date sample as it comes |
| from the same timeval structure. |
| |
| env(<name>) : string |
| Returns a string containing the value of environment variable <name>. As a |
| reminder, environment variables are per-process and are sampled when the |
| process starts. This can be useful to pass some information to a next hop |
| server, or with ACLs to take specific action when the process is started a |
| certain way. |
| |
| Examples : |
| # Pass the Via header to next hop with the local hostname in it |
| http-request add-header Via 1.1\ %[env(HOSTNAME)] |
| |
| # reject cookie-less requests when the STOP environment variable is set |
| http-request deny if !{ req.cook(SESSIONID) -m found } { env(STOP) -m found } |
| |
| fe_conn([<frontend>]) : integer |
| Returns the number of currently established connections on the frontend, |
| possibly including the connection being evaluated. If no frontend name is |
| specified, the current one is used. But it is also possible to check another |
| frontend. It can be used to return a sorry page before hard-blocking, or to |
| use a specific backend to drain new requests when the farm is considered |
| full. This is mostly used with ACLs but can also be used to pass some |
| statistics to servers in HTTP headers. See also the "dst_conn", "be_conn", |
| "fe_sess_rate" fetches. |
| |
| fe_req_rate([<frontend>]) : integer |
| Returns an integer value corresponding to the number of HTTP requests per |
| second sent to a frontend. This number can differ from "fe_sess_rate" in |
| situations where client-side keep-alive is enabled. |
| |
| fe_sess_rate([<frontend>]) : integer |
| Returns an integer value corresponding to the sessions creation rate on the |
| frontend, in number of new sessions per second. This is used with ACLs to |
| limit the incoming session rate to an acceptable range in order to prevent |
| abuse of service at the earliest moment, for example when combined with other |
| layer 4 ACLs in order to force the clients to wait a bit for the rate to go |
| down below the limit. It can also be useful to add this element to logs using |
| a log-format directive. See also the "rate-limit sessions" directive for use |
| in frontends. |
| |
| Example : |
| # This frontend limits incoming mails to 10/s with a max of 100 |
| # concurrent connections. We accept any connection below 10/s, and |
| # force excess clients to wait for 100 ms. Since clients are limited to |
| # 100 max, there cannot be more than 10 incoming mails per second. |
| frontend mail |
| bind :25 |
| mode tcp |
| maxconn 100 |
| acl too_fast fe_sess_rate ge 10 |
| tcp-request inspect-delay 100ms |
| tcp-request content accept if ! too_fast |
| tcp-request content accept if WAIT_END |
| |
| hostname : string |
| Returns the system hostname. |
| |
| int(<integer>) : signed integer |
| Returns a signed integer. |
| |
| ipv4(<ipv4>) : ipv4 |
| Returns an ipv4. |
| |
| ipv6(<ipv6>) : ipv6 |
| Returns an ipv6. |
| |
| last_rule_file : string |
| This returns the name of the configuration file containing the last final |
| rule that was matched during stream analysis. A final rule is one that |
| terminates the evaluation of the rule set (like an "accept", "deny" or |
| "redirect"). This works for TCP request and response rules acting on the |
| "content" rulesets, and on HTTP rules from "http-request", "http-response" |
| and "http-after-response" rule sets. The legacy "redirect" rulesets are not |
| supported (such information is not stored there), and neither "tcp-request |
| connection" nor "tcp-request session" rulesets are supported because the |
| information is stored at the stream level and streams do not exist during |
| these rules. The main purpose of this function is to be able to report in |
| logs where the rule that gave the final verdict was located, in order to |
| help figure out why a request was denied, for example. See also |
| "last_rule_line". |
| |
| last_rule_line : integer |
| This returns the line number in the configuration file where the last final |
| rule that was matched during stream analysis is located. A final rule is one |
| that terminates the evaluation of the rule set (like an "accept", "deny" or |
| "redirect"). This works for TCP request and response rules acting on the |
| "content" rulesets, and on HTTP rules from "http-request", "http-response" |
| and "http-after-response" rule sets. The legacy "redirect" rulesets are not |
| supported (such information is not stored there), and neither "tcp-request |
| connection" nor "tcp-request session" rulesets are supported because the |
| information is stored at the stream level and streams do not exist during |
| these rules. The main purpose of this function is to be able to report in |
| logs where the rule that gave the final verdict was located, in order to |
| help figure out why a request was denied, for example. See also |
| "last_rule_file". |
| |
| lat_ns_avg : integer |
| Returns the average number of nanoseconds spent between the moment the task |
| handling the stream is woken up and the moment it is effectively called. This |
| number is reset for each new request on the same connections in case of HTTP |
| keep-alive. This value indicates the overall latency inflicted to the current |
| request by all other requests being processed in parallel, and is a direct |
| indicator of perceived performance due to noisy neighbours. In order to keep |
| the value low, it is possible to reduce the scheduler's run queue depth using |
| "tune.runqueue-depth", to reduce the number of concurrent events processed at |
| once using "tune.maxpollevents", to decrease the stream's nice value using |
| the "nice" option on the "bind" lines or in the frontend, to enable low |
| latency scheduling using "tune.sched.low-latency", or to look for other heavy |
| requests in logs (those exhibiting large values of "cpu_ns_avg"), whose |
| processing needs to be adjusted or fixed. Compression of large buffers could |
| be a culprit, like heavy regex or long lists of regex. Note: this value is |
| exactly lat_ns_tot divided by cpu_calls. |
| |
| lat_ns_tot : integer |
| Returns the total number of nanoseconds spent between the moment the task |
| handling the stream is woken up and the moment it is effectively called. This |
| number is reset for each new request on the same connections in case of HTTP |
| keep-alive. This value indicates the overall latency inflicted to the current |
| request by all other requests being processed in parallel, and is a direct |
| indicator of perceived performance due to noisy neighbours. In order to keep |
| the value low, it is possible to reduce the scheduler's run queue depth using |
| "tune.runqueue-depth", to reduce the number of concurrent events processed at |
| once using "tune.maxpollevents", to decrease the stream's nice value using |
| the "nice" option on the "bind" lines or in the frontend, to enable low |
| latency scheduling using "tune.sched.low-latency", or to look for other heavy |
| requests in logs (those exhibiting large values of "cpu_ns_avg"), whose |
| processing needs to be adjusted or fixed. Compression of large buffers could |
| be a culprit, like heavy regex or long lists of regex. Note: while it |
| may intuitively seem that the total latency adds to a transfer time, it is |
| almost never true because while a task waits for the CPU, network buffers |
| continue to fill up and the next call will process more at once. The value |
| may be artificially high due to a high cpu_calls count, for example when |
| processing many HTTP chunks, and for this reason it is often preferred to log |
| lat_ns_avg instead, which is a more relevant performance indicator. |
| |
| meth(<method>) : method |
| Returns a method. |
| |
| nbsrv([<backend>]) : integer |
| Returns an integer value corresponding to the number of usable servers of |
| either the current backend or the named backend. This is mostly used with |
| ACLs but can also be useful when added to logs. This is normally used to |
| switch to an alternate backend when the number of servers is too low to |
| handle the load. It is useful to report a failure when combined with |
| "monitor fail". |
| |
| prio_class : integer |
| Returns the priority class of the current session for http mode or connection |
| for tcp mode. The value will be that set by the last call to "http-request |
| set-priority-class" or "tcp-request content set-priority-class". |
| |
| prio_offset : integer |
| Returns the priority offset of the current session for http mode or |
| connection for tcp mode. The value will be that set by the last call to |
| "http-request set-priority-offset" or "tcp-request content |
| set-priority-offset". |
| |
| proc : integer |
| Always returns value 1 (historically it would return the calling process |
| number). |
| |
| queue([<backend>]) : integer |
| Returns the total number of queued connections of the designated backend, |
| including all the connections in server queues. If no backend name is |
| specified, the current one is used, but it is also possible to check another |
| one. This is useful with ACLs or to pass statistics to backend servers. This |
| can be used to take actions when queuing goes above a known level, generally |
| indicating a surge of traffic or a massive slowdown on the servers. One |
| possible action could be to reject new users but still accept old ones. See |
| also the "avg_queue", "be_conn", and "be_sess_rate" fetches. |
| |
| quic_enabled : boolean |
| Returns true when support for the QUIC transport protocol was compiled in |
| and the protocol was not disabled by the "no-quic" global option. See also |
| the "no-quic" global option. |
| |
| rand([<range>]) : integer |
| Returns a random integer value within a range of <range> possible values, |
| starting at zero. If the range is not specified, it defaults to 2^32, which |
| gives numbers between 0 and 4294967295. It can be useful to pass some values |
| needed to take some routing decisions for example, or just for debugging |
| purposes. This random must not be used for security purposes. |
| |
| srv_conn([<backend>/]<server>) : integer |
| Returns an integer value corresponding to the number of currently established |
| connections on the designated server, possibly including the connection being |
| evaluated. If <backend> is omitted, then the server is looked up in the |
| current backend. It can be used to use a specific farm when one server is |
| full, or to inform the server about our view of the number of active |
| connections with it. See also the "fe_conn", "be_conn", "queue", and |
| "srv_conn_free" fetch methods. |
| |
| srv_conn_free([<backend>/]<server>) : integer |
| Returns an integer value corresponding to the number of available connections |
| on the designated server, possibly including the connection being evaluated. |
| The value does not include queue slots. If <backend> is omitted, then the |
| server is looked up in the current backend. It can be used to use a specific |
| farm when one server is full, or to inform the server about our view of the |
| number of active connections with it. See also the "be_conn_free" and |
| "srv_conn" fetch methods. |
| |
| OTHER CAVEATS AND NOTES: If the server maxconn is 0, then this fetch clearly |
| does not make sense, in which case the value returned will be -1. |
| |
| srv_is_up([<backend>/]<server>) : boolean |
| Returns true when the designated server is UP, and false when it is either |
| DOWN or in maintenance mode. If <backend> is omitted, then the server is |
| looked up in the current backend. It is mainly used to take action based on |
| an external status reported via a health check (e.g. a geographical site's |
| availability). Another possible use which is more of a hack consists in |
| using dummy servers as boolean variables that can be enabled or disabled from |
| the CLI, so that rules depending on those ACLs can be tweaked in realtime. |
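| |
| A possible sketch of the dummy-server trick mentioned above (backend and |
| server names are purely illustrative); the flag is flipped from the CLI |
| with "disable server" / "enable server" : |
| |
| backend bk_flags |
| server site_switch 127.0.0.1:1 |
| |
| frontend www |
| bind :80 |
| acl site_enabled srv_is_up(bk_flags/site_switch) |
| http-request redirect location /maint.html if !site_enabled |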
| |
| srv_queue([<backend>/]<server>) : integer |
| Returns an integer value corresponding to the number of connections currently |
| pending in the designated server's queue. If <backend> is omitted, then the |
| server is looked up in the current backend. It can sometimes be used together |
| with the "use-server" directive to force to use a known faster server when it |
| is not much loaded. See also the "srv_conn", "avg_queue" and "queue" sample |
| fetch methods. |
| |
| srv_sess_rate([<backend>/]<server>) : integer |
| Returns an integer corresponding to the sessions creation rate on the |
| designated server, in number of new sessions per second. If <backend> is |
| omitted, then the server is looked up in the current backend. This is mostly |
| used with ACLs but can make sense with logs too. This is used to switch to an |
| alternate backend when an expensive or fragile one reaches too high a session |
| rate, or to limit abuse of service (e.g. prevent latent requests from |
| overloading servers). |
| |
| Example : |
| # Redirect to a separate backend when the servers get too busy |
| acl srv1_full srv_sess_rate(be1/srv1) gt 50 |
| acl srv2_full srv_sess_rate(be1/srv2) gt 50 |
| use_backend be2 if srv1_full or srv2_full |
| |
| srv_iweight([<backend>/]<server>) : integer |
| Returns an integer corresponding to the server's initial weight. If <backend> |
| is omitted, then the server is looked up in the current backend. See also |
| "srv_weight" and "srv_uweight". |
| |
| srv_uweight([<backend>/]<server>) : integer |
| Returns an integer corresponding to the user-visible server's weight. If |
| <backend> is omitted, then the server is looked up in the current |
| backend. See also "srv_weight" and "srv_iweight". |
| |
| srv_weight([<backend>/]<server>) : integer |
| Returns an integer corresponding to the current (or effective) server's |
| weight. If <backend> is omitted, then the server is looked up in the current |
| backend. See also "srv_iweight" and "srv_uweight". |
| |
| stopping : boolean |
| Returns TRUE if the process calling the function is currently stopping. This |
| can be useful for logging, or for relaxing certain checks or helping close |
| certain connections upon graceful shutdown. |
| |
| str(<string>) : string |
| Returns a string. |
| |
| table_avl([<table>]) : integer |
| Returns the total number of available entries in the current proxy's |
| stick-table or in the designated stick-table. See also table_cnt. |
| |
| table_cnt([<table>]) : integer |
| Returns the total number of entries currently in use in the current proxy's |
| stick-table or in the designated stick-table. See also src_conn_cnt and |
| table_avl for other entry counting methods. |
| |
| thread : integer |
| Returns an integer value corresponding to the position of the thread calling |
| the function, between 0 and (global.nbthread-1). This is useful for logging |
| and debugging purposes. |
| |
| uuid([<version>]) : string |
| Returns a UUID following the RFC4122 standard. If the version is not |
| specified, a UUID version 4 (fully random) is returned. |
| Currently, only version 4 is supported. |
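| |
| For example (the header name is purely illustrative), every request can be |
| tagged with a random identifier : |
| |
| http-request set-header X-Request-ID %[uuid] |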
| |
| var(<var-name>[,<default>]) : undefined |
| Returns a variable with the stored type. If the variable is not set, the |
| sample fetch fails, unless a default value is provided, in which case it will |
| return it as a string. Empty strings are permitted. The name of the variable |
| starts with an indication about its scope. The scopes allowed are: |
| "proc" : the variable is shared with the whole process |
| "sess" : the variable is shared with the whole session |
| "txn" : the variable is shared with the transaction (request and |
| response), |
| "req" : the variable is shared only during request processing, |
| "res" : the variable is shared only during response processing. |
| This prefix is followed by a name. The separator is a '.'. The name may only |
| contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. |
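| |
| For example, a minimal sketch (variable and header names are illustrative) |
| storing the request path early and reporting it later with a fallback |
| value : |
| |
| http-request set-var(txn.path) path |
| http-response set-header X-Seen-Path %[var(txn.path,unknown)] |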
| |
| 7.3.3. Fetching samples at Layer 4 |
| ---------------------------------- |
| |
| The layer 4 usually describes just the transport layer which in HAProxy is |
| closest to the connection, where no content is yet made available. The fetch |
| methods described here are usable as low as the "tcp-request connection" rule |
| sets unless they require some future information. Those generally include |
| TCP/IP addresses and ports, as well as elements from stick-tables related to |
| the incoming connection. For retrieving a value from a sticky counter, the |
| counter number can be explicitly set as 0, 1, or 2 using the pre-defined |
| "sc0_", "sc1_", or "sc2_" prefix. These three pre-defined prefixes can only be |
| used if the global "tune.stick-counters" value does not exceed 3, otherwise the |
| counter number can be specified as the first integer argument when using the |
| "sc_" prefix starting from "sc_0" to "sc_N" where N is (tune.stick-counters-1). |
| An optional table may be specified with the "sc*" form, in which case the |
| currently tracked key will be looked up into this alternate table instead of |
| the table currently being tracked. |
| |
| bc_dst : ip |
| This is the destination ip address of the connection on the server side, |
| which is the server address HAProxy connected to. It is of type IP and works |
| on both IPv4 and IPv6 tables. On IPv6 tables, IPv4 address is mapped to its |
| IPv6 equivalent, according to RFC 4291. |
| |
| bc_dst_port : integer |
| Returns an integer value corresponding to the destination TCP port of the |
| connection on the server side, which is the port HAProxy connected to. |
| |
| bc_err : integer |
| Returns the ID of the error that might have occurred on the current backend |
| connection. See the "fc_err_str" fetch for a full list of error codes |
| and their corresponding error message. |
| |
| bc_err_str : string |
| Returns an error message describing what problem happened on the current |
| backend connection, resulting in a connection failure. See the |
| "fc_err_str" fetch for a full list of error codes and their |
| corresponding error message. |
| |
| bc_glitches : integer |
| Returns the number of protocol glitches counted on the backend connection. |
| These generally cover protocol violations as well as small anomalies that |
| generally indicate a bogus or misbehaving server that may cause trouble in |
| the infrastructure (e.g. cause connections to be aborted early, inducing |
| frequent TLS renegotiations). These may also be caused by too large responses |
| that cannot fit into a single buffer, explaining HTTP 502 errors. Ideally |
| this number should remain zero, though it's generally fine if it remains very |
| low compared to the total number of requests. These values should normally |
| not be considered as alarming (especially small ones), though a sudden jump |
| may indicate an anomaly somewhere. Not all protocol multiplexers measure this |
| metric and the only way to get more details about the events is to enable |
| traces to capture all exchanges. |
| |
| bc_http_major : integer |
| Returns the backend connection's HTTP major version encoding, which may be 1 |
| for HTTP/0.9 to HTTP/1.1 or 2 for HTTP/2. Note, this is based on the on-wire |
| encoding and not the version present in the request header. |
| |
| bc_src : ip |
| This is the source ip address of the connection on the server side, which is |
| the server address HAProxy connected from. It is of type IP and works on both |
| IPv4 and IPv6 tables. On IPv6 tables, IPv4 addresses are mapped to their IPv6 |
| equivalent, according to RFC 4291. |
| |
| bc_src_port : integer |
| Returns an integer value corresponding to the TCP source port of the |
| connection on the server side, which is the port HAProxy connected from. |
| |
| be_id : integer |
| Returns an integer containing the current backend's id. It can be used in |
| frontends with responses to check which backend processed the request. It can |
| also be used in a tcp-check or an http-check ruleset. |
| |
| be_name : string |
| Returns a string containing the current backend's name. It can be used in |
| frontends with responses to check which backend processed the request. It can |
| also be used in a tcp-check or an http-check ruleset. |
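| |
| For example (the header name is purely illustrative), a frontend may record |
| in the response which backend served the request : |
| |
| http-response set-header X-Served-By %[be_name] |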
| |
| bc_rtt(<unit>) : integer |
| Returns the Round Trip Time (RTT) measured by the kernel for the backend |
| connection. <unit> is facultative, by default the unit is milliseconds. <unit> |
| can be set to "ms" for milliseconds or "us" for microseconds. If the server |
| connection is not established, if the connection is not TCP or if the |
| operating system does not support TCP_INFO, for example Linux kernels before |
| 2.4, the sample fetch fails. |
| |
| bc_rttvar(<unit>) : integer |
| Returns the Round Trip Time (RTT) variance measured by the kernel for the |
| backend connection. <unit> is facultative, by default the unit is milliseconds. |
| <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the |
| server connection is not established, if the connection is not TCP or if the |
| operating system does not support TCP_INFO, for example Linux kernels before |
| 2.4, the sample fetch fails. |
| |
| be_server_timeout : integer |
| Returns the configuration value in millisecond for the server timeout of the |
| current backend. This timeout can be overwritten by a "set-timeout" rule. See |
| also the "cur_server_timeout". |
| |
| be_tunnel_timeout : integer |
| Returns the configuration value in millisecond for the tunnel timeout of the |
| current backend. This timeout can be overwritten by a "set-timeout" rule. See |
| also the "cur_tunnel_timeout". |
| |
| cur_server_timeout : integer |
| Returns the currently applied server timeout in millisecond for the stream. |
| In the default case, this will be equal to be_server_timeout unless a |
| "set-timeout" rule has been applied. See also "be_server_timeout". |
| |
| cur_tunnel_timeout : integer |
| Returns the currently applied tunnel timeout in millisecond for the stream. |
| In the default case, this will be equal to be_tunnel_timeout unless a |
| "set-timeout" rule has been applied. See also "be_tunnel_timeout". |
| |
| dst : ip |
| This is the destination IP address of the connection on the client side, |
| which is the address the client connected to. Any tcp/http rules may alter |
| this address. It can be useful when running in transparent mode. It is of |
| type IP and works on both IPv4 and IPv6 tables. On IPv6 tables, IPv4 address |
| is mapped to its IPv6 equivalent, according to RFC 4291. When the incoming |
| connection passed through address translation or redirection involving |
| connection tracking, the original destination address before the redirection |
| will be reported. On Linux systems, the source and destination may seldom |
| appear reversed if the nf_conntrack_tcp_loose sysctl is set, because a late |
| response may reopen a timed out connection and switch what is believed to be |
| the source and the destination. |
| |
| dst_conn : integer |
| Returns an integer value corresponding to the number of currently established |
| connections on the same socket including the one being evaluated. It is |
| normally used with ACLs but can as well be used to pass the information to |
| servers in an HTTP header or in logs. It can be used to either return a sorry |
| page before hard-blocking, or to use a specific backend to drain new requests |
| when the socket is considered saturated. This offers the ability to assign |
| different limits to different listening ports or addresses. See also the |
| "fe_conn" and "be_conn" fetches. |
| |
| dst_is_local : boolean |
| Returns true if the destination address of the incoming connection is local |
| to the system, or false if the address doesn't exist on the system, meaning |
| that it was intercepted in transparent mode. It can be useful to apply |
| certain rules by default to forwarded traffic and other rules to the traffic |
| targeting the real address of the machine. For example the stats page could |
| be delivered only on this address, or SSH access could be locally redirected. |
| Please note that the check involves a few system calls, so it's better to do |
| it only once per connection. |
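| |
| For example, a minimal sketch (backend names are purely illustrative) |
| sending locally addressed traffic to a dedicated backend while forwarding |
| the rest : |
| |
| frontend www |
| bind :80 transparent |
| acl to_local dst_is_local |
| use_backend bk_local_services if to_local |
| default_backend bk_forwarded |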
| |
| dst_port : integer |
| Returns an integer value corresponding to the destination TCP port of the |
| connection on the client side, which is the port the client connected to. |
| Any tcp/http rules may alter this address. This might be used when running in |
| transparent mode, when assigning dynamic ports to some clients for a whole |
| application session, to stick all users to a same server, or to pass the |
| destination port information to a server using an HTTP header. |
| |
| fc_dst : ip |
| This is the original destination IP address of the connection on the client |
| side. Only "tcp-request connection" rules may alter this address. See "dst" |
| for details. |
| |
| fc_dst_is_local : boolean |
| Returns true if the original destination address of the incoming connection |
| is local to the system, or false if the address doesn't exist on the |
| system. See "dst_is_local" for details. |
| |
| fc_dst_port : integer |
| Returns an integer value corresponding to the original destination TCP port |
| of the connection on the client side. Only "tcp-request connection" rules may |
| alter this port. See "dst_port" for details. |
| |
| fc_err : integer |
| Returns the ID of the error that might have occurred on the current |
| connection. Any strictly positive value of this fetch indicates that the |
| connection did not succeed and would result in an error log being output (as |
| described in section 8.2.5). See the "fc_err_str" fetch for a full list of |
| error codes and their corresponding error message. |
| |
| fc_err_str : string |
| Returns an error message describing what problem happened on the current |
| connection, resulting in a connection failure. This string corresponds to the |
| "message" part of the error log format (see section 8.2.5). See below for a |
| full list of error codes and their corresponding error messages : |
| |
| +----+---------------------------------------------------------------------------+ |
| | ID | message | |
| +----+---------------------------------------------------------------------------+ |
| | 0 | "Success" | |
| | 1 | "Reached configured maxconn value" | |
| | 2 | "Too many sockets on the process" | |
| | 3 | "Too many sockets on the system" | |
| | 4 | "Out of system buffers" | |
| | 5 | "Protocol or address family not supported" | |
| | 6 | "General socket error" | |
| | 7 | "Source port range exhausted" | |
| | 8 | "Can't bind to source address" | |
| | 9 | "Out of local source ports on the system" | |
| | 10 | "Local source address already in use" | |
| | 11 | "Connection closed while waiting for PROXY protocol header" | |
| | 12 | "Connection error while waiting for PROXY protocol header" | |
| | 13 | "Timeout while waiting for PROXY protocol header" | |
| | 14 | "Truncated PROXY protocol header received" | |
| | 15 | "Received something which does not look like a PROXY protocol header" | |
| | 16 | "Received an invalid PROXY protocol header" | |
| | 17 | "Received an unhandled protocol in the PROXY protocol header" | |
| | 18 | "Connection closed while waiting for NetScaler Client IP header" | |
| | 19 | "Connection error while waiting for NetScaler Client IP header" | |
| | 20 | "Timeout while waiting for a NetScaler Client IP header" | |
| | 21 | "Truncated NetScaler Client IP header received" | |
| | 22 | "Received an invalid NetScaler Client IP magic number" | |
| | 23 | "Received an unhandled protocol in the NetScaler Client IP header" | |
| | 24 | "Connection closed during SSL handshake" | |
| | 25 | "Connection error during SSL handshake" | |
| | 26 | "Timeout during SSL handshake" | |
| | 27 | "Too many SSL connections" | |
| | 28 | "Out of memory when initializing an SSL connection" | |
| | 29 | "Rejected a client-initiated SSL renegotiation attempt" | |
| | 30 | "SSL client CA chain cannot be verified" | |
| | 31 | "SSL client certificate not trusted" | |
| | 32 | "Server presented an SSL certificate different from the configured one" | |
| | 33 | "Server presented an SSL certificate different from the expected one" | |
| | 34 | "SSL handshake failure" | |
| | 35 | "SSL handshake failure after heartbeat" | |
| | 36 | "Stopped a TLSv1 heartbeat attack (CVE-2014-0160)" | |
| | 37 | "Attempt to use SSL on an unknown target (internal error)" | |
| | 38 | "Server refused early data" | |
| | 39 | "SOCKS4 Proxy write error during handshake" | |
| | 40 | "SOCKS4 Proxy read error during handshake" | |
| | 41 | "SOCKS4 Proxy deny the request" | |
| | 42 | "SOCKS4 Proxy handshake aborted by server" | |
| | 43 | "SSL fatal error" | |
| +----+---------------------------------------------------------------------------+ |
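| |
| For example, a minimal sketch (the format contents are purely illustrative) |
| reporting the error code and message for failed frontend connections : |
| |
| error-log-format "%ci:%cp [%tr] %ft err=%[fc_err] %[fc_err_str]" |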
| |
| fc_fackets : integer |
| Returns the fack counter measured by the kernel for the client |
| connection. If the server connection is not established, if the connection is |
| not TCP or if the operating system does not support TCP_INFO, for example |
| Linux kernels before 2.4, the sample fetch fails. |
| |
| fc_glitches : integer |
| Returns the number of protocol glitches counted on the frontend connection. |
| These generally cover protocol violations as well as small anomalies that |
| generally indicate a bogus or misbehaving client that may cause trouble in |
| the infrastructure, such as excess of errors in the logs, or many connections |
| being aborted early, inducing frequent TLS renegotiations. These may also be |
| caused by too large requests that cannot fit into a single buffer, explaining |
| HTTP 400 errors. Ideally this number should remain zero, though it may be |
| possible that some browsers playing with the protocol boundaries trigger it |
| once in a while. These values should normally not be considered as alarming |
| (especially small ones), though a sudden jump may indicate an anomaly |
| somewhere. Large values (i.e. hundreds to thousands per connection, or as |
| many as the requests) may indicate a purposely built client that is trying to |
| fingerprint or attack the protocol stack. Not all protocol multiplexers |
| measure this metric, and the only way to get more details about the events is |
| to enable traces to capture all exchanges. |
| |
| fc_http_major : integer |
| Reports the front connection's HTTP major version encoding, which may be 1 |
| for HTTP/0.9 to HTTP/1.1 or 2 for HTTP/2. Note, this is based on the on-wire |
| encoding and not on the version present in the request header. |
| |
| fc_lost : integer |
| Returns the lost counter measured by the kernel for the client |
| connection. If the server connection is not established, if the connection is |
| not TCP or if the operating system does not support TCP_INFO, for example |
| Linux kernels before 2.4, the sample fetch fails. |
| |
| fc_pp_authority : string |
| Returns the authority TLV sent by the client in the PROXY protocol header, |
| if any. |
| |
| fc_pp_unique_id : string |
| Returns the unique ID TLV sent by the client in the PROXY protocol header, |
| if any. |
| |
| fc_rcvd_proxy : boolean |
| Returns true if the client initiated the connection with a PROXY protocol |
| header. |
| |
| fc_reordering : integer |
| Returns the reordering counter measured by the kernel for the client |
| connection. If the server connection is not established, if the connection is |
| not TCP or if the operating system does not support TCP_INFO, for example |
| Linux kernels before 2.4, the sample fetch fails. |
| |
| fc_retrans : integer |
| Returns the retransmits counter measured by the kernel for the client |
| connection. If the server connection is not established, if the connection is |
| not TCP or if the operating system does not support TCP_INFO, for example |
| Linux kernels before 2.4, the sample fetch fails. |
| |
| fc_rtt(<unit>) : integer |
| Returns the Round Trip Time (RTT) measured by the kernel for the client |
| connection. <unit> is facultative, by default the unit is milliseconds. <unit> |
| can be set to "ms" for milliseconds or "us" for microseconds. If the server |
| connection is not established, if the connection is not TCP or if the |
| operating system does not support TCP_INFO, for example Linux kernels before |
| 2.4, the sample fetch fails. |
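| |
| For example (the header name is purely illustrative), the measured RTT can |
| be passed to the servers whenever it is available : |
| |
| http-request set-header X-Client-RTT %[fc_rtt(ms)] if { fc_rtt(ms) -m found }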
| |
| fc_rttvar(<unit>) : integer |
| Returns the Round Trip Time (RTT) variance measured by the kernel for the |
| client connection. <unit> is facultative, by default the unit is milliseconds. |
| <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the |
| server connection is not established, if the connection is not TCP or if the |
| operating system does not support TCP_INFO, for example Linux kernels before |
| 2.4, the sample fetch fails. |
| |
| fc_sacked : integer |
| Returns the sacked counter measured by the kernel for the client connection. |
| If the server connection is not established, if the connection is not TCP or |
| if the operating system does not support TCP_INFO, for example Linux kernels |
| before 2.4, the sample fetch fails. |
| |
| fc_src : ip |
| This is the original source IP address of the connection on the client side. |
| Only "tcp-request connection" rules may alter this address. See "src" for |
| details. |
| |
| fc_src_is_local : boolean |
| Returns true if the source address of incoming connection is local to the |
| system, or false if the address doesn't exist on the system. See |
| "src_is_local" for details. |
| |
| fc_src_port : integer |
| Returns an integer value corresponding to the TCP source port of the |
| connection on the client side. Only "tcp-request connection" rules may alter |
| this port. See "src_port" for details. |
| |
| fc_unacked : integer |
| Returns the unacked counter measured by the kernel for the client connection. |
| If the server connection is not established, if the connection is not TCP or |
| if the operating system does not support TCP_INFO, for example Linux kernels |
| before 2.4, the sample fetch fails. |
| |
| fe_defbe : string |
| Returns a string containing the frontend's default backend name. It can be |
| used in frontends to check which backend will handle requests by default. |
| |
| fe_id : integer |
| Returns an integer containing the current frontend's id. It can be used in |
| backends to check from which frontend it was called, or to stick all users |
| coming via a same frontend to the same server. |
| |
| fe_name : string |
| Returns a string containing the current frontend's name. It can be used in |
| backends to check from which frontend it was called, or to stick all users |
| coming via a same frontend to the same server. |
| |
| fe_client_timeout : integer |
| Returns the configuration value in millisecond for the client timeout of the |
| current frontend. |
| |
| sc_bytes_in_rate(<ctr>[,<table>]) : integer |
| sc0_bytes_in_rate([<table>]) : integer |
| sc1_bytes_in_rate([<table>]) : integer |
| sc2_bytes_in_rate([<table>]) : integer |
| Returns the average client-to-server bytes rate from the currently tracked |
| counters, measured in amount of bytes over the period configured in the |
| table. See also src_bytes_in_rate. |
| |
| sc_bytes_out_rate(<ctr>[,<table>]) : integer |
| sc0_bytes_out_rate([<table>]) : integer |
| sc1_bytes_out_rate([<table>]) : integer |
| sc2_bytes_out_rate([<table>]) : integer |
| Returns the average server-to-client bytes rate from the currently tracked |
| counters, measured in amount of bytes over the period configured in the |
| table. See also src_bytes_out_rate. |
| |
| sc_clr_gpc(<idx>,<ctr>[,<table>]) : integer |
| Clears the General Purpose Counter at the index <idx> of the array |
| associated to the designated tracked counter of ID <ctr> from current |
| proxy's stick table or from the designated stick-table <table>, and |
| returns its previous value. <idx> is an integer between 0 and 99 and |
| <ctr> an integer between 0 and 2. |
| Before the first invocation, the stored value is zero, so first invocation |
| will always return zero. |
| This fetch applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). |
| |
| sc_clr_gpc0(<ctr>[,<table>]) : integer |
| sc0_clr_gpc0([<table>]) : integer |
| sc1_clr_gpc0([<table>]) : integer |
| sc2_clr_gpc0([<table>]) : integer |
| Clears the first General Purpose Counter associated to the currently tracked |
| counters, and returns its previous value. Before the first invocation, the |
| stored value is zero, so first invocation will always return zero. This is |
| typically used as a second ACL in an expression in order to mark a connection |
| when a first ACL was verified : |
| |
| Example: |
| # block if 5 consecutive requests continue to come faster than 10 sess |
| # per second, and reset the counter as soon as the traffic slows down. |
| acl abuse sc0_http_req_rate gt 10 |
| acl kill sc0_inc_gpc0 gt 5 |
| acl save sc0_clr_gpc0 ge 0 |
| tcp-request connection accept if !abuse save |
| tcp-request connection reject if abuse kill |
| |
| sc_clr_gpc1(<ctr>[,<table>]) : integer |
| sc0_clr_gpc1([<table>]) : integer |
| sc1_clr_gpc1([<table>]) : integer |
| sc2_clr_gpc1([<table>]) : integer |
| Clears the second General Purpose Counter associated to the currently tracked |
| counters, and returns its previous value. Before the first invocation, the |
| stored value is zero, so first invocation will always return zero. This is |
| typically used as a second ACL in an expression in order to mark a connection |
| when a first ACL was verified. |
| |
| sc_conn_cnt(<ctr>[,<table>]) : integer |
| sc0_conn_cnt([<table>]) : integer |
| sc1_conn_cnt([<table>]) : integer |
| sc2_conn_cnt([<table>]) : integer |
| Returns the cumulative number of incoming connections from currently tracked |
| counters. See also src_conn_cnt. |
| |
| sc_conn_cur(<ctr>[,<table>]) : integer |
| sc0_conn_cur([<table>]) : integer |
| sc1_conn_cur([<table>]) : integer |
| sc2_conn_cur([<table>]) : integer |
| Returns the current amount of concurrent connections tracking the same |
| tracked counters. This number is automatically incremented when tracking |
| begins and decremented when tracking stops. See also src_conn_cur. |
| |
| sc_conn_rate(<ctr>[,<table>]) : integer |
| sc0_conn_rate([<table>]) : integer |
| sc1_conn_rate([<table>]) : integer |
| sc2_conn_rate([<table>]) : integer |
| Returns the average connection rate from the currently tracked counters, |
| measured in amount of connections over the period configured in the table. |
| See also src_conn_rate. |
| |
| sc_get_gpc(<idx>,<ctr>[,<table>]) : integer |
| Returns the value of the General Purpose Counter at the index <idx> |
| in the GPC array and associated to the currently tracked counter of |
| ID <ctr> from the current proxy's stick-table or from the designated |
| stick-table <table>. <idx> is an integer between 0 and 99 and |
| <ctr> an integer between 0 and 2. If there is no gpc stored at this |
| index, zero is returned. |
| This fetch applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). See also src_get_gpc and sc_inc_gpc. |
| |
| sc_get_gpc0(<ctr>[,<table>]) : integer |
| sc0_get_gpc0([<table>]) : integer |
| sc1_get_gpc0([<table>]) : integer |
| sc2_get_gpc0([<table>]) : integer |
| Returns the value of the first General Purpose Counter associated to the |
| currently tracked counters. See also src_get_gpc0 and sc/sc0/sc1/sc2_inc_gpc0. |
| |
| sc_get_gpc1(<ctr>[,<table>]) : integer |
| sc0_get_gpc1([<table>]) : integer |
| sc1_get_gpc1([<table>]) : integer |
| sc2_get_gpc1([<table>]) : integer |
| Returns the value of the second General Purpose Counter associated to the |
| currently tracked counters. See also src_get_gpc1 and sc/sc0/sc1/sc2_inc_gpc1. |
| |
| sc_get_gpt(<idx>,<ctr>[,<table>]) : integer |
| Returns the value of the General Purpose Tag at the index <idx> of the |
| array associated to the tracked counter of ID <ctr> and from the current |
| proxy's stick-table or the designated stick-table <table>. <idx> |
| is an integer between 0 and 99 and <ctr> an integer between 0 and 2. |
| If there is no GPT stored at this index, zero is returned. |
| This fetch applies only to the 'gpt' array data_type (and not to the |
| legacy 'gpt0' data_type). See also src_get_gpt. |
| |
| sc_get_gpt0(<ctr>[,<table>]) : integer |
| sc0_get_gpt0([<table>]) : integer |
| sc1_get_gpt0([<table>]) : integer |
| sc2_get_gpt0([<table>]) : integer |
| Returns the value of the first General Purpose Tag associated to the |
| currently tracked counters. See also src_get_gpt0. |
| |
| sc_gpc_rate(<idx>,<ctr>[,<table>]) : integer |
| Returns the average increment rate of the General Purpose Counter at the |
| index <idx> of the array associated to the tracked counter of ID <ctr> from |
| the current proxy's table or from the designated stick-table <table>. |
| It reports the frequency at which the gpc counter was incremented over the |
| configured period. <idx> is an integer between 0 and 99 and <ctr> an integer |
| between 0 and 2. |
| Note that the 'gpc_rate' counter array must be stored in the stick-table |
| for a value to be returned, as 'gpc' only holds the event count. |
| This fetch applies only to the 'gpc_rate' array data_type (and not to |
| the legacy 'gpc0_rate' nor 'gpc1_rate' data_types). |
| See also src_gpc_rate, sc_get_gpc, and sc_inc_gpc. |
| |
| sc_gpc0_rate(<ctr>[,<table>]) : integer |
| sc0_gpc0_rate([<table>]) : integer |
| sc1_gpc0_rate([<table>]) : integer |
| sc2_gpc0_rate([<table>]) : integer |
| Returns the average increment rate of the first General Purpose Counter |
| associated to the currently tracked counters. It reports the frequency at |
| which the gpc0 counter was incremented over the configured period. See also |
| src_gpc0_rate, sc/sc0/sc1/sc2_get_gpc0, and sc/sc0/sc1/sc2_inc_gpc0. Note |
| that the "gpc0_rate" counter must be stored in the stick-table for a value to |
| be returned, as "gpc0" only holds the event count. |
| |
| sc_gpc1_rate(<ctr>[,<table>]) : integer |
| sc0_gpc1_rate([<table>]) : integer |
| sc1_gpc1_rate([<table>]) : integer |
| sc2_gpc1_rate([<table>]) : integer |
| Returns the average increment rate of the second General Purpose Counter |
| associated to the currently tracked counters. It reports the frequency at |
| which the gpc1 counter was incremented over the configured period. See also |
| src_gpc1_rate, sc/sc0/sc1/sc2_get_gpc1, and sc/sc0/sc1/sc2_inc_gpc1. Note |
| that the "gpc1_rate" counter must be stored in the stick-table for a value to |
| be returned, as "gpc1" only holds the event count. |
| |
| sc_http_err_cnt(<ctr>[,<table>]) : integer |
| sc0_http_err_cnt([<table>]) : integer |
| sc1_http_err_cnt([<table>]) : integer |
| sc2_http_err_cnt([<table>]) : integer |
| Returns the cumulative number of HTTP errors from the currently tracked |
| counters. This includes both request errors and 4xx error responses. |
| See also src_http_err_cnt. |
| |
| sc_http_err_rate(<ctr>[,<table>]) : integer |
| sc0_http_err_rate([<table>]) : integer |
| sc1_http_err_rate([<table>]) : integer |
| sc2_http_err_rate([<table>]) : integer |
| Returns the average rate of HTTP errors from the currently tracked counters, |
| measured in amount of errors over the period configured in the table. This |
| includes both request errors and 4xx error responses. See also |
| src_http_err_rate. |
| |
| sc_http_fail_cnt(<ctr>[,<table>]) : integer |
| sc0_http_fail_cnt([<table>]) : integer |
| sc1_http_fail_cnt([<table>]) : integer |
| sc2_http_fail_cnt([<table>]) : integer |
| Returns the cumulative number of HTTP response failures from the currently |
| tracked counters. This includes both response errors and 5xx status codes |
| other than 501 and 505. See also src_http_fail_cnt. |
| |
| sc_http_fail_rate(<ctr>[,<table>]) : integer |
| sc0_http_fail_rate([<table>]) : integer |
| sc1_http_fail_rate([<table>]) : integer |
| sc2_http_fail_rate([<table>]) : integer |
| Returns the average rate of HTTP response failures from the currently tracked |
| counters, measured in amount of failures over the period configured in the |
| table. This includes both response errors and 5xx status codes other than |
| 501 and 505. See also src_http_fail_rate. |
| |
| sc_http_req_cnt(<ctr>[,<table>]) : integer |
| sc0_http_req_cnt([<table>]) : integer |
| sc1_http_req_cnt([<table>]) : integer |
| sc2_http_req_cnt([<table>]) : integer |
| Returns the cumulative number of HTTP requests from the currently tracked |
| counters. This includes every started request, valid or not. See also |
| src_http_req_cnt. |
| |
| sc_http_req_rate(<ctr>[,<table>]) : integer |
| sc0_http_req_rate([<table>]) : integer |
| sc1_http_req_rate([<table>]) : integer |
| sc2_http_req_rate([<table>]) : integer |
| Returns the average rate of HTTP requests from the currently tracked |
| counters, measured in amount of requests over the period configured in |
| the table. This includes every started request, valid or not. See also |
| src_http_req_rate. |
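| |
| For example, a minimal sketch (table size, period and threshold are purely |
| illustrative) rejecting clients exceeding 100 requests per 10-second |
| period : |
| |
| frontend www |
| bind :80 |
| stick-table type ip size 100k expire 10m store http_req_rate(10s) |
| http-request track-sc0 src |
| http-request deny deny_status 429 if { sc0_http_req_rate gt 100 } |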
| |
| sc_inc_gpc(<idx>,<ctr>[,<table>]) : integer |
| Increments the General Purpose Counter at the index <idx> of the array |
| associated to the designated tracked counter of ID <ctr> from current |
| proxy's stick table or from the designated stick-table <table>, and |
| returns its new value. <idx> is an integer between 0 and 99 and |
| <ctr> an integer between 0 and 2. |
| Before the first invocation, the stored value is zero, so first invocation |
| will increase it to 1 and will return 1. |
| This fetch applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). |
| |
| sc_inc_gpc0(<ctr>[,<table>]) : integer |
| sc0_inc_gpc0([<table>]) : integer |
| sc1_inc_gpc0([<table>]) : integer |
| sc2_inc_gpc0([<table>]) : integer |
| Increments the first General Purpose Counter associated to the currently |
| tracked counters, and returns its new value. Before the first invocation, |
| the stored value is zero, so first invocation will increase it to 1 and will |
| return 1. This is typically used as a second ACL in an expression in order |
| to mark a connection when a first ACL was verified : |
| |
| Example: |
| acl abuse sc0_http_req_rate gt 10 |
| acl kill sc0_inc_gpc0 gt 0 |
| tcp-request connection reject if abuse kill |
| |
| sc_inc_gpc1(<ctr>[,<table>]) : integer |
| sc0_inc_gpc1([<table>]) : integer |
| sc1_inc_gpc1([<table>]) : integer |
| sc2_inc_gpc1([<table>]) : integer |
| Increments the second General Purpose Counter associated to the currently |
| tracked counters, and returns its new value. Before the first invocation, |
| the stored value is zero, so first invocation will increase it to 1 and will |
| return 1. This is typically used as a second ACL in an expression in order |
| to mark a connection when a first ACL was verified. |
| |
| sc_kbytes_in(<ctr>[,<table>]) : integer |
| sc0_kbytes_in([<table>]) : integer |
| sc1_kbytes_in([<table>]) : integer |
| sc2_kbytes_in([<table>]) : integer |
| Returns the total amount of client-to-server data from the currently tracked |
| counters, measured in kilobytes. The test is currently performed on 32-bit |
| integers, which limits values to 4 terabytes. See also src_kbytes_in. |
| |
| sc_kbytes_out(<ctr>[,<table>]) : integer |
| sc0_kbytes_out([<table>]) : integer |
| sc1_kbytes_out([<table>]) : integer |
| sc2_kbytes_out([<table>]) : integer |
| Returns the total amount of server-to-client data from the currently tracked |
| counters, measured in kilobytes. The test is currently performed on 32-bit |
| integers, which limits values to 4 terabytes. See also src_kbytes_out. |
| |
| sc_sess_cnt(<ctr>[,<table>]) : integer |
| sc0_sess_cnt([<table>]) : integer |
| sc1_sess_cnt([<table>]) : integer |
| sc2_sess_cnt([<table>]) : integer |
| Returns the cumulative number of incoming connections that were transformed |
| into sessions, which means that they were accepted by a "tcp-request |
| connection" rule, from the currently tracked counters. A backend may count |
| more sessions than connections because each connection could result in many |
| backend sessions if some HTTP keep-alive is performed over the connection |
| with the client. See also src_sess_cnt. |
| |
| sc_sess_rate(<ctr>[,<table>]) : integer |
| sc0_sess_rate([<table>]) : integer |
| sc1_sess_rate([<table>]) : integer |
| sc2_sess_rate([<table>]) : integer |
| Returns the average session rate from the currently tracked counters, |
| measured in amount of sessions over the period configured in the table. A |
| session is a connection that got past the early "tcp-request connection" |
| rules. A backend may count more sessions than connections because each |
| connection could result in many backend sessions if some HTTP keep-alive is |
| performed over the connection with the client. See also src_sess_rate. |
| |
| sc_tracked(<ctr>[,<table>]) : boolean |
| sc0_tracked([<table>]) : boolean |
| sc1_tracked([<table>]) : boolean |
| sc2_tracked([<table>]) : boolean |
| Returns true if the designated session counter is currently being tracked by |
| the current session. This can be useful when deciding whether or not we want |
| to set some values in a header passed to the server. |
| |
| sc_trackers(<ctr>[,<table>]) : integer |
| sc0_trackers([<table>]) : integer |
| sc1_trackers([<table>]) : integer |
| sc2_trackers([<table>]) : integer |
| Returns the current amount of concurrent connections tracking the same |
| tracked counters. This number is automatically incremented when tracking |
| begins and decremented when tracking stops. It differs from sc0_conn_cur in |
| that it does not rely on any stored information but on the table's reference |
| count (the "use" value which is returned by "show table" on the CLI). This |
| may sometimes be more suited for layer7 tracking. It can be used to tell a |
| server how many concurrent connections there are from a given address for |
| example. |
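| |
| For instance, a hypothetical frontend (all names below are illustrative) may |
| report the number of concurrent trackers for the client's address to the |
| server in a request header : |
| |
| Example: |
| frontend www |
| mode http |
| bind :80 |
| stick-table type ip size 100k expire 30s store conn_cur |
| tcp-request connection track-sc0 src |
| http-request set-header X-Concurrent-From-Src %[sc0_trackers] |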
| |
| so_id : integer |
| Returns an integer containing the current listening socket's id. It is useful |
| in frontends involving many "bind" lines, or to stick all users coming via a |
| same socket to the same server. |
| |
| so_name : string |
| Returns a string containing the current listening socket's name, as defined |
| with name on a "bind" line. It can serve the same purposes as so_id but with |
| strings instead of integers. |
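| |
| For instance, a frontend with several "bind" lines (socket and backend names |
| below are purely illustrative) may select a backend depending on the socket |
| which accepted the connection : |
| |
| Example: |
| frontend ft_multi |
| bind :80 name clear |
| bind :8080 name alt |
| use_backend bk_alt if { so_name alt } |
| default_backend bk_main |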
| |
| src : ip |
| This is the source IP address of the client of the session. Any tcp/http |
| rules may alter this address. It is of type IP and works on both IPv4 and |
| IPv6 tables. On IPv6 tables, IPv4 addresses are mapped to their IPv6 |
| equivalent, according to RFC 4291. Note that it is the TCP-level source |
| address which is used, and not the address of a client behind a |
| proxy. However if the "accept-proxy" or "accept-netscaler-cip" bind directive |
| is used, it can be the address of a client behind another PROXY-protocol |
| compatible component for all rule sets except "tcp-request connection" which |
| sees the real address. When the incoming connection passed through address |
| translation or redirection involving connection tracking, the original |
| destination address before the redirection will be reported. On Linux |
| systems, the source and destination may occasionally appear reversed if the |
| nf_conntrack_tcp_loose sysctl is set, because a late response may reopen a |
| timed out connection and switch what is believed to be the source and the |
| destination. |
| |
| Example: |
| # add an HTTP header in requests with the originating address' country |
| http-request set-header X-Country %[src,map_ip(geoip.lst)] |
| |
| src_bytes_in_rate([<table>]) : integer |
| Returns the average bytes rate from the incoming connection's source address |
| in the current proxy's stick-table or in the designated stick-table, measured |
| in amount of bytes over the period configured in the table. If the address is |
| not found, zero is returned. See also sc/sc0/sc1/sc2_bytes_in_rate. |
| |
| src_bytes_out_rate([<table>]) : integer |
| Returns the average bytes rate to the incoming connection's source address in |
| the current proxy's stick-table or in the designated stick-table, measured in |
| amount of bytes over the period configured in the table. If the address is |
| not found, zero is returned. See also sc/sc0/sc1/sc2_bytes_out_rate. |
| |
| src_clr_gpc(<idx>,[<table>]) : integer |
| Clears the General Purpose Counter at the index <idx> of the array |
| associated to the incoming connection's source address in the current proxy's |
| stick-table or in the designated stick-table <table>, and returns its |
| previous value. <idx> is an integer between 0 and 99. |
| If the address is not found, an entry is created and 0 is returned. |
| This fetch applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). |
| See also sc_clr_gpc. |
| |
| src_clr_gpc0([<table>]) : integer |
| Clears the first General Purpose Counter associated to the incoming |
| connection's source address in the current proxy's stick-table or in the |
| designated stick-table, and returns its previous value. If the address is not |
| found, an entry is created and 0 is returned. This is typically used as a |
| second ACL in an expression in order to mark a connection when a first ACL |
| was verified : |
| |
| Example: |
| # block if 5 consecutive requests continue to come faster than 10 sess |
| # per second, and reset the counter as soon as the traffic slows down. |
| acl abuse src_http_req_rate gt 10 |
| acl kill src_inc_gpc0 gt 5 |
| acl save src_clr_gpc0 ge 0 |
| tcp-request connection accept if !abuse save |
| tcp-request connection reject if abuse kill |
| |
| src_clr_gpc1([<table>]) : integer |
| Clears the second General Purpose Counter associated to the incoming |
| connection's source address in the current proxy's stick-table or in the |
| designated stick-table, and returns its previous value. If the address is not |
| found, an entry is created and 0 is returned. This is typically used as a |
| second ACL in an expression in order to mark a connection when a first ACL |
| was verified. |
| |
| src_conn_cnt([<table>]) : integer |
| Returns the cumulative number of connections initiated from the current |
| incoming connection's source address in the current proxy's stick-table or in |
| the designated stick-table. If the address is not found, zero is returned. |
| See also sc/sc0/sc1/sc2_conn_cnt. |
| |
| src_conn_cur([<table>]) : integer |
| Returns the current amount of concurrent connections initiated from the |
| current incoming connection's source address in the current proxy's |
| stick-table or in the designated stick-table. If the address is not found, |
| zero is returned. See also sc/sc0/sc1/sc2_conn_cur. |
| |
| src_conn_rate([<table>]) : integer |
| Returns the average connection rate from the incoming connection's source |
| address in the current proxy's stick-table or in the designated stick-table, |
| measured in amount of connections over the period configured in the table. If |
| the address is not found, zero is returned. See also sc/sc0/sc1/sc2_conn_rate. |
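| |
| For instance, the following sketch (table parameters are only an example) |
| tracks source addresses and rejects those opening connections too fast : |
| |
| Example: |
| frontend ft_web |
| bind :80 |
| stick-table type ip size 200k expire 30s store conn_rate(10s) |
| tcp-request connection track-sc0 src |
| tcp-request connection reject if { src_conn_rate gt 20 } |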
| |
| src_get_gpc(<idx>,[<table>]) : integer |
| Returns the value of the General Purpose Counter at the index <idx> of the |
| array associated to the incoming connection's source address in the |
| current proxy's stick-table or in the designated stick-table <table>. <idx> |
| is an integer between 0 and 99. |
| If the address is not found or there is no gpc stored at this index, zero |
| is returned. |
| This fetch applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). |
| See also sc_get_gpc and src_inc_gpc. |
| |
| src_get_gpc0([<table>]) : integer |
| Returns the value of the first General Purpose Counter associated to the |
| incoming connection's source address in the current proxy's stick-table or in |
| the designated stick-table. If the address is not found, zero is returned. |
| See also sc/sc0/sc1/sc2_get_gpc0 and src_inc_gpc0. |
| |
| src_get_gpc1([<table>]) : integer |
| Returns the value of the second General Purpose Counter associated to the |
| incoming connection's source address in the current proxy's stick-table or in |
| the designated stick-table. If the address is not found, zero is returned. |
| See also sc/sc0/sc1/sc2_get_gpc1 and src_inc_gpc1. |
| |
| src_get_gpt(<idx>[,<table>]) : integer |
| Returns the value of the first General Purpose Tag at the index <idx> of |
| the array associated to the incoming connection's source address in the |
| current proxy's stick-table or in the designated stick-table <table>. |
| <idx> is an integer between 0 and 99. |
| If the address is not found or the GPT is not stored, zero is returned. |
| See also the sc_get_gpt sample fetch keyword. |
| |
| src_get_gpt0([<table>]) : integer |
| Returns the value of the first General Purpose Tag associated to the |
| incoming connection's source address in the current proxy's stick-table or in |
| the designated stick-table. If the address is not found, zero is returned. |
| See also sc/sc0/sc1/sc2_get_gpt0. |
| |
| src_gpc_rate(<idx>[,<table>]) : integer |
| Returns the average increment rate of the General Purpose Counter at the |
| index <idx> of the array associated to the incoming connection's |
| source address in the current proxy's stick-table or in the designated |
| stick-table <table>. It reports the frequency at which the gpc counter was |
| incremented over the configured period. <idx> is an integer between 0 and 99. |
| Note that the 'gpc_rate' counter must be stored in the stick-table for a |
| value to be returned, as 'gpc' only holds the event count. |
| This fetch applies only to the 'gpc_rate' array data_type (and not to |
| the legacy 'gpc0_rate' nor 'gpc1_rate' data_types). |
| See also sc_gpc_rate, src_get_gpc, and sc_inc_gpc. |
| |
| src_gpc0_rate([<table>]) : integer |
| Returns the average increment rate of the first General Purpose Counter |
| associated to the incoming connection's source address in the current proxy's |
| stick-table or in the designated stick-table. It reports the frequency at |
| which the gpc0 counter was incremented over the configured period. See also |
| sc/sc0/sc1/sc2_gpc0_rate, src_get_gpc0, and sc/sc0/sc1/sc2_inc_gpc0. Note |
| that the "gpc0_rate" counter must be stored in the stick-table for a value to |
| be returned, as "gpc0" only holds the event count. |
| |
| src_gpc1_rate([<table>]) : integer |
| Returns the average increment rate of the second General Purpose Counter |
| associated to the incoming connection's source address in the current proxy's |
| stick-table or in the designated stick-table. It reports the frequency at |
| which the gpc1 counter was incremented over the configured period. See also |
| sc/sc0/sc1/sc2_gpc1_rate, src_get_gpc1, and sc/sc0/sc1/sc2_inc_gpc1. Note |
| that the "gpc1_rate" counter must be stored in the stick-table for a value to |
| be returned, as "gpc1" only holds the event count. |
| |
| src_http_err_cnt([<table>]) : integer |
| Returns the cumulative number of HTTP errors from the incoming connection's |
| source address in the current proxy's stick-table or in the designated |
| stick-table. This includes both request errors and 4xx error responses. |
| See also sc/sc0/sc1/sc2_http_err_cnt. If the address is not found, zero is |
| returned. |
| |
| src_http_err_rate([<table>]) : integer |
| Returns the average rate of HTTP errors from the incoming connection's source |
| address in the current proxy's stick-table or in the designated stick-table, |
| measured in amount of errors over the period configured in the table. This |
| includes both request errors and 4xx error responses. If the address is |
| not found, zero is returned. See also sc/sc0/sc1/sc2_http_err_rate. |
| |
| src_http_fail_cnt([<table>]) : integer |
| Returns the cumulative number of HTTP response failures triggered by the |
| incoming connection's source address in the current proxy's stick-table or in |
| the designated stick-table. This includes both response errors and 5xx |
| status codes other than 501 and 505. See also sc/sc0/sc1/sc2_http_fail_cnt. |
| If the address is not found, zero is returned. |
| |
| src_http_fail_rate([<table>]) : integer |
| Returns the average rate of HTTP response failures triggered by the incoming |
| connection's source address in the current proxy's stick-table or in the |
| designated stick-table, measured in amount of failures over the period |
| configured in the table. This includes both response errors and 5xx |
| status codes other than 501 and 505. If the address is not found, zero is |
| returned. See also sc/sc0/sc1/sc2_http_fail_rate. |
| |
| src_http_req_cnt([<table>]) : integer |
| Returns the cumulative number of HTTP requests from the incoming connection's |
| source address in the current proxy's stick-table or in the designated stick- |
| table. This includes every started request, valid or not. If the address is |
| not found, zero is returned. See also sc/sc0/sc1/sc2_http_req_cnt. |
| |
| src_http_req_rate([<table>]) : integer |
| Returns the average rate of HTTP requests from the incoming connection's |
| source address in the current proxy's stick-table or in the designated stick- |
| table, measured in amount of requests over the period configured in the |
| table. This includes every started request, valid or not. If the address is |
| not found, zero is returned. See also sc/sc0/sc1/sc2_http_req_rate. |
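| |
| For instance, the following sketch (table and backend names are illustrative) |
| returns a 429 status to clients exceeding 100 requests per 10 seconds : |
| |
| Example: |
| backend st_req |
| stick-table type ip size 100k expire 10m store http_req_rate(10s) |
| |
| frontend ft_web |
| bind :80 |
| http-request track-sc0 src table st_req |
| http-request deny deny_status 429 if { src_http_req_rate(st_req) gt 100 } |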
| |
| src_inc_gpc(<idx>,[<table>]) : integer |
| Increments the General Purpose Counter at index <idx> of the array |
| associated to the incoming connection's source address in the current proxy's |
| stick-table or in the designated stick-table <table>, and returns its new |
| value. <idx> is an integer between 0 and 99. |
| If the address is not found, an entry is created and 1 is returned. |
| This fetch applies only to the 'gpc' array data_type (and not to the legacy |
| 'gpc0' nor 'gpc1' data_types). |
| See also sc_inc_gpc. |
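| |
| It may be combined with a rate check in the same way as the legacy gpc0 |
| example below; this sketch assumes a stick-table declaring "store gpc(1)" : |
| |
| Example: |
| acl abuse src_http_req_rate gt 10 |
| acl kill src_inc_gpc(0) gt 0 |
| tcp-request connection reject if abuse kill |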
| |
| src_inc_gpc0([<table>]) : integer |
| Increments the first General Purpose Counter associated to the incoming |
| connection's source address in the current proxy's stick-table or in the |
| designated stick-table, and returns its new value. If the address is not |
| found, an entry is created and 1 is returned. See also sc0/sc1/sc2_inc_gpc0. |
| This is typically used as a second ACL in an expression in order to mark a |
| connection when a first ACL was verified : |
| |
| Example: |
| acl abuse src_http_req_rate gt 10 |
| acl kill src_inc_gpc0 gt 0 |
| tcp-request connection reject if abuse kill |
| |
| src_inc_gpc1([<table>]) : integer |
| Increments the second General Purpose Counter associated to the incoming |
| connection's source address in the current proxy's stick-table or in the |
| designated stick-table, and returns its new value. If the address is not |
| found, an entry is created and 1 is returned. See also sc0/sc1/sc2_inc_gpc1. |
| This is typically used as a second ACL in an expression in order to mark a |
| connection when a first ACL was verified. |
| |
| src_is_local : boolean |
| Returns true if the source address of the incoming connection is local to the |
| system, or false if the address doesn't exist on the system, meaning that it |
| comes from a remote machine. Note that UNIX addresses are considered local. |
| It can be useful to apply certain access restrictions based on where the |
| client comes from (e.g. require auth or https for remote machines). Please |
| note that the check involves a few system calls, so it's better to do it only |
| once per connection. |
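| |
| For instance, the following sketch redirects plain HTTP requests coming from |
| remote clients to HTTPS while leaving local probes untouched : |
| |
| Example: |
| http-request redirect scheme https if !{ ssl_fc } !{ src_is_local } |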
| |
| src_kbytes_in([<table>]) : integer |
| Returns the total amount of data received from the incoming connection's |
| source address in the current proxy's stick-table or in the designated |
| stick-table, measured in kilobytes. If the address is not found, zero is |
| returned. The test is currently performed on 32-bit integers, which limits |
| values to 4 terabytes. See also sc/sc0/sc1/sc2_kbytes_in. |
| |
| src_kbytes_out([<table>]) : integer |
| Returns the total amount of data sent to the incoming connection's source |
| address in the current proxy's stick-table or in the designated stick-table, |
| measured in kilobytes. If the address is not found, zero is returned. The |
| test is currently performed on 32-bit integers, which limits values to 4 |
| terabytes. See also sc/sc0/sc1/sc2_kbytes_out. |
| |
| src_port : integer |
| Returns an integer value corresponding to the TCP source port of the |
| connection on the client side, which is the port the client connected |
| from. Any tcp/http rules may alter this port. Usage of this function is |
| very limited as modern protocols do not care much about source ports |
| nowadays. |
| |
| src_sess_cnt([<table>]) : integer |
| Returns the cumulative number of connections initiated from the incoming |
| connection's source address in the current proxy's stick-table or in the |
| designated stick-table, that were transformed into sessions, which means that |
| they were accepted by "tcp-request" rules. If the address is not found, zero |
| is returned. See also sc/sc0/sc1/sc2_sess_cnt. |
| |
| src_sess_rate([<table>]) : integer |
| Returns the average session rate from the incoming connection's source |
| address in the current proxy's stick-table or in the designated stick-table, |
| measured in amount of sessions over the period configured in the table. A |
| session is a connection that went past the early "tcp-request" rules. If the |
| address is not found, zero is returned. See also sc/sc0/sc1/sc2_sess_rate. |
| |
| src_updt_conn_cnt([<table>]) : integer |
| Creates or updates the entry associated to the incoming connection's source |
| address in the current proxy's stick-table or in the designated stick-table. |
| This table must be configured to store the "conn_cnt" data type, otherwise |
| the match will be ignored. The current count is incremented by one, and the |
| expiration timer refreshed. The updated count is returned, so this match |
| can't return zero. This was used to reject service abusers based on their |
| source address. Note: it is recommended to use the more complete "track-sc*" |
| actions in "tcp-request" rules instead. |
| |
| Example : |
| # This frontend limits incoming SSH connections to 3 per 10 seconds for |
| # each source address, and rejects excess connections until a 10 second |
| # silence is observed. At most 20 addresses are tracked. |
| listen ssh |
| bind :22 |
| mode tcp |
| maxconn 100 |
| stick-table type ip size 20 expire 10s store conn_cnt |
| tcp-request content reject if { src_updt_conn_cnt gt 3 } |
| server local 127.0.0.1:22 |
| |
| srv_id : integer |
| Returns an integer containing the server's id when processing the response. |
| While it's almost only used with ACLs, it may be used for logging or |
| debugging. It can also be used in a tcp-check or an http-check ruleset. |
| |
| srv_name : string |
| Returns a string containing the server's name when processing the response. |
| While it's almost only used with ACLs, it may be used for logging or |
| debugging. It can also be used in a tcp-check or an http-check ruleset. |
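| |
| For instance, the name of the server which produced the response may be |
| exposed to the client (the header name is only illustrative) : |
| |
| Example: |
| http-response set-header X-Served-By %[srv_name] |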
| |
| 7.3.4. Fetching samples at Layer 5 |
| ---------------------------------- |
| |
| Layer 5 here describes just the session layer, which in HAProxy corresponds |
| to the moment all the connection handshakes are finished but no content has |
| been made available yet. The fetch methods described here are usable as low |
| as the "tcp-request content" rule sets unless they require some future |
| information. They generally include the results of SSL negotiations. |
| |
| 51d.all(<prop>[,<prop>*]) : string |
| Returns values for the properties requested as a string, where values are |
| separated by the delimiter specified with "51degrees-property-separator". |
| The device is identified using all the important HTTP headers from the |
| request. The function can be passed up to five property names, and if a |
| property name can't be found, the value "NoData" is returned. |
| |
| Example : |
| # Here the header "X-51D-DeviceTypeMobileTablet" is added to the request |
| # containing the three properties requested using all relevant headers from |
| # the request. |
| frontend http-in |
| bind *:8081 |
| default_backend servers |
| http-request set-header X-51D-DeviceTypeMobileTablet \ |
| %[51d.all(DeviceType,IsMobile,IsTablet)] |
| |
| ssl_bc : boolean |
| Returns true when the back connection was made via an SSL/TLS transport |
| layer and is locally deciphered. This means the outgoing connection was made |
| to a server with the "ssl" option. It can be used in a tcp-check or an |
| http-check ruleset. |
| |
| ssl_bc_alg_keysize : integer |
| Returns the symmetric cipher key size supported in bits when the outgoing |
| connection was made over an SSL/TLS transport layer. It can be used in a |
| tcp-check or an http-check ruleset. |
| |
| ssl_bc_alpn : string |
| This extracts the Application Layer Protocol Negotiation field from an |
| outgoing connection made via a TLS transport layer. |
| The result is a string containing the protocol name negotiated with the |
| server. The SSL library must have been built with support for TLS |
| extensions enabled (check haproxy -vv). Note that the TLS ALPN extension is |
| not advertised unless the "alpn" keyword on the "server" line specifies a |
| protocol list. Also, nothing forces the server to pick a protocol from this |
| list, any other one may be requested. The TLS ALPN extension is meant to |
| replace the TLS NPN extension. See also "ssl_bc_npn". It can be used in a |
| tcp-check or an http-check ruleset. |
| |
| ssl_bc_cipher : string |
| Returns the name of the used cipher when the outgoing connection was made |
| over an SSL/TLS transport layer. It can be used in a tcp-check or an |
| http-check ruleset. |
| |
| ssl_bc_client_random : binary |
| Returns the client random of the back connection when the outgoing |
| connection was made over an SSL/TLS transport layer. It is useful to decrypt |
| traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or |
| BoringSSL. It can be used in a tcp-check or an http-check ruleset. |
| |
| ssl_bc_err : integer |
| When the outgoing connection was made over an SSL/TLS transport layer, |
| returns the ID of the last error of the first error stack raised on the |
| backend side. It can raise handshake errors as well as other read or write |
| errors occurring during the connection's lifetime. In order to get a text |
| description of this error code, you can either use the "ssl_bc_err_str" |
| sample fetch or use the "openssl errstr" command (which takes an error code |
| in hexadecimal representation as parameter). Please refer to your SSL |
| library's documentation to find the exhaustive list of error codes. |
| |
| ssl_bc_err_str : string |
| When the outgoing connection was made over an SSL/TLS transport layer, |
| returns a string representation of the last error of the first error stack |
| that was raised on the connection from the backend's perspective. See also |
| "ssl_fc_err". |
| |
| ssl_bc_is_resumed : boolean |
| Returns true when the back connection was made over an SSL/TLS transport |
| layer and the newly created SSL session was resumed using a cached |
| session or a TLS ticket. It can be used in a tcp-check or an http-check |
| ruleset. |
| |
| ssl_bc_npn : string |
| This extracts the Next Protocol Negotiation field from an outgoing connection |
| made via a TLS transport layer. The result is a string containing the |
| protocol name negotiated with the server. The SSL library must have been |
| built with support for TLS extensions enabled (check haproxy -vv). Note that |
| the TLS NPN extension is not advertised unless the "npn" keyword on the |
| "server" line specifies a protocol list. Also, nothing forces the server to |
| pick a protocol from this list, any other one may be used. Please note that |
| the TLS NPN extension was replaced with ALPN. It can be used in a tcp-check |
| or an http-check ruleset. |
| |
| ssl_bc_protocol : string |
| Returns the name of the used protocol when the outgoing connection was made |
| over an SSL/TLS transport layer. It can be used in a tcp-check or an |
| http-check ruleset. |
| |
| ssl_bc_unique_id : binary |
| When the outgoing connection was made over an SSL/TLS transport layer, |
| returns the TLS unique ID as defined in RFC5929 section 3. The unique id |
| can be encoded to base64 using the converter: "ssl_bc_unique_id,base64". It |
| can be used in a tcp-check or an http-check ruleset. |
| |
| ssl_bc_server_random : binary |
| Returns the server random of the back connection when the outgoing |
| connection was made over an SSL/TLS transport layer. It is useful to decrypt |
| traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or |
| BoringSSL. It can be used in a tcp-check or an http-check ruleset. |
| |
| ssl_bc_session_id : binary |
| Returns the SSL ID of the back connection when the outgoing connection was |
| made over an SSL/TLS transport layer. It is useful to log if we want to know |
| if session was reused or not. It can be used in a tcp-check or an http-check |
| ruleset. |
| |
| ssl_bc_session_key : binary |
| Returns the SSL session master key of the back connection when the outgoing |
| connection was made over an SSL/TLS transport layer. It is useful to decrypt |
| traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or |
| BoringSSL. It can be used in a tcp-check or an http-check ruleset. |
| |
| ssl_bc_use_keysize : integer |
| Returns the symmetric cipher key size used in bits when the outgoing |
| connection was made over an SSL/TLS transport layer. It can be used in a |
| tcp-check or an http-check ruleset. |
| |
| ssl_c_ca_err : integer |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the ID of the first error detected during verification of the client |
| certificate at depth > 0, or 0 if no error was encountered during this |
| verification process. Please refer to your SSL library's documentation to |
| find the exhaustive list of error codes. |
| |
| ssl_c_ca_err_depth : integer |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the depth in the CA chain of the first error detected during the |
| verification of the client certificate. If no error is encountered, 0 is |
| returned. |
| |
| ssl_c_chain_der : binary |
| Returns the DER formatted chain certificate presented by the client when the |
| incoming connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. One |
| can parse the result with any lib accepting ASN.1 DER data. It currently |
| does not support resumed sessions. |
| |
| ssl_c_der : binary |
| Returns the DER formatted certificate presented by the client when the |
| incoming connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. |
| |
| ssl_c_err : integer |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the ID of the first error detected during verification at depth 0, or |
| 0 if no error was encountered during this verification process. Please refer |
| to your SSL library's documentation to find the exhaustive list of error |
| codes. |
| |
| ssl_c_i_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the full distinguished name of the issuer of the certificate |
| presented by the client when no <entry> is specified, or the value of the |
| first given entry found from the beginning of the DN. If a positive/negative |
| occurrence number is specified as the optional second argument, it returns |
| the value of the nth given entry value from the beginning/end of the DN. |
| For instance, "ssl_c_i_dn(OU,2)" the second organization unit, and |
| "ssl_c_i_dn(CN)" retrieves the common name. |
| The <format> parameter allows you to receive the DN suitable for |
| consumption by different protocols. Currently supported is rfc2253 for |
| LDAP v3. |
| If you'd like to modify the format only you can specify an empty string |
| and zero for the first two parameters. Example: ssl_c_i_dn(,0,rfc2253) |
| |
| ssl_c_key_alg : string |
| Returns the name of the algorithm used to generate the key of the certificate |
| presented by the client when the incoming connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_c_notafter : string |
| Returns the end date presented by the client as a formatted string |
| YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_c_notbefore : string |
| Returns the start date presented by the client as a formatted string |
| YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_c_r_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the incoming connection was made over an SSL/TLS transport layer, and is |
| successfully validated with the configured ca-file, returns the full |
| distinguished name of the root CA of the certificate presented by the client |
| when no <entry> is specified, or the value of the first given entry found from |
| the beginning of the DN. If a positive/negative occurrence number is specified |
| as the optional second argument, it returns the value of the nth given entry |
| value from the beginning/end of the DN. "ssl_c_r_dn(OU,2)" retrieves the |
| second organization unit and "ssl_c_r_dn(CN)" retrieves the common name. The |
| <format> parameter allows you to receive the DN suitable for consumption by |
| different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like |
| to modify the format only you can specify an empty string and zero for the |
| first two parameters. Example: ssl_c_r_dn(,0,rfc2253) |
| |
| ssl_c_s_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the full distinguished name of the subject of the certificate |
| presented by the client when no <entry> is specified, or the value of the |
| first given entry found from the beginning of the DN. If a positive/negative |
| occurrence number is specified as the optional second argument, it returns |
| the value of the nth given entry value from the beginning/end of the DN. |
| For instance, "ssl_c_s_dn(OU,2)" the second organization unit, and |
| "ssl_c_s_dn(CN)" retrieves the common name. |
| The <format> parameter allows you to receive the DN suitable for |
| consumption by different protocols. Currently supported is rfc2253 for |
| LDAP v3. |
| If you'd like to modify the format only you can specify an empty string |
| and zero for the first two parameters. Example: ssl_c_s_dn(,0,rfc2253) |
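| |
| A common use is to forward the client certificate's subject, or only its CN, |
| to the server (the header names are only illustrative) : |
| |
| Example: |
| http-request set-header X-SSL-Client-DN %[ssl_c_s_dn] |
| http-request set-header X-SSL-Client-CN %[ssl_c_s_dn(CN)] |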
| |
| ssl_c_serial : binary |
| Returns the serial of the certificate presented by the client when the |
| incoming connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. |
| |
| ssl_c_sha1 : binary |
| Returns the SHA-1 fingerprint of the certificate presented by the client when |
| the incoming connection was made over an SSL/TLS transport layer. This can be |
| used to stick a client to a server, or to pass this information to a server. |
| Note that the output is binary, so if you want to pass that signature to the |
| server, you need to encode it in hex or base64, such as in the example below: |
| |
| Example: |
| http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex] |
| |
| ssl_c_sig_alg : string |
| Returns the name of the algorithm used to sign the certificate presented by |
| the client when the incoming connection was made over an SSL/TLS transport |
| layer. |
| |
| ssl_c_used : boolean |
| Returns true if current SSL session uses a client certificate even if current |
| connection uses SSL session resumption. See also "ssl_fc_has_crt". |
| |
| ssl_c_verify : integer |
| Returns the verify result error ID when the incoming connection was made over |
| an SSL/TLS transport layer, otherwise zero if no error is encountered. Please |
| refer to your SSL library's documentation for an exhaustive list of error |
| codes. |
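| |
| For instance, with "verify optional" on the "bind" line, the following sketch |
| rejects requests whose client certificate failed verification : |
| |
| Example: |
| http-request deny if { ssl_c_used } !{ ssl_c_verify 0 } |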
| |
| ssl_c_version : integer |
| Returns the version of the certificate presented by the client when the |
| incoming connection was made over an SSL/TLS transport layer. |
| |
| ssl_f_der : binary |
| Returns the DER formatted certificate presented by the frontend when the |
| incoming connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. |
| |
| ssl_f_i_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the full distinguished name of the issuer of the certificate |
| presented by the frontend when no <entry> is specified, or the value of the |
| first given entry found from the beginning of the DN. If a positive/negative |
| occurrence number is specified as the optional second argument, it returns |
| the value of the nth given entry value from the beginning/end of the DN. |
| For instance, "ssl_f_i_dn(OU,2)" the second organization unit, and |
| "ssl_f_i_dn(CN)" retrieves the common name. |
| The <format> parameter allows you to receive the DN suitable for |
| consumption by different protocols. Currently supported is rfc2253 for |
| LDAP v3. |
| If you'd like to modify the format only you can specify an empty string |
| and zero for the first two parameters. Example: ssl_f_i_dn(,0,rfc2253) |
| |
| ssl_f_key_alg : string |
| Returns the name of the algorithm used to generate the key of the certificate |
| presented by the frontend when the incoming connection was made over an |
| SSL/TLS transport layer. |
| |
| ssl_f_notafter : string |
| Returns the end date presented by the frontend as a formatted string |
| YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_f_notbefore : string |
| Returns the start date presented by the frontend as a formatted string |
| YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_f_s_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the full distinguished name of the subject of the certificate |
| presented by the frontend when no <entry> is specified, or the value of the |
| first given entry found from the beginning of the DN. If a positive/negative |
| occurrence number is specified as the optional second argument, it returns |
| the value of the nth given entry value from the beginning/end of the DN. |
| For instance, "ssl_f_s_dn(OU,2)" the second organization unit, and |
| "ssl_f_s_dn(CN)" retrieves the common name. |
| The <format> parameter allows you to receive the DN suitable for |
| consumption by different protocols. Currently supported is rfc2253 for |
| LDAP v3. |
| If you'd like to modify the format only you can specify an empty string |
| and zero for the first two parameters. Example: ssl_f_s_dn(,0,rfc2253) |
| |
| ssl_f_serial : binary |
| Returns the serial of the certificate presented by the frontend when the |
| incoming connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. |
| |
| ssl_f_sha1 : binary |
| Returns the SHA-1 fingerprint of the certificate presented by the frontend |
| when the incoming connection was made over an SSL/TLS transport layer. This |
| can be used to know which certificate was chosen using SNI. |
| |
| ssl_f_sig_alg : string |
| Returns the name of the algorithm used to sign the certificate presented by |
| the frontend when the incoming connection was made over an SSL/TLS transport |
| layer. |
| |
| ssl_f_version : integer |
| Returns the version of the certificate presented by the frontend when the |
| incoming connection was made over an SSL/TLS transport layer. |
| |
| ssl_fc : boolean |
| Returns true when the front connection was made via an SSL/TLS transport |
| layer and is locally deciphered. This means it has matched a socket declared |
| with a "bind" line having the "ssl" option. |
| |
| Example : |
| # This passes "X-Proto: https" to servers when client connects over SSL |
| listen http-https |
| bind :80 |
| bind :443 ssl crt /etc/haproxy.pem |
| http-request add-header X-Proto https if { ssl_fc } |
| |
| ssl_fc_alg_keysize : integer |
| Returns the symmetric cipher key size supported in bits when the incoming |
| connection was made over an SSL/TLS transport layer. |
| |
| ssl_fc_alpn : string |
| This extracts the Application Layer Protocol Negotiation field from an |
| incoming connection made via a TLS transport layer and locally deciphered by |
| HAProxy. The result is a string containing the protocol name advertised by |
| the client. The SSL library must have been built with support for TLS |
| extensions enabled (check haproxy -vv). Note that the TLS ALPN extension is |
| not advertised unless the "alpn" keyword on the "bind" line specifies a |
| protocol list. Also, nothing forces the client to pick a protocol from this |
| list, any other one may be requested. The TLS ALPN extension is meant to |
| replace the TLS NPN extension. See also "ssl_fc_npn". |
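| |
| For instance, assuming a "bind" line advertising "alpn h2,http/1.1", the |
| negotiated protocol may drive backend selection (names and certificate path |
| below are illustrative) : |
| |
| Example: |
| frontend ft_tls |
| bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1 |
| use_backend bk_h2 if { ssl_fc_alpn -i h2 } |
| default_backend bk_h1 |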
| |
| ssl_fc_cipher : string |
| Returns the name of the used cipher when the incoming connection was made |
| over an SSL/TLS transport layer. |
| |
| ssl_fc_cipherlist_bin([<filter_option>]) : binary |
| Returns the binary form of the client hello cipher list. The maximum |
| returned value length is limited by the shared capture buffer size |
| controlled by "tune.ssl.capture-buffer-size" setting. Setting |
| <filter_option> allows to filter returned data. Accepted values: |
| 0 : return the full list of ciphers (default) |
| 1 : exclude GREASE (RFC8701) values from the output |
| |
| Example: |
| http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ |
| %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_ecformats_bin,be2dec(-,1)] |
| acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ |
| -f /path/to/file/with/malware-ja3.lst |
| http-request set-header X-Malware True if is_malware |
| http-request set-header X-Malware False if !is_malware |
| |
| ssl_fc_cipherlist_hex([<filter_option>]) : string |
| Returns the binary form of the client hello cipher list encoded as |
| hexadecimal. The maximum returned value length is limited by the shared |
| capture buffer size controlled by the "tune.ssl.capture-buffer-size" setting. |
| Setting <filter_option> allows filtering the returned data. Accepted values: |
| 0 : return the full list of ciphers (default) |
| 1 : exclude GREASE (RFC8701) values from the output |
| |
| ssl_fc_cipherlist_str([<filter_option>]) : string |
| Returns the decoded text form of the client hello cipher list. The maximum |
| returned value length is limited by the shared capture buffer size |
| controlled by "tune.ssl.capture-buffer-size" setting. Setting |
| <filter_option> allows to filter returned data. Accepted values: |
| 0 : return the full list of ciphers (default) |
| 1 : exclude GREASE (RFC8701) values from the output |
| Note that this sample-fetch is only available with OpenSSL >= 1.0.2. If the |
| function is not available, this sample-fetch returns the hash instead, like |
| "ssl_fc_cipherlist_xxh" does. |
| |
| ssl_fc_cipherlist_xxh : integer |
| Returns an xxh64 hash of the cipher list. This hash is only returned if |
| "tune.ssl.capture-buffer-size" is set greater than 0; however, the hash takes |
| into account all the data of the cipher list. |
| |
| ssl_fc_ecformats_bin : binary |
| Return the binary form of the client hello supported elliptic curve point |
| formats. The maximum returned value length is limited by the shared capture |
| buffer size controlled by "tune.ssl.capture-buffer-size" setting. |
| |
| Example: |
| http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ |
| %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_ecformats_bin,be2dec(-,1)] |
| acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ |
| -f /path/to/file/with/malware-ja3.lst |
| http-request set-header X-Malware True if is_malware |
| http-request set-header X-Malware False if !is_malware |
| |
| ssl_fc_eclist_bin([<filter_option>]) : binary |
| Returns the binary form of the client hello supported elliptic curves. The |
| maximum returned value length is limited by the shared capture buffer size |
| controlled by "tune.ssl.capture-buffer-size" setting. Setting |
| <filter_option> allows to filter returned data. Accepted values: |
| 0 : return the full list of supported elliptic curves (default) |
| 1 : exclude GREASE (RFC8701) values from the output |
| |
| Example: |
| http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ |
| %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_ecformats_bin,be2dec(-,1)] |
| acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ |
| -f /path/to/file/with/malware-ja3.lst |
| http-request set-header X-Malware True if is_malware |
| http-request set-header X-Malware False if !is_malware |
| |
| ssl_fc_extlist_bin([<filter_option>]) : binary |
| Returns the binary form of the client hello extension list. The maximum |
| returned value length is limited by the shared capture buffer size |
| controlled by "tune.ssl.capture-buffer-size" setting. Setting |
| <filter_option> allows to filter returned data. Accepted values: |
| 0 : return the full list of extensions (default) |
| 1 : exclude GREASE (RFC8701) values from the output |
| |
| Example: |
| http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ |
| %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_ecformats_bin,be2dec(-,1)] |
| acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ |
| -f /path/to/file/with/malware-ja3.lst |
| http-request set-header X-Malware True if is_malware |
| http-request set-header X-Malware False if !is_malware |
| |
| ssl_fc_client_random : binary |
| Returns the client random of the front connection when the incoming connection |
| was made over an SSL/TLS transport layer. It is useful to decrypt traffic |
| sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. |
| |
| ssl_fc_client_early_traffic_secret : string |
| Return the CLIENT_EARLY_TRAFFIC_SECRET as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_client_handshake_traffic_secret : string |
| Return the CLIENT_HANDSHAKE_TRAFFIC_SECRET as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_client_traffic_secret_0 : string |
| Return the CLIENT_TRAFFIC_SECRET_0 as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_exporter_secret : string |
| Return the EXPORTER_SECRET as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_early_exporter_secret : string |
| Return the EARLY_EXPORTER_SECRET as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_err : integer |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the ID of the last error of the first error stack raised on the |
| frontend side, or 0 if no error was encountered. It can be used to identify |
| handshake related errors other than verify ones (such as cipher mismatch), as |
| well as other read or write errors occurring during the connection's |
| lifetime. Any error happening during the client's certificate verification |
| process will not be raised through this fetch but via the existing |
| "ssl_c_err", "ssl_c_ca_err" and "ssl_c_ca_err_depth" fetches. In order to get |
| a text description of this error code, you can either use the |
| "ssl_fc_err_str" sample fetch or use the "openssl errstr" command (which |
| takes an error code in hexadecimal representation as parameter). Please refer |
| to your SSL library's documentation to find the exhaustive list of error |
| codes. |
| |
| ssl_fc_err_str : string |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns a string representation of the last error of the first error stack |
| that was raised on the frontend side. Any error happening during the client's |
| certificate verification process will not be raised through this fetch. See |
| also "ssl_fc_err". |
| |
| ssl_fc_has_crt : boolean |
| Returns true if a client certificate is present in an incoming connection over |
| SSL/TLS transport layer. Useful if 'verify' statement is set to 'optional'. |
| Note: on SSL session resumption with Session ID or TLS ticket, client |
| certificate is not present in the current connection but may be retrieved |
| from the cache or the ticket. So prefer "ssl_c_used" if you want to check if |
| current SSL session uses a client certificate. |
| |
| ssl_fc_has_early : boolean |
| Returns true if early data were sent and the handshake did not complete yet. |
| As early data have security implications, it can be useful to refuse them or |
| to wait until the handshake has completed before processing the request. |
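| |
| For instance, a conservative sketch simply waits for the handshake to |
| complete whenever early data were received : |
| |
| Example: |
| http-request wait-for-handshake if { ssl_fc_has_early } |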
| |
| ssl_fc_has_sni : boolean |
| This checks for the presence of a Server Name Indication TLS extension (SNI) |
| in an incoming connection made over an SSL/TLS transport layer. Returns |
| true when the incoming connection presents a TLS SNI field. This requires |
| that the SSL library is built with support for TLS extensions enabled (check |
| haproxy -vv). |
| |
| ssl_fc_is_resumed : boolean |
| Returns true if the SSL/TLS session has been resumed through the use of |
| SSL session cache or TLS tickets on an incoming connection over an SSL/TLS |
| transport layer. |
| |
| ssl_fc_npn : string |
| This extracts the Next Protocol Negotiation field from an incoming connection |
| made via a TLS transport layer and locally deciphered by HAProxy. The result |
| is a string containing the protocol name advertised by the client. The SSL |
| library must have been built with support for TLS extensions enabled (check |
| haproxy -vv). Note that the TLS NPN extension is not advertised unless the |
| "npn" keyword on the "bind" line specifies a protocol list. Also, nothing |
| forces the client to pick a protocol from this list, any other one may be |
| requested. Please note that the TLS NPN extension was replaced with ALPN. |
| |
| ssl_fc_protocol : string |
| Returns the name of the used protocol when the incoming connection was made |
| over an SSL/TLS transport layer. |
| |
| ssl_fc_protocol_hello_id : integer |
| Returns the version of the TLS protocol by which the client wishes to |
| communicate during the session, as indicated in the client hello message. |
| This value is only returned if "tune.ssl.capture-buffer-size" is set greater |
| than 0. |
| |
| Example: |
| http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ |
| %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ |
| %[ssl_fc_ecformats_bin,be2dec(-,1)] |
| acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ |
| -f /path/to/file/with/malware-ja3.lst |
| http-request set-header X-Malware True if is_malware |
| http-request set-header X-Malware False if !is_malware |
| |
| ssl_fc_unique_id : binary |
| When the incoming connection was made over an SSL/TLS transport layer, |
| returns the TLS unique ID as defined in RFC5929 section 3. The unique id |
| can be encoded to base64 using the converter: "ssl_fc_unique_id,base64". |
| |
| ssl_fc_server_handshake_traffic_secret : string |
| Return the SERVER_HANDSHAKE_TRAFFIC_SECRET as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_server_traffic_secret_0 : string |
| Return the SERVER_TRAFFIC_SECRET_0 as an hexadecimal string for the |
| front connection when the incoming connection was made over a TLS 1.3 |
| transport layer. |
| Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL |
| keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be |
| activated with "tune.ssl.keylog on" in the global section. See also |
| "tune.ssl.keylog" |
| |
| ssl_fc_server_random : binary |
| Returns the server random of the front connection when the incoming connection |
| was made over an SSL/TLS transport layer. It is useful to decrypt traffic |
| sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. |
| |
| ssl_fc_session_id : binary |
| Returns the SSL ID of the front connection when the incoming connection was |
| made over an SSL/TLS transport layer. It is useful to stick a given client to |
| a server. It is important to note that some browsers refresh their session ID |
| every few minutes. |
| |
| ssl_fc_session_key : binary |
| Returns the SSL session master key of the front connection when the incoming |
| connection was made over an SSL/TLS transport layer. It is useful to decrypt |
| traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or |
| BoringSSL. |
| |
| ssl_fc_sni : string |
| This extracts the Server Name Indication TLS extension (SNI) field from an |
| incoming connection made via an SSL/TLS transport layer and locally |
| deciphered by HAProxy. The result (when present) typically is a string |
| matching the HTTPS host name (253 chars or less). The SSL library must have |
| been built with support for TLS extensions enabled (check haproxy -vv). |
| |
| This fetch is different from "req.ssl_sni" above in that it applies to the |
| connection being deciphered by HAProxy and not to SSL contents being blindly |
| forwarded. See also "ssl_fc_sni_end" and "ssl_fc_sni_reg" below. This |
| requires that the SSL library is built with support for TLS extensions |
| enabled (check haproxy -vv). |
| |
| CAUTION! Except under very specific conditions, it is normally not correct to |
| use this field as a substitute for the HTTP "Host" header field. For example, |
| when forwarding an HTTPS connection to a server, the SNI field must be set |
| from the HTTP Host header field using "req.hdr(host)" and not from the front |
| SNI value. The reason is that SNI is solely used to select the certificate |
| the server side will present, and that clients are then allowed to send |
| requests with different Host values as long as they match the names in the |
| certificate. As such, "ssl_fc_sni" should normally not be used as an argument |
| to the "sni" server keyword, unless the backend works in TCP mode. |
| |
| ACL derivatives : |
| ssl_fc_sni_end : suffix match |
| ssl_fc_sni_reg : regex match |
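| |
| For instance, a deciphering frontend may route by SNI (backend and domain |
| names below are only illustrative) : |
| |
| Example: |
| frontend ft_tls |
| bind :443 ssl crt /etc/haproxy/certs/ |
| use_backend bk_app if { ssl_fc_sni -i app.example.com } |
| use_backend bk_int if { ssl_fc_sni_end -i .internal.example.com } |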
| |
| ssl_fc_use_keysize : integer |
| Returns the symmetric cipher key size used in bits when the incoming |
| connection was made over an SSL/TLS transport layer. |
| |
| ssl_s_der : binary |
| Returns the DER formatted certificate presented by the server when the |
| outgoing connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. |
| |
| ssl_s_chain_der : binary |
| Returns the DER formatted chain certificate presented by the server when the |
| outgoing connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. One |
| can parse the result with any lib accepting ASN.1 DER data. It currently |
| does not support resumed sessions. |
| |
| ssl_s_key_alg : string |
| Returns the name of the algorithm used to generate the key of the certificate |
| presented by the server when the outgoing connection was made over an |
| SSL/TLS transport layer. |
| |
| ssl_s_notafter : string |
| Returns the end date presented by the server as a formatted string |
| YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_s_notbefore : string |
| Returns the start date presented by the server as a formatted string |
| YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS |
| transport layer. |
| |
| ssl_s_i_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the outgoing connection was made over an SSL/TLS transport layer, |
| returns the full distinguished name of the issuer of the certificate |
| presented by the server when no <entry> is specified, or the value of the |
| first given entry found from the beginning of the DN. If a positive/negative |
| occurrence number is specified as the optional second argument, it returns |
| the value of the nth given entry value from the beginning/end of the DN. |
| For instance, "ssl_s_i_dn(OU,2)" the second organization unit, and |
| "ssl_s_i_dn(CN)" retrieves the common name. |
| The <format> parameter allows you to receive the DN suitable for |
| consumption by different protocols. Currently supported is rfc2253 for |
| LDAP v3. |
| If you'd like to modify the format only you can specify an empty string |
| and zero for the first two parameters. Example: ssl_s_i_dn(,0,rfc2253) |
| |
| ssl_s_s_dn([<entry>[,<occ>[,<format>]]]) : string |
| When the outgoing connection was made over an SSL/TLS transport layer, |
| returns the full distinguished name of the subject of the certificate |
| presented by the server when no <entry> is specified, or the value of the |
| first given entry found from the beginning of the DN. If a positive/negative |
| occurrence number is specified as the optional second argument, it returns |
| the value of the nth given entry value from the beginning/end of the DN. |
| For instance, "ssl_s_s_dn(OU,2)" the second organization unit, and |
| "ssl_s_s_dn(CN)" retrieves the common name. |
| The <format> parameter allows you to receive the DN suitable for |
| consumption by different protocols. Currently supported is rfc2253 for |
| LDAP v3. |
| If you'd like to modify the format only you can specify an empty string |
| and zero for the first two parameters. Example: ssl_s_s_dn(,0,rfc2253) |
| |
| ssl_s_serial : binary |
| Returns the serial of the certificate presented by the server when the |
| outgoing connection was made over an SSL/TLS transport layer. When used for |
| an ACL, the value(s) to match against can be passed in hexadecimal form. |
| |
| ssl_s_sha1 : binary |
| Returns the SHA-1 fingerprint of the certificate presented by the server |
| when the outgoing connection was made over an SSL/TLS transport layer. This |
| can be used to know which certificate was chosen using SNI. |
| |
| ssl_s_sig_alg : string |
| Returns the name of the algorithm used to sign the certificate presented by |
| the server when the outgoing connection was made over an SSL/TLS transport |
| layer. |
| |
| ssl_s_version : integer |
| Returns the version of the certificate presented by the server when the |
| outgoing connection was made over an SSL/TLS transport layer. |
| |
| 7.3.5. Fetching samples from buffer contents (Layer 6) |
| ------------------------------------------------------ |
| |
| Fetching samples from buffer contents is a bit different from the previous |
| sample fetches because the sampled data are ephemeral. These data can |
| only be used when they're available and will be lost when they're forwarded. |
| For this reason, samples fetched from buffer contents during a request cannot |
| be used in a response for example. Even while the data are being fetched, they |
| can change. Sometimes it is necessary to set some delays or combine multiple |
| sample fetch methods to ensure that the expected data are complete and usable, |
| for example through TCP request content inspection. Please see the "tcp-request |
| content" keyword for more detailed information on the subject. |
| |
| Warning : The following sample fetches are ignored if used from HTTP proxies. |
| They only deal with raw contents found in the buffers. HTTP proxies, on |
| their side, use structured content, so the raw representation of these |
| data is meaningless there. A warning is emitted if an ACL relies on one |
| of the following sample fetches, but it is not possible to detect all |
| invalid usages (for instance inside a log-format string or a sample |
| expression). So be careful. |
| |
| distcc_body(<token>[,<occ>]) : binary |
| Parses a distcc message and returns the body associated to occurrence #<occ> |
| of the token <token>. Occurrences start at 1, and when unspecified, any may |
| match though in practice only the first one is checked for now. This can be |
| used to extract file names or arguments in files built using distcc through |
| HAProxy. Please refer to distcc's protocol documentation for the complete |
| list of supported tokens. |
| |
| distcc_param(<token>[,<occ>]) : integer |
| Parses a distcc message and returns the parameter associated to occurrence |
| #<occ> of the token <token>. Occurrences start at 1, and when unspecified, |
| any may match though in practice only the first one is checked for now. This |
| can be used to extract certain information such as the protocol version, the |
| file size or the argument in files built using distcc through HAProxy. |
| Another use case consists in waiting for the start of the preprocessed file |
| contents before connecting to the server to avoid keeping idle connections. |
| Please refer to distcc's protocol documentation for the complete list of |
| supported tokens. |
| |
| Example : |
| # wait up to 20s for the pre-processed file to be uploaded |
| tcp-request inspect-delay 20s |
| tcp-request content accept if { distcc_param(DOTI) -m found } |
| # send large files to the big farm |
| use_backend big_farm if { distcc_param(DOTI) gt 1000000 } |
| |
| payload(<offset>,<length>) : binary (deprecated) |
| This is an alias for "req.payload" when used in the context of a request (e.g. |
| "stick on", "stick match"), and for "res.payload" when used in the context of |
| a response such as in "stick store response". |
| |
| payload_lv(<offset1>,<length>[,<offset2>]) : binary (deprecated) |
| This is an alias for "req.payload_lv" when used in the context of a request |
| (e.g. "stick on", "stick match"), and for "res.payload_lv" when used in the |
| context of a response such as in "stick store response". |
| |
| req.len : integer |
| req_len : integer (deprecated) |
| Returns an integer value corresponding to the number of bytes present in the |
| request buffer. This is mostly used in ACL. It is important to understand |
| that this test does not return false as long as the buffer is changing. This |
| means that a check with equality to zero will almost always immediately match |
| at the beginning of the session, while a test for more data will wait for |
| that data to come in and return false only when HAProxy is certain that no |
| more data will come in. This test was designed to be used with TCP request |
| content inspection. |
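| |
| Illustrative sketch (the 4-byte threshold is an arbitrary example) : |
| # wait up to 5s for at least 4 bytes of payload before going further |
| tcp-request inspect-delay 5s |
| tcp-request content accept if { req.len ge 4 } |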
| |
| req.payload(<offset>,<length>) : binary |
| This extracts a binary block of <length> bytes, starting at byte <offset> |
| in the request buffer. As a special case, if the <length> argument is zero, |
| the whole buffer from <offset> to the end is extracted. This can be used |
| with ACLs in order to check for the presence of some content in a buffer at |
| any location. |
| |
| ACL derivatives : |
| req.payload(<offset>,<length>) : hex binary match |
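| |
| Illustrative sketch; the byte pattern below is just an example (it matches |
| the usual start of a TLS handshake record) : |
| # only accept connections whose first bytes look like a TLS handshake |
| acl looks_like_tls req.payload(0,3) -m bin 160301 |
| tcp-request inspect-delay 5s |
| tcp-request content accept if looks_like_tls |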
| |
| req.payload_lv(<offset1>,<length>[,<offset2>]) : binary |
| This extracts a binary block whose size is specified at <offset1> for <length> |
| bytes, and which starts at <offset2> if specified or just after the length in |
| the request buffer. The <offset2> parameter also supports relative offsets if |
| prepended with a '+' or '-' sign. |
| |
| ACL derivatives : |
| req.payload_lv(<offset1>,<length>[,<offset2>]) : hex binary match |
| |
| Example : please consult the example from the "stick store-response" keyword. |
| |
| req.proto_http : boolean |
| req_proto_http : boolean (deprecated) |
| Returns true when data in the request buffer look like HTTP and correctly |
| parse as such. It relies on the same parser as the common HTTP request |
| parser, so there should be no surprises. The test does not match until the |
| request is complete, failed or timed out. This test may be used to report the |
| protocol in TCP logs, but the biggest use is to block TCP request analysis |
| until a complete HTTP request is present in the buffer, for example to track |
| a header. |
| |
| Example: |
| # track request counts per "base" (concatenation of Host+URL) |
| tcp-request inspect-delay 10s |
| tcp-request content reject if !HTTP |
| tcp-request content track-sc0 base table req-rate |
| |
| req.rdp_cookie([<name>]) : string |
| rdp_cookie([<name>]) : string (deprecated) |
| When the request buffer looks like the RDP protocol, extracts the RDP cookie |
| <name>, or any cookie if unspecified. The parser only checks for the first |
| cookie, as illustrated in the RDP protocol specification. The cookie name is |
| case insensitive. Generally the "MSTS" cookie name will be used, as it can |
| contain the user name of the client connecting to the server if properly |
| configured on the client. The "MSTSHASH" cookie is often used as well for |
| session stickiness to servers. |
| |
| This differs from "balance rdp-cookie" in that any balancing algorithm may be |
| used and thus the distribution of clients to backend servers is not linked to |
| a hash of the RDP cookie. It is envisaged that using a balancing algorithm |
| such as "balance roundrobin" or "balance leastconn" will lead to a more even |
| distribution of clients to backend servers than the hash used by "balance |
| rdp-cookie". |
| |
| ACL derivatives : |
| req.rdp_cookie([<name>]) : exact string match |
| |
| Example : |
| listen tse-farm |
| bind 0.0.0.0:3389 |
| # wait up to 5s for an RDP cookie in the request |
| tcp-request inspect-delay 5s |
| tcp-request content accept if RDP_COOKIE |
| # apply RDP cookie persistence |
| persist rdp-cookie |
| # Persist based on the mstshash cookie |
| # This is only useful if |
| # balance rdp-cookie is not used |
| stick-table type string size 204800 |
| stick on req.rdp_cookie(mstshash) |
| server srv1 1.1.1.1:3389 |
| server srv2 1.1.1.2:3389 |
| |
| See also : "balance rdp-cookie", "persist rdp-cookie", "tcp-request" and the |
| "req.rdp_cookie" ACL. |
| |
| req.rdp_cookie_cnt([name]) : integer |
| rdp_cookie_cnt([name]) : integer (deprecated) |
| Tries to parse the request buffer as RDP protocol, then returns an integer |
| corresponding to the number of RDP cookies found. If an optional cookie name |
| is passed, only cookies matching this name are considered. This is mostly |
| used in ACL. |
| |
| ACL derivatives : |
| req.rdp_cookie_cnt([<name>]) : integer match |
| |
| req.ssl_alpn : string |
| Returns a string containing the values of the Application-Layer Protocol |
| Negotiation (ALPN) TLS extension (RFC7301), sent by the client within the SSL |
| ClientHello message. Note that this only applies to raw contents found in the |
| request buffer and not to the contents deciphered via an SSL data layer, so |
| this will not work with "bind" lines having the "ssl" option. This is useful |
| in ACL to make a routing decision based upon the ALPN preferences of a TLS |
| client, like in the example below. See also "ssl_fc_alpn". |
| |
| Examples : |
| # Wait for a client hello for at most 5 seconds |
| tcp-request inspect-delay 5s |
| tcp-request content accept if { req.ssl_hello_type 1 } |
| use_backend bk_acme if { req.ssl_alpn acme-tls/1 } |
| default_backend bk_default |
| |
| req.ssl_ec_ext : boolean |
| Returns a boolean identifying whether the client sent the Supported Elliptic |
| Curves Extension (RFC4492, section 5.1) within the SSL ClientHello |
| message. This can be used to present ECC compatible clients with EC |
| certificate and to use RSA for all others, on the same IP address. Note that |
| this only applies to raw contents found in the request buffer and not to |
| contents deciphered via an SSL data layer, so this will not work with "bind" |
| lines having the "ssl" option. |
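| |
| Illustrative sketch (backend names are placeholders) : |
| # send ECC-capable clients to servers configured with an EC certificate |
| tcp-request inspect-delay 5s |
| tcp-request content accept if { req.ssl_hello_type 1 } |
| use_backend bk_ecc if { req.ssl_ec_ext } |
| default_backend bk_rsa |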
| |
| req.ssl_hello_type : integer |
| req_ssl_hello_type : integer (deprecated) |
| Returns an integer value containing the type of the SSL hello message found |
| in the request buffer if the buffer contains data that parse as a complete |
| SSL (v3 or superior) client hello message. Note that this only applies to raw |
| contents found in the request buffer and not to contents deciphered via an |
| SSL data layer, so this will not work with "bind" lines having the "ssl" |
| option. This is mostly used in ACL to detect presence of an SSL hello message |
| that is supposed to contain an SSL session ID usable for stickiness. |
| |
| req.ssl_sni : string |
| req_ssl_sni : string (deprecated) |
| Returns a string containing the value of the Server Name TLS extension sent |
| by a client in a TLS stream passing through the request buffer if the buffer |
| contains data that parse as a complete SSL (v3 or superior) client hello |
| message. Note that this only applies to raw contents found in the request |
| buffer and not to contents deciphered via an SSL data layer, so this will not |
| work with "bind" lines having the "ssl" option. This will only work for actual |
| implicit TLS based protocols like HTTPS (443), IMAPS (993) or SMTPS (465); |
| it will not work for explicit TLS based protocols, like SMTP (25/587) |
| or IMAP (143). SNI normally contains the name of the host the client tries to |
| connect to (for recent browsers). SNI is useful for allowing or denying access |
| to certain hosts when SSL/TLS is used by the client. This test was designed to |
| be used with TCP request content inspection. If content switching is needed, |
| it is recommended to first wait for a complete client hello (type 1), like in |
| the example below. See also "ssl_fc_sni". |
| |
| ACL derivatives : |
| req.ssl_sni : exact string match |
| |
| Examples : |
| # Wait for a client hello for at most 5 seconds |
| tcp-request inspect-delay 5s |
| tcp-request content accept if { req.ssl_hello_type 1 } |
| use_backend bk_allow if { req.ssl_sni -f allowed_sites } |
| default_backend bk_sorry_page |
| |
| req.ssl_st_ext : integer |
| Returns 0 if the client didn't send a SessionTicket TLS Extension (RFC5077), |
| 1 if the client sent a SessionTicket TLS Extension, or 2 if the client also |
| sent a non-zero length TLS SessionTicket. |
| Note that this only applies to raw contents found in the request buffer and |
| not to contents deciphered via an SSL data layer, so this will not work with |
| "bind" lines having the "ssl" option. This can for example be used to detect |
| whether the client sent a SessionTicket and to stick accordingly: if no |
| SessionTicket was sent, stick on the SessionID, or do not stick at all since |
| there is no server-side state when SessionTickets are in use. |
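| |
| A possible sketch of the sticking logic described above, reusing the session |
| ID extraction shown in the "stick store-response" example (table parameters |
| are placeholders) : |
| tcp-request inspect-delay 5s |
| tcp-request content accept if { req.ssl_hello_type 1 } |
| stick-table type binary len 32 size 30k expire 30m |
| # only stick on the SSL session ID when no SessionTicket was sent |
| stick on payload_lv(43,1) if { req.ssl_st_ext 0 } |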
| |
| req.ssl_ver : integer |
| req_ssl_ver : integer (deprecated) |
| Returns an integer value containing the version of the SSL/TLS protocol of a |
| stream present in the request buffer. Both SSLv2 hello messages and SSLv3 |
| messages are supported. TLSv1 is announced as SSL version 3.1. The value is |
| composed of the major version multiplied by 65536, added to the minor |
| version. Note that this only applies to raw contents found in the request |
| buffer and not to contents deciphered via an SSL data layer, so this will not |
| work with "bind" lines having the "ssl" option. The ACL version of the test |
| matches against a decimal notation in the form MAJOR.MINOR (e.g. 3.1). This |
| fetch is mostly used in ACL. |
| |
| ACL derivatives : |
| req.ssl_ver : decimal match |
| |
| res.len : integer |
| Returns an integer value corresponding to the number of bytes present in the |
| response buffer. This is mostly used in ACL. It is important to understand |
| that this test does not return false as long as the buffer is changing. This |
| means that a check with equality to zero will almost always immediately match |
| at the beginning of the session, while a test for more data will wait for |
| that data to come in and return false only when HAProxy is certain that no |
| more data will come in. This test was designed to be used with TCP response |
| content inspection. But it may also be used in tcp-check based expect rules. |
| |
| res.payload(<offset>,<length>) : binary |
| This extracts a binary block of <length> bytes, starting at byte <offset> |
| in the response buffer. As a special case, if the <length> argument is zero, |
| the whole buffer from <offset> to the end is extracted. This can be used |
| with ACLs in order to check for the presence of some content in a buffer at |
| any location. It may also be used in tcp-check based expect rules. |
| |
| res.payload_lv(<offset1>,<length>[,<offset2>]) : binary |
| This extracts a binary block whose size is specified at <offset1> for <length> |
| bytes, and which starts at <offset2> if specified or just after the length in |
| the response buffer. The <offset2> parameter also supports relative offsets |
| if prepended with a '+' or '-' sign. It may also be used in tcp-check based |
| expect rules. |
| |
| Example : please consult the example from the "stick store-response" keyword. |
| |
| res.ssl_hello_type : integer |
| rep_ssl_hello_type : integer (deprecated) |
| Returns an integer value containing the type of the SSL hello message found |
| in the response buffer if the buffer contains data that parses as a complete |
| SSL (v3 or superior) hello message. Note that this only applies to raw |
| contents found in the response buffer and not to contents deciphered via an |
| SSL data layer, so this will not work with "server" lines having the "ssl" |
| option. This is mostly used in ACL to detect presence of an SSL hello message |
| that is supposed to contain an SSL session ID usable for stickiness. |
| |
| wait_end : boolean |
| This fetch either returns true when the inspection period is over, or does |
| not fetch. It is only used in ACLs, in conjunction with content analysis to |
| avoid returning a wrong verdict early. It may also be used to delay some |
| actions, such as a delayed reject for some special addresses. Since it either |
| stops the rules evaluation or immediately returns true, it is recommended to |
| use this acl as the last one in a rule. Please note that the default ACL |
| "WAIT_END" is always usable without prior declaration. This test was designed |
| to be used with TCP request content inspection. |
| |
| Examples : |
| # delay every incoming request by 2 seconds |
| tcp-request inspect-delay 2s |
| tcp-request content accept if WAIT_END |
| |
| # don't immediately tell bad guys they are rejected |
| tcp-request inspect-delay 10s |
| acl goodguys src 10.0.0.0/24 |
| acl badguys src 10.0.1.0/24 |
| tcp-request content accept if goodguys |
| tcp-request content reject if badguys WAIT_END |
| tcp-request content reject |
| |
| |
| 7.3.6. Fetching HTTP samples (Layer 7) |
| -------------------------------------- |
| |
| It is possible to fetch samples from HTTP contents, requests and responses. |
| This application layer is also called layer 7. It is only possible to fetch the |
| data in this section when a full HTTP request or response has been parsed from |
| its respective request or response buffer. This is always the case with all |
| HTTP specific rules and for sections running with "mode http". When using TCP |
| content inspection, it may be necessary to support an inspection delay in order |
| to let the request or response come in first. These fetches may require a bit |
| more CPU resources than the layer 4 ones, but not much since the request and |
| response are indexed. |
| |
| Note : Regarding HTTP processing from the tcp-request content rules, everything |
| will work as expected from an HTTP proxy. However, from a TCP proxy, |
| without an HTTP upgrade, it will only work for HTTP/1 content. For |
| HTTP/2 content, only the preface is visible. Thus, it is only possible |
| to rely on "req.proto_http", "req.ver" and possibly "method" sample |
| fetches. All other L7 sample fetches will fail. After an HTTP upgrade, |
| they will work in the same manner as from an HTTP proxy. |
| |
| base : string |
| This returns the concatenation of the first Host header and the path part of |
| the request, which starts at the first slash and ends before the question |
| mark. It can be useful in virtual hosted environments to detect URL abuses as |
| well as to improve shared caches efficiency. Using this with a limited size |
| stick table also allows one to collect statistics about most commonly |
| requested objects by host/path. With ACLs it can allow simple content |
| switching rules involving the host and the path at the same time, such as |
| "www.example.com/favicon.ico". See also "path" and "uri". |
| |
| ACL derivatives : |
| base : exact string match |
| base_beg : prefix match |
| base_dir : subdir match |
| base_dom : domain match |
| base_end : suffix match |
| base_len : length match |
| base_reg : regex match |
| base_sub : substring match |
| |
| base32 : integer |
| This returns a 32-bit hash of the value returned by the "base" fetch method |
| above. This is useful to track per-URL activity on high traffic sites without |
| having to store all URLs. Instead a shorter hash is stored, saving a lot of |
| memory. The output type is an unsigned integer. The hash function used is |
| SDBM with full avalanche on the output. Technically, base32 is exactly equal |
| to "base,sdbm(1)". |
| |
| base32+src : binary |
| This returns the concatenation of the base32 fetch above and the src fetch |
| below. The resulting type is of type binary, with a size of 8 or 20 bytes |
| depending on the source address family. This can be used to track per-IP, |
| per-URL counters. |
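| |
| Illustrative sketch (the table parameters are placeholders) : |
| # per-IP, per-URL request rate tracking |
| stick-table type binary len 20 size 1m expire 10m store http_req_rate(10s) |
| http-request track-sc0 base32+src |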
| |
| baseq : string |
| This returns the concatenation of the first Host header and the path part of |
| the request with the query-string, which starts at the first slash. Using this |
| instead of "base" allows one to properly identify the target resource, for |
| statistics or caching use cases. See also "path", "pathq" and "base". |
| |
| capture.req.hdr(<idx>) : string |
| This extracts the content of the header captured by the "capture request |
| header"; <idx> is the position of the capture keyword in the configuration. |
| The first entry is an index of 0. See also: "capture request header". |
| |
| capture.req.method : string |
| This extracts the METHOD of an HTTP request. Unlike "method", it can be used |
| in both the request and the response because it's allocated. |
| |
| capture.req.uri : string |
| This extracts the request's URI, which starts at the first slash and ends |
| before the first space in the request (without the host part). Unlike "path" |
| and "url", it can be used in both request and response because it's |
| allocated. |
| |
| capture.req.ver : string |
| This extracts the request's HTTP version and returns either "HTTP/1.0" or |
| "HTTP/1.1". Unlike "req.ver", it can be used in requests, responses and |
| logs because it relies on a persistent flag. |
| |
| capture.res.hdr(<idx>) : string |
| This extracts the content of the header captured by the "capture response |
| header"; <idx> is the position of the capture keyword in the configuration. |
| The first entry is an index of 0. |
| See also: "capture response header" |
| |
| capture.res.ver : string |
| This extracts the response's HTTP version and returns either "HTTP/1.0" or |
| "HTTP/1.1". Unlike "res.ver", it can be used in logs because it relies on a |
| persistent flag. |
| |
| req.body : binary |
| This returns the HTTP request's available body as a block of data. It is |
| recommended to use "option http-buffer-request" to be sure to wait, as much |
| as possible, for the request's body. |
| |
| req.body_param([<name>[,i]]) : string |
| This fetch assumes that the body of the POST request is url-encoded. The user |
| can check if the "content-type" contains the value |
| "application/x-www-form-urlencoded". This extracts the first occurrence of the |
| parameter <name> in the body, which ends before '&'. The parameter name is |
| case-sensitive, unless "i" is added as a second argument. If no name is |
| given, any parameter will match, and the first one will be returned. The |
| result is a string corresponding to the value of the parameter <name> as |
| presented in the request body (no URL decoding is performed). Note that the |
| ACL version of this fetch iterates over multiple parameters and will |
| iteratively report all parameters values if no name is given. |
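| |
| Illustrative sketch (the backend and parameter names are placeholders) : |
| # requires the body to be waited for and buffered |
| option http-buffer-request |
| use_backend bk_report if { req.body_param(action) report } |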
| |
| req.body_len : integer |
| This returns the length of the HTTP request's available body in bytes. It may |
| be lower than the advertised length if the body is larger than the buffer. It |
| is recommended to use "option http-buffer-request" to be sure to wait, as |
| much as possible, for the request's body. |
| |
| req.body_size : integer |
| This returns the advertised length of the HTTP request's body in bytes. It |
| will represent the advertised Content-Length header, or the size of the |
| available data in case of chunked encoding. |
| |
| req.cook([<name>]) : string |
| cook([<name>]) : string (deprecated) |
| This extracts the last occurrence of the cookie name <name> on a "Cookie" |
| header line from the request, and returns its value as string. If no name is |
| specified, the first cookie value is returned. When used with ACLs, all |
| matching cookies are evaluated. Spaces around the name and the value are |
| ignored as requested by the Cookie header specification (RFC6265). The cookie |
| name is case-sensitive. Empty cookies are valid, so an empty cookie may very |
| well return an empty value if it is present. Use the "found" match to detect |
| presence. Use the res.cook() variant for response cookies sent by the server. |
| |
| ACL derivatives : |
| req.cook([<name>]) : exact string match |
| req.cook_beg([<name>]) : prefix match |
| req.cook_dir([<name>]) : subdir match |
| req.cook_dom([<name>]) : domain match |
| req.cook_end([<name>]) : suffix match |
| req.cook_len([<name>]) : length match |
| req.cook_reg([<name>]) : regex match |
| req.cook_sub([<name>]) : substring match |
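| |
| Illustrative sketch (cookie and backend names are placeholders) : |
| # send clients without a session cookie to the login farm |
| use_backend bk_login unless { req.cook(SESSIONID) -m found } |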
| |
| req.cook_cnt([<name>]) : integer |
| cook_cnt([<name>]) : integer (deprecated) |
| Returns an integer value representing the number of occurrences of the cookie |
| <name> in the request, or all cookies if <name> is not specified. |
| |
| req.cook_val([<name>]) : integer |
| cook_val([<name>]) : integer (deprecated) |
| This extracts the last occurrence of the cookie name <name> on a "Cookie" |
| header line from the request, and converts its value to an integer which is |
| returned. If no name is specified, the first cookie value is returned. When |
| used in ACLs, all matching names are iterated over until a value matches. |
| |
| cookie([<name>]) : string (deprecated) |
| This extracts the last occurrence of the cookie name <name> on a "Cookie" |
| header line from the request, or a "Set-Cookie" header from the response, and |
| returns its value as a string. A typical use is to make multiple clients |
| sharing the same profile use the same server. This can be similar to what |
| "appsession" did with the "request-learn" statement, but with support for |
| multi-peer synchronization and state keeping across restarts. If no name is |
| specified, the first cookie value is returned. This fetch should not be used |
| anymore; use req.cook() or res.cook() instead, as this one ambiguously |
| derives the direction from the context where it is used. |
| |
| hdr([<name>[,<occ>]]) : string |
| This is equivalent to req.hdr() when used on requests, and to res.hdr() when |
| used on responses. Please refer to these respective fetches for more details. |
| In case of doubt about the fetch direction, please use the explicit ones. |
| Note that contrary to the hdr() sample fetch method, the hdr_* ACL keywords |
| unambiguously apply to the request headers. |
| |
| req.fhdr(<name>[,<occ>]) : string |
| This returns the full value of the last occurrence of header <name> in an |
| HTTP request. It differs from req.hdr() in that any commas present in the |
| value are returned and are not used as delimiters. This is sometimes useful |
| with headers such as User-Agent. |
| |
| When used from an ACL, all occurrences are iterated over until a match is |
| found. |
| |
| Optionally, a specific occurrence might be specified as a position number. |
| Positive values indicate a position from the first occurrence, with 1 being |
| the first one. Negative values indicate positions relative to the last one, |
| with -1 being the last one. |
| |
| req.fhdr_cnt([<name>]) : integer |
| Returns an integer value representing the number of occurrences of request |
| header field name <name>, or the total number of header fields if <name> is |
| not specified. Like req.fhdr() it differs from req.hdr_cnt() by not splitting |
| headers at commas. |
| |
| req.hdr([<name>[,<occ>]]) : string |
| This returns the last comma-separated value of the header <name> in an HTTP |
| request. The fetch considers any comma as a delimiter for distinct values. |
| This is useful if you need to process headers that are defined to be a list |
| of values, such as Accept, or X-Forwarded-For. If full-line headers are |
| desired instead, use req.fhdr(). Please carefully check RFC 7231 to know how |
| certain headers are supposed to be parsed. Also, some of them are case |
| insensitive (e.g. Connection). |
| |
| When used from an ACL, all occurrences are iterated over until a match is |
| found. |
| |
| Optionally, a specific occurrence might be specified as a position number. |
| Positive values indicate a position from the first occurrence, with 1 being |
| the first one. Negative values indicate positions relative to the last one, |
| with -1 being the last one. |
| |
| A typical use is with the X-Forwarded-For header once converted to IP, |
| associated with an IP stick-table. |
| |
| ACL derivatives : |
| hdr([<name>[,<occ>]]) : exact string match |
| hdr_beg([<name>[,<occ>]]) : prefix match |
| hdr_dir([<name>[,<occ>]]) : subdir match |
| hdr_dom([<name>[,<occ>]]) : domain match |
| hdr_end([<name>[,<occ>]]) : suffix match |
| hdr_len([<name>[,<occ>]]) : length match |
| hdr_reg([<name>[,<occ>]]) : regex match |
| hdr_sub([<name>[,<occ>]]) : substring match |
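| |
| Illustrative sketch (host and backend names are placeholders) : |
| # route by the domain part of the Host header |
| acl host_static hdr_dom(host) -i static.example.com |
| use_backend bk_static if host_static |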
| |
| req.hdr_cnt([<name>]) : integer |
| hdr_cnt([<header>]) : integer (deprecated) |
| Returns an integer value representing the number of occurrences of request |
| header field name <name>, or the total number of header field values if |
| <name> is not specified. Like req.hdr() it counts each comma separated |
| part of the header's value. If counting of full-line headers is desired, |
| then req.fhdr_cnt() should be used instead. |
| |
| With ACLs, it can be used to detect presence, absence or abuse of a specific |
| header, as well as to block request smuggling attacks by rejecting requests |
| which contain more than one of certain headers. |
| |
| Refer to req.hdr() for more information on header matching. |
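| |
| For instance, a minimal sketch of the anti-smuggling use mentioned above : |
| # reject requests advertising more than one Content-Length value |
| http-request deny if { req.hdr_cnt(content-length) gt 1 } |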
| |
| req.hdr_ip([<name>[,<occ>]]) : ip |
| hdr_ip([<name>[,<occ>]]) : ip (deprecated) |
| This extracts the last occurrence of header <name> in an HTTP request, |
| converts it to an IPv4 or IPv6 address and returns this address. When used |
| with ACLs, all occurrences are checked, and if <name> is omitted, every value |
| of every header is checked. The parser strictly adheres to the format |
| described in RFC7239, with the extension that IPv4 addresses may optionally |
| be followed by a colon (':') and a valid decimal port number (0 to 65535), |
| which will be silently dropped. All other forms will not match and will |
| cause the address to be ignored. |
| |
| The <occ> parameter is processed as with req.hdr(). |
| |
| A typical use is with the X-Forwarded-For and X-Client-IP headers. |
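| |
| Illustrative sketch (the table parameters are placeholders) : |
| # track per-client activity using the last X-Forwarded-For entry |
| stick-table type ip size 1m expire 10m store http_req_rate(10s) |
| http-request track-sc0 req.hdr_ip(X-Forwarded-For,-1) |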
| |
| req.hdr_val([<name>[,<occ>]]) : integer |
| hdr_val([<name>[,<occ>]]) : integer (deprecated) |
| This extracts the last occurrence of header <name> in an HTTP request, and |
| converts it to an integer value. When used with ACLs, all occurrences are |
| checked, and if <name> is omitted, every value of every header is checked. |
| |
| The <occ> parameter is processed as with req.hdr(). |
| |
| A typical use is with the X-Forwarded-For header. |
| |
| req.hdrs : string |
| Returns the current request headers as string including the last empty line |
| separating headers from the request body. The last empty line can be used to |
| detect a truncated header block. This sample fetch is useful for some SPOE |
| headers analyzers and for advanced logging. |
| |
| req.hdrs_bin : binary |
| Returns the current request headers contained in preparsed binary form. This |
| is useful for offloading some processing with SPOE. Each string is described |
| by a length followed by the number of bytes indicated in the length. The |
| length is represented using the variable integer encoding detailed in the |
| SPOE documentation. The end of the list is marked by a couple of empty header |
| names and values (length of 0 for both). |
| |
| *(<str:header-name><str:header-value>)<empty string><empty string> |
| |
| int: refer to the SPOE documentation for the encoding |
| str: <int:length><bytes> |
| |
| http_auth(<userlist>) : boolean |
| Returns a boolean indicating whether the authentication data received from |
| the client match a username & password stored in the specified userlist. This |
| fetch function is not really useful outside of ACLs. Currently only http |
| basic auth is supported. |
| |
| http_auth_bearer([<header>]) : string |
| Returns the client-provided token found in the authorization data when the |
| Bearer scheme is used (to send JSON Web Tokens for instance). No check is |
| performed on the data sent by the client. |
| If a specific <header> is supplied, it will parse this header instead of the |
| Authorization one. |
| |
| http_auth_group(<userlist>) : string |
| Returns a string corresponding to the user name found in the authentication |
| data received from the client if both the user name and password are valid |
| according to the specified userlist. The main purpose is to use it in ACLs |
| where it is then checked whether the user belongs to any group within a list. |
| This fetch function is not really useful outside of ACLs. Currently only http |
| basic auth is supported. |
| |
| ACL derivatives : |
| http_auth_group(<userlist>) : group ... |
| Returns true when the user extracted from the request and whose password is |
| valid according to the specified userlist belongs to at least one of the |
| groups. |
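| |
| Illustrative sketch ("admins" and "ops" are placeholder userlist and group |
| names) : |
| acl is_ops http_auth_group(admins) ops |
| http-request auth realm Restricted if !is_ops |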
| |
| http_auth_pass : string |
| Returns the user's password found in the authentication data received from |
| the client, as supplied in the Authorization header. No checks are |
| performed by this sample fetch. Only Basic authentication is supported. |
| |
| http_auth_type : string |
| Returns the authentication method found in the authentication data received |
| from the client, as supplied in the Authorization header. No checks are |
| performed by this sample fetch. Only Basic authentication is supported. |
| |
| http_auth_user : string |
| Returns the user name found in the authentication data received from the |
| client, as supplied in the Authorization header. No checks are performed by |
| this sample fetch. Only Basic authentication is supported. |
| |
| http_first_req : boolean |
| Returns true when the request being processed is the first one of the |
| connection. This can be used to add or remove headers that may be missing |
| from some requests when a request is not the first one, or to help grouping |
| requests in the logs. |
| |
| method : integer + string |
| Returns an integer value corresponding to the method in the HTTP request. For |
| example, "GET" equals 1 (check sources to establish the matching). Value 9 |
| means "other method" and may be converted to a string extracted from the |
| stream. This should not be used directly as a sample, this is only meant to |
| be used from ACLs, which transparently convert methods from patterns to these |
| integer + string values. Some predefined ACL already check for most common |
| methods. |
| |
| ACL derivatives : |
| method : case insensitive method match |
| |
| Example : |
| # only accept GET and HEAD requests |
| acl valid_method method GET HEAD |
| http-request deny if ! valid_method |
| |
| path : string |
| This extracts the request's URL path, which starts at the first slash and |
| ends before the question mark (without the host part). A typical use is with |
| prefetch-capable caches, and with portals which need to aggregate multiple |
| information from databases and keep them in caches. Note that with outgoing |
| caches, it would be wiser to use "url" instead. With ACLs, it's typically |
| used to match exact file names (e.g. "/login.php"), or directory parts using |
| the derivative forms. See also the "url" and "base" fetch methods. Please |
| note that any fragment reference in the URI ('#' after the path) is strictly |
| forbidden by the HTTP standard and will be rejected. However, if the frontend |
| receiving the request has "option accept-invalid-http-request", then this |
| fragment part will be accepted and will also appear in the path. |
| |
| ACL derivatives : |
| path : exact string match |
| path_beg : prefix match |
| path_dir : subdir match |
| path_dom : domain match |
| path_end : suffix match |
| path_len : length match |
| path_reg : regex match |
| path_sub : substring match |
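| |
| Illustrative sketch (the paths and backend name are placeholders) : |
| # serve asset directories from a dedicated backend |
| acl is_static path_beg -i /static/ /images/ /css/ |
| use_backend bk_static if is_static |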
| |
| pathq : string |
| This extracts the request's URL path with the query-string, which starts at |
| the first slash. This sample fetch is pretty handy to always retrieve a |
| relative URI, excluding the scheme and the authority part, if any. Indeed, |
| while it is the common representation for an HTTP/1.1 request target, in |
| HTTP/2, an absolute URI is often used. This sample fetch will return the same |
| result in both cases. Please note that any fragment reference in the URI ('#' |
| after the path) is strictly forbidden by the HTTP standard and will be |
| rejected. However, if the frontend receiving the request has "option |
| accept-invalid-http-request", then this fragment part will be accepted and |
| will also appear in the path. |
| |
| query : string |
| This extracts the request's query string, which starts after the first |
| question mark. If no question mark is present, this fetch returns nothing. If |
| a question mark is present but nothing follows, it returns an empty string. |
| This means it's possible to easily know whether a query string is present |
| using the "found" matching method. This fetch is the complement of "path" |
| which stops before the question mark. |
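| |
| Illustrative sketch (the backend name is a placeholder) : |
| # requests carrying a query string go to the dynamic farm |
| use_backend bk_dynamic if { query -m found } |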
| |
| req.hdr_names([<delim>]) : string |
| This builds a string made from the concatenation of all header names as they |
| appear in the request when the rule is evaluated. The default delimiter is |
| the comma (',') but it may be overridden as an optional argument <delim>. In |
| this case, only the first character of <delim> is considered. |
| |
| req.ver : string |
| req_ver : string (deprecated) |
| Returns the version string from the HTTP request, for example "1.1". This can |
| be useful for ACL. For logs use the "%HV" log variable. Some predefined ACL |
| already check for versions 1.0 and 1.1. |
| |
| Common values are "1.0", "1.1", "2.0" or "3.0". |
| |
| In the case of http/2 and http/3, the value is not extracted from the HTTP |
| version in the request line but is determined by the negotiated protocol |
| version. |
| |
| ACL derivatives : |
| req.ver : exact string match |
| |
| res.body : binary |
| This returns the HTTP response's available body as a block of data. Unlike |
| the request side, there is no directive to wait for the response's body. This |
| sample fetch is really useful (and usable) in the health-check context. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.body_len : integer |
| This returns the length of the HTTP response available body in bytes. Unlike |
| the request side, there is no directive to wait for the response's body. This |
| sample fetch is really useful (and usable) in the health-check context. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.body_size : integer |
| This returns the advertised length of the HTTP response body in bytes. It |
| will represent the advertised Content-Length header, or the size of the |
| available data in case of chunked encoding. Unlike the request side, there is |
| no directive to wait for the response body. This sample fetch is really |
| useful (and usable) in the health-check context. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.cache_hit : boolean |
| Returns the boolean "true" value if the response has been built out of an |
| HTTP cache entry, otherwise returns boolean "false". |
| |
| res.cache_name : string |
| Returns a string containing the name of the HTTP cache that was used to |
| build the HTTP response if res.cache_hit is true, otherwise returns an |
| empty string. |
| |
| res.comp : boolean |
| Returns the boolean "true" value if the response has been compressed by |
| HAProxy, otherwise returns boolean "false". This may be used to add |
| information in the logs. |
| |
| res.comp_algo : string |
| Returns a string containing the name of the algorithm used if the response |
| was compressed by HAProxy, for example : "deflate". This may be used to add |
| some information in the logs. |
| |
| res.cook([<name>]) : string |
| scook([<name>]) : string (deprecated) |
| This extracts the last occurrence of the cookie name <name> on a "Set-Cookie" |
| header line from the response, and returns its value as string. If no name is |
| specified, the first cookie value is returned. |
| |
| It may be used in tcp-check based expect rules. |
| |
| ACL derivatives : |
| res.scook([<name>]) : exact string match |
| |
| res.cook_cnt([<name>]) : integer |
| scook_cnt([<name>]) : integer (deprecated) |
| Returns an integer value representing the number of occurrences of the cookie |
| <name> in the response, or all cookies if <name> is not specified. This is |
| mostly useful when combined with ACLs to detect suspicious responses. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.cook_val([<name>]) : integer |
| scook_val([<name>]) : integer (deprecated) |
| This extracts the last occurrence of the cookie name <name> on a "Set-Cookie" |
| header line from the response, and converts its value to an integer which is |
| returned. If no name is specified, the first cookie value is returned. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.fhdr([<name>[,<occ>]]) : string |
| This fetch works like the req.fhdr() fetch with the difference that it acts |
| on the headers within an HTTP response. |
| |
| Like req.fhdr() the res.fhdr() fetch returns full values. If the header is |
| defined to be a list you should use res.hdr(). |
| |
| This fetch is sometimes useful with headers such as Date or Expires. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.fhdr_cnt([<name>]) : integer |
| This fetch works like the req.fhdr_cnt() fetch with the difference that it |
| acts on the headers within an HTTP response. |
| |
| Like req.fhdr_cnt() the res.fhdr_cnt() fetch acts on full values. If the |
| header is defined to be a list you should use res.hdr_cnt(). |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.hdr([<name>[,<occ>]]) : string |
| shdr([<name>[,<occ>]]) : string (deprecated) |
| This fetch works like the req.hdr() fetch with the difference that it acts |
| on the headers within an HTTP response. |
| |
| Like req.hdr() the res.hdr() fetch considers the comma to be a delimiter. If |
| this is not desired res.fhdr() should be used. |
| |
| It may be used in tcp-check based expect rules. |
| |
| ACL derivatives : |
| res.hdr([<name>[,<occ>]]) : exact string match |
| res.hdr_beg([<name>[,<occ>]]) : prefix match |
| res.hdr_dir([<name>[,<occ>]]) : subdir match |
| res.hdr_dom([<name>[,<occ>]]) : domain match |
| res.hdr_end([<name>[,<occ>]]) : suffix match |
| res.hdr_len([<name>[,<occ>]]) : length match |
| res.hdr_reg([<name>[,<occ>]]) : regex match |
| res.hdr_sub([<name>[,<occ>]]) : substring match |
| |
| res.hdr_cnt([<name>]) : integer |
| shdr_cnt([<name>]) : integer (deprecated) |
| This fetch works like the req.hdr_cnt() fetch with the difference that it |
| acts on the headers within an HTTP response. |
| |
| Like req.hdr_cnt() the res.hdr_cnt() fetch considers the comma to be a |
| delimiter. If this is not desired res.fhdr_cnt() should be used. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.hdr_ip([<name>[,<occ>]]) : ip |
| shdr_ip([<name>[,<occ>]]) : ip (deprecated) |
| This fetch works like the req.hdr_ip() fetch with the difference that it |
| acts on the headers within an HTTP response. |
| |
| This can be useful to learn some data into a stick table. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.hdr_names([<delim>]) : string |
| This builds a string made from the concatenation of all header names as they |
| appear in the response when the rule is evaluated. The default delimiter is |
| the comma (',') but it may be overridden as an optional argument <delim>. In |
| this case, only the first character of <delim> is considered. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.hdr_val([<name>[,<occ>]]) : integer |
| shdr_val([<name>[,<occ>]]) : integer (deprecated) |
| This fetch works like the req.hdr_val() fetch with the difference that it |
| acts on the headers within an HTTP response. |
| |
| This can be useful to learn some data into a stick table. |
| |
| It may be used in tcp-check based expect rules. |
| |
| res.hdrs : string |
| Returns the current response headers as string including the last empty line |
| separating headers from the response body. The last empty line can be used to |
| detect a truncated header block. This sample fetch is useful for some SPOE |
| headers analyzers and for advanced logging. |
| |
| It may also be used in tcp-check based expect rules. |
| |
| res.hdrs_bin : binary |
| Returns the current response headers contained in preparsed binary form. This |
| is useful for offloading some processing with SPOE. It may be used in |
| tcp-check based expect rules. Each string is described by a length followed |
| by the number of bytes indicated in the length. The length is represented |
| using the variable integer encoding detailed in the SPOE documentation. The |
| end of the list is marked by a couple of empty header names and values |
| (length of 0 for both). |
| |
| *(<str:header-name><str:header-value>)<empty string><empty string> |
| |
| int: refer to the SPOE documentation for the encoding |
| str: <int:length><bytes> |
| |
| res.ver : string |
| resp_ver : string (deprecated) |
| Returns the version string from the HTTP response, for example "1.1". This |
| can be useful for logs, but is mostly there for ACL. |
| |
| It may be used in tcp-check based expect rules. |
| |
| ACL derivatives : |
| resp.ver : exact string match |
| |
| set-cookie([<name>]) : string (deprecated) |
| This extracts the last occurrence of the cookie name <name> on a "Set-Cookie" |
| header line from the response and uses the corresponding value to match. This |
| can be comparable to what "appsession" did with default options, but with |
| support for multi-peer synchronization and state keeping across restarts. |
| |
| This fetch function is deprecated and has been superseded by the "res.cook" |
| fetch. This keyword will disappear soon. |
| |
| status : integer |
| Returns an integer containing the HTTP status code in the HTTP response, for |
| example, 302. It is mostly used within ACLs and integer ranges, for example, |
| to remove any Location header if the response is not a 3xx. |
| |
| It may be used in tcp-check based expect rules. |
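| |
| For instance, a minimal sketch of the Location header cleanup mentioned |
| above : |
| http-response del-header Location unless { status 300:399 } |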
| |
| unique-id : string |
| Returns the unique-id attached to the request. The directive |
| "unique-id-format" must be set. If it is not set, the unique-id sample fetch |
| fails. Note that the unique-id is usually used with HTTP requests; however, |
| this sample fetch can be used with other protocols. Obviously, if it is used |
| with protocols other than HTTP, the unique-id-format directive must not |
| contain HTTP parts. See: unique-id-format and unique-id-header |
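| |
| Illustrative sketch (the header name is an arbitrary example; it assumes |
| "unique-id-format" is set on the frontend) : |
| http-response set-header X-Unique-ID %[unique-id] |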
| |
| url : string |
| This extracts the request's URL as presented in the request. A typical use is |
| with prefetch-capable caches, and with portals which need to aggregate |
| multiple information from databases and keep them in caches. With ACLs, using |
| "path" is preferred over using "url", because clients may send a full URL as |
| is normally done with proxies. The only real use is to match "*" which does |
| not match in "path", and for which there is already a predefined ACL. See |
| also "path" and "base". Please note that any fragment reference in the URI |
| ('#' after the path) is strictly forbidden by the HTTP standard and will be |
| rejected. However, if the frontend receiving the request has "option |
| accept-invalid-http-request", then this fragment part will be accepted and |
| will also appear in the url. |
| |
| ACL derivatives : |
| url : exact string match |
| url_beg : prefix match |
| url_dir : subdir match |
| url_dom : domain match |
| url_end : suffix match |
| url_len : length match |
| url_reg : regex match |
| url_sub : substring match |
| |
| url_ip : ip |
| This extracts the IP address from the request's URL when the host part is |
| presented as an IP address. Its use is very limited. For instance, a |
| monitoring system might use this field as an alternative for the source IP in |
| order to test what path a given source address would follow, or to force an |
| entry in a table for a given source address. It may be used in combination |
| with 'http-request set-dst' to emulate the older 'option http_proxy'. |
| |
| url_port : integer |
| This extracts the port part from the request's URL. Note that if the port is |
| not specified in the request, port 80 is assumed. |
| |
| urlp([<name>[,<delim>[,i]]]) : string |
| url_param([<name>[,<delim>[,i]]]) : string |
| This extracts the first occurrence of the parameter <name> in the query |
| string, which begins after either '?' or <delim>, and which ends before '&', |
| ';' or <delim>. The parameter name is case-sensitive, unless "i" is added as a |
| third argument. If no name is given, any parameter will match, and the first |
| one will be returned. The result is a string corresponding to the value of the |
| parameter <name> as presented in the request (no URL decoding is performed). |
| This can be used for session stickiness based on a client ID, to extract an |
| application cookie passed as a URL parameter, or in ACLs to apply some checks. |
| Note that the ACL version of this fetch iterates over multiple parameters and |
| will iteratively report all parameter values if no name is given. |
| |
| ACL derivatives : |
| urlp(<name>[,<delim>]) : exact string match |
| urlp_beg(<name>[,<delim>]) : prefix match |
| urlp_dir(<name>[,<delim>]) : subdir match |
| urlp_dom(<name>[,<delim>]) : domain match |
| urlp_end(<name>[,<delim>]) : suffix match |
| urlp_len(<name>[,<delim>]) : length match |
| urlp_reg(<name>[,<delim>]) : regex match |
| urlp_sub(<name>[,<delim>]) : substring match |
| |
| Example : |
| # match http://example.com/foo?PHPSESSIONID=some_id |
| stick on urlp(PHPSESSIONID) |
| # match http://example.com/foo;JSESSIONID=some_id |
| stick on urlp(JSESSIONID,;) |
| |
| urlp_val([<name>[,<delim>[,i]]]) : integer |
| See "urlp" above. This one extracts the URL parameter <name> in the request |
| and converts it to an integer value. This can be used for session stickiness |
| based on a user ID for example, or with ACLs to match a page number or price. |
| |
| url32 : integer |
| This returns a 32-bit hash of the value obtained by concatenating the first |
| Host header and the whole URL including parameters (not only the path part of |
| the request, as in the "base32" fetch above). This is useful to track per-URL |
| activity. A shorter hash is stored, saving a lot of memory. The output type |
| is an unsigned integer. |
| |
| url32+src : binary |
| This returns the concatenation of the "url32" fetch and the "src" fetch. The |
| resulting type is of type binary, with a size of 8 or 20 bytes depending on |
| the source address family. This can be used to track per-IP, per-URL counters. |
| |
| |
| 7.3.7. Fetching samples for developers |
| --------------------------------------- |
| |
| This set of sample fetch methods is reserved to developers and must never be |
| used on a production environment, except on developer demand, for debugging |
| purposes. Moreover, no special care will be taken on backwards compatibility. |
| There is no guarantee that the following sample fetches will not change, be |
| renamed or simply removed. So be really careful if you use one of them. To |
| avoid any ambiguity, these sample fetches are placed in the dedicated scope |
| "internal", for instance "internal.strm.is_htx". |
| |
| internal.htx.data : integer |
| Returns the size in bytes used by data in the HTX message associated to a |
| channel. The channel is chosen depending on the sample direction. |
| |
| internal.htx.free : integer |
| Returns the free space (size - used) in bytes in the HTX message associated |
| to a channel. The channel is chosen depending on the sample direction. |
| |
| internal.htx.free_data : integer |
| Returns the free space for the data in bytes in the HTX message associated to |
| a channel. The channel is chosen depending on the sample direction. |
| |
| internal.htx.has_eom : boolean |
| Returns true if the HTX message associated to a channel contains the |
| end-of-message flag (EOM). Otherwise, it returns false. The channel is chosen |
| depending on the sample direction. |
| |
| internal.htx.nbblks : integer |
| Returns the number of blocks present in the HTX message associated to a |
| channel. The channel is chosen depending on the sample direction. |
| |
| internal.htx.size : integer |
| Returns the total size in bytes of the HTX message associated to a |
| channel. The channel is chosen depending on the sample direction. |
| |
| internal.htx.used : integer |
| Returns the total size used in bytes (data + metadata) in the HTX message |
| associated to a channel. The channel is chosen depending on the sample |
| direction. |
| |
| internal.htx_blk.size(<idx>) : integer |
| Returns the size of the block at the position <idx> in the HTX message |
| associated to a channel or 0 if it does not exist. The channel is chosen |
| depending on the sample direction. <idx> may be any positive integer or one |
| of the special values : |
| * head : The oldest inserted block |
| * tail : The newest inserted block |
| * first : The first block where to (re)start the analysis |
| |
| internal.htx_blk.type(<idx>) : string |
| Returns the type of the block at the position <idx> in the HTX message |
| associated to a channel or "HTX_BLK_UNUSED" if it does not exist. The channel |
| is chosen depending on the sample direction. <idx> may be any positive |
| integer or one of the special values : |
| * head : The oldest inserted block |
| * tail : The newest inserted block |
| * first : The first block where to (re)start the analysis |
| |
| internal.htx_blk.data(<idx>) : binary |
| Returns the value of the DATA block at the position <idx> in the HTX message |
| associated to a channel or an empty string if it does not exist or if it is |
| not a DATA block. The channel is chosen depending on the sample direction. |
| <idx> may be any positive integer or one of the special values : |
| |
| * head : The oldest inserted block |
| * tail : The newest inserted block |
| * first : The first block where to (re)start the analysis |
| |
| internal.htx_blk.hdrname(<idx>) : string |
| Returns the header name of the HEADER block at the position <idx> in the HTX |
| message associated to a channel or an empty string if it does not exist or if |
| it is not a HEADER block. The channel is chosen depending on the sample |
| direction. <idx> may be any positive integer or one of the special values : |
| |
| * head : The oldest inserted block |
| * tail : The newest inserted block |
| * first : The first block where to (re)start the analysis |
| |
| internal.htx_blk.hdrval(<idx>) : string |
| Returns the header value of the HEADER block at the position <idx> in the HTX |
| message associated to a channel or an empty string if it does not exist or if |
| it is not a HEADER block. The channel is chosen depending on the sample |
| direction. <idx> may be any positive integer or one of the special values : |
| |
| * head : The oldest inserted block |
| * tail : The newest inserted block |
| * first : The first block where to (re)start the analysis |
| |
| internal.htx_blk.start_line(<idx>) : string |
| Returns the value of the REQ_SL or RES_SL block at the position <idx> in the |
| HTX message associated to a channel or an empty string if it does not exist |
| or if it is not a SL block. The channel is chosen depending on the sample |
| direction. <idx> may be any positive integer or one of the special values : |
| |
| * head : The oldest inserted block |
| * tail : The newest inserted block |
| * first : The first block where to (re)start the analysis |
| |
| internal.strm.is_htx : boolean |
| Returns true if the current stream is an HTX stream. It means the data in the |
| channels buffers are stored using the internal HTX representation. Otherwise, |
| it returns false. |
| |
| |
| 7.4. Pre-defined ACLs |
| --------------------- |
| |
| Some predefined ACLs are hard-coded so that they do not have to be declared in |
| every frontend which needs them. They all have their names in upper case in |
| order to avoid confusion. Their equivalence is provided below. |
| |
| ACL name Equivalent to Usage |
| ---------------+----------------------------------+------------------------------------------------------ |
| FALSE always_false never match |
| HTTP req.proto_http match if request protocol is valid HTTP |
| HTTP_1.0 req.ver 1.0 match if HTTP request version is 1.0 |
| HTTP_1.1 req.ver 1.1 match if HTTP request version is 1.1 |
| HTTP_2.0 req.ver 2.0 match if HTTP request version is 2.0 |
| HTTP_CONTENT req.hdr_val(content-length) gt 0 match an existing content-length in the HTTP request |
| HTTP_URL_ABS url_reg ^[^/:]*:// match absolute URL with scheme |
| HTTP_URL_SLASH url_beg / match URL beginning with "/" |
| HTTP_URL_STAR url * match URL equal to "*" |
| LOCALHOST src 127.0.0.1/8 ::1 match connection from local host |
| METH_CONNECT method CONNECT match HTTP CONNECT method |
| METH_DELETE method DELETE match HTTP DELETE method |
| METH_GET method GET HEAD match HTTP GET or HEAD method |
| METH_HEAD method HEAD match HTTP HEAD method |
| METH_OPTIONS method OPTIONS match HTTP OPTIONS method |
| METH_POST method POST match HTTP POST method |
| METH_PUT method PUT match HTTP PUT method |
| METH_TRACE method TRACE match HTTP TRACE method |
| RDP_COOKIE req.rdp_cookie_cnt gt 0 match presence of an RDP cookie in the request buffer |
| REQ_CONTENT req.len gt 0 match data in the request buffer |
| TRUE always_true always match |
| WAIT_END wait_end wait for end of content analysis |
| ---------------+----------------------------------+------------------------------------------------------ |
| |
| |
| 8. Logging |
| ---------- |
| |
| One of HAProxy's strong points certainly lies in its precise logs. It probably |
| provides the finest level of information available for such a product, which is |
| very important for troubleshooting complex environments. Standard information |
| provided in logs includes client ports, TCP/HTTP state timers, precise session |
| state at termination and precise termination cause, information about decisions |
| to direct traffic to a server, and of course the ability to capture arbitrary |
| headers. |
| |
| In order to improve administrators' reactivity, it offers great transparency |
| about encountered problems, both internal and external, and it is possible to |
| send logs to different sources at the same time with different level filters : |
| |
| - global process-level logs (system errors, start/stop, etc.) |
| - per-instance system and internal errors (lack of resource, bugs, ...) |
| - per-instance external troubles (servers up/down, max connections) |
| - per-instance activity (client connections), either at the establishment or |
| at the termination. |
| - per-request control of log-level, e.g. |
| http-request set-log-level silent if sensitive_request |
| |
| The ability to distribute different levels of logs to different log servers |
| allows several production teams to interact and to fix their problems as soon |
| as possible. For example, the system team might monitor system-wide errors, |
| while the application team might be monitoring the up/down for their servers in |
| real time, and the security team might analyze the activity logs with one hour |
| delay. |
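| |
| As an illustration only, such a split could be configured as follows, the |
| addresses being arbitrary examples : |
| |
| global |
| # everything up to "info" goes to the local syslog daemon |
| log 127.0.0.1:514 local0 info |
| # only errors and above are duplicated to a remote collector |
| log 192.168.0.10:514 local0 err |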
| |
| |
| 8.1. Log levels |
| --------------- |
| |
| TCP and HTTP connections can be logged with information such as the date, time, |
| source IP address, destination address, connection duration, response times, |
| HTTP request, HTTP return code, number of bytes transmitted, conditions |
| in which the session ended, and even exchanged cookie values, for example to |
| track a particular user's problems. All messages may be sent to up to two |
| syslog servers. Check the "log" keyword in section 4.2 for more information |
| about log facilities. |
| |
| |
| 8.2. Log formats |
| ---------------- |
| |
| HAProxy supports 5 log formats. Several fields are common between these formats |
| and will be detailed in the following sections. A few of them may vary |
| slightly with the configuration, due to indicators specific to certain |
| options. The supported formats are as follows : |
| |
| - the default format, which is very basic and very rarely used. It only |
| provides very basic information about the incoming connection at the moment |
| it is accepted : source IP:port, destination IP:port, and frontend-name. |
| This mode will eventually disappear so it will not be described in great |
| detail. |
| |
| - the TCP format, which is more advanced. This format is enabled when "option |
| tcplog" is set on the frontend. HAProxy will then usually wait for the |
| connection to terminate before logging. This format provides much richer |
| information, such as timers, connection counts, queue size, etc... This |
| format is recommended for pure TCP proxies. |
| |
| - the HTTP format, which is the most advanced for HTTP proxying. This format |
| is enabled when "option httplog" is set on the frontend. It provides the |
| same information as the TCP format with some HTTP-specific fields such as |
| the request, the status code, and captures of headers and cookies. This |
| format is recommended for HTTP proxies. |
| |
| - the CLF HTTP format, which is equivalent to the HTTP format, but with the |
| fields arranged in the same order as the CLF format. In this mode, all |
| timers, captures, flags, etc... appear one per field after the end of the |
| common fields, in the same order they appear in the standard HTTP format. |
| |
| - the custom log format, which allows you to build your own log lines. |
| |
| The next sections will go deeper into details for each of these formats. Format |
| specification will be performed on a "field" basis. Unless stated otherwise, a |
| field is a portion of text delimited by any number of spaces. Since syslog |
| servers are likely to insert fields at the beginning of a line, it is |
| always assumed that the first field is the one containing the process name and |
| identifier. |
| |
| Note : Since log lines may be quite long, the log examples in sections below |
| might be broken into multiple lines. The example log lines will be |
| prefixed with 3 closing angle brackets ('>>>') and each time a log is |
| broken into multiple lines, each non-final line will end with a |
| backslash ('\') and the next line will start indented by two characters. |
| |
| |
| 8.2.1. Default log format |
| ------------------------- |
| |
| This format is used when no specific option is set. The log is emitted as soon |
| as the connection is accepted. One should note that this currently is the only |
| format which logs the request's destination IP and ports. |
| |
| Example : |
| listen www |
| mode http |
| log global |
| server srv1 127.0.0.1:8000 |
| |
| >>> Feb 6 12:12:09 localhost \ |
| haproxy[14385]: Connect from 10.0.1.2:33312 to 10.0.3.31:8012 \ |
| (www/HTTP) |
| |
| Field Format Extract from the example above |
| 1 process_name '[' pid ']:' haproxy[14385]: |
| 2 'Connect from' Connect from |
| 3 source_ip ':' source_port 10.0.1.2:33312 |
| 4 'to' to |
| 5 destination_ip ':' destination_port 10.0.3.31:8012 |
| 6 '(' frontend_name '/' mode ')' (www/HTTP) |
| |
| Detailed fields description : |
| - "source_ip" is the IP address of the client which initiated the connection. |
| - "source_port" is the TCP port of the client which initiated the connection. |
| - "destination_ip" is the IP address the client connected to. |
| - "destination_port" is the TCP port the client connected to. |
| - "frontend_name" is the name of the frontend (or listener) which received |
| and processed the connection. |
| - "mode is the mode the frontend is operating (TCP or HTTP). |
| |
| In case of a UNIX socket, the source and destination addresses are marked as |
| "unix:" and the ports reflect the internal ID of the socket which accepted the |
| connection (the same ID as reported in the stats). |
| |
| It is advised not to use this deprecated format for newer installations as it |
| will eventually disappear. |
| |
| |
| 8.2.2. TCP log format |
| --------------------- |
| |
| The TCP format is used when "option tcplog" is specified in the frontend, and |
| is the recommended format for pure TCP proxies. It provides a lot of precious |
| information for troubleshooting. Since this format includes timers and byte |
| counts, the log is normally emitted at the end of the session. It can be |
| emitted earlier if "option logasap" is specified, which makes sense in most |
| environments with long sessions such as remote terminals. Sessions which match |
| the "monitor" rules are never logged. It is also possible not to emit logs for |
| sessions for which no data were exchanged between the client and the server, by |
| specifying "option dontlognull" in the frontend. Successful connections will |
| not be logged if "option dontlog-normal" is specified in the frontend. |
| |
| The TCP log format is internally declared as a custom log format based on the |
| exact following string, which may also be used as a basis to extend the format |
| if required. Additionally the HAPROXY_TCP_LOG_FMT variable can be used instead. |
| Refer to section 8.2.6 "Custom log format" to see how to use this: |
| |
| # strict equivalent of "option tcplog" |
| log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts \ |
| %ac/%fc/%bc/%sc/%rc %sq/%bq" |
| # or using the HAPROXY_TCP_LOG_FMT variable |
| log-format "${HAPROXY_TCP_LOG_FMT}" |
| |
| A few fields may slightly vary depending on some configuration options, those |
| are marked with a star ('*') after the field name below. |
| |
| Example : |
| frontend fnt |
| mode tcp |
| option tcplog |
| log global |
| default_backend bck |
| |
| backend bck |
| server srv1 127.0.0.1:8000 |
| |
| >>> Feb 6 12:12:56 localhost \ |
| haproxy[14387]: 10.0.1.2:33313 [06/Feb/2009:12:12:51.443] fnt \ |
| bck/srv1 0/0/5007 212 -- 0/0/0/0/3 0/0 |
| |
| Field Format Extract from the example above |
| 1 process_name '[' pid ']:' haproxy[14387]: |
| 2 client_ip ':' client_port 10.0.1.2:33313 |
| 3 '[' accept_date ']' [06/Feb/2009:12:12:51.443] |
| 4 frontend_name fnt |
| 5 backend_name '/' server_name bck/srv1 |
| 6 Tw '/' Tc '/' Tt* 0/0/5007 |
| 7 bytes_read* 212 |
| 8 termination_state -- |
| 9 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 0/0/0/0/3 |
| 10 srv_queue '/' backend_queue 0/0 |
| |
| Detailed fields description : |
| - "client_ip" is the IP address of the client which initiated the TCP |
| connection to HAProxy. If the connection was accepted on a UNIX socket |
| instead, the IP address would be replaced with the word "unix". Note that |
| when the connection is accepted on a socket configured with "accept-proxy" |
| and the PROXY protocol is correctly used, or with a "accept-netscaler-cip" |
| and the NetScaler Client IP insertion protocol is correctly used, then the |
| logs will reflect the forwarded connection's information. |
| |
| - "client_port" is the TCP port of the client which initiated the connection. |
| If the connection was accepted on a UNIX socket instead, the port would be |
| replaced with the ID of the accepting socket, which is also reported in the |
| stats interface. |
| |
| - "accept_date" is the exact date when the connection was received by HAProxy |
| (which might be very slightly different from the date observed on the |
| network if there was some queuing in the system's backlog). This is usually |
| the same date which may appear in any upstream firewall's log. When used in |
| HTTP mode, the accept_date field will be reset to the first moment the |
| connection is ready to receive a new request (end of previous response for |
| HTTP/1, immediately after previous request for HTTP/2). |
| |
| - "frontend_name" is the name of the frontend (or listener) which received |
| and processed the connection. |
| |
| - "backend_name" is the name of the backend (or listener) which was selected |
| to manage the connection to the server. This will be the same as the |
| frontend if no switching rule has been applied, which is common for TCP |
| applications. |
| |
| - "server_name" is the name of the last server to which the connection was |
| sent, which might differ from the first one if there were connection errors |
| and a redispatch occurred. Note that this server belongs to the backend |
| which processed the request. If the connection was aborted before reaching |
| a server, "<NOSRV>" is indicated instead of a server name. |
| |
| - "Tw" is the total time in milliseconds spent waiting in the various queues. |
| It can be "-1" if the connection was aborted before reaching the queue. |
| See "Timers" below for more details. |
| |
| - "Tc" is the total time in milliseconds spent waiting for the connection to |
| establish to the final server, including retries. It can be "-1" if the |
| connection was aborted before a connection could be established. See |
| "Timers" below for more details. |
| |
| - "Tt" is the total time in milliseconds elapsed between the accept and the |
| last close. It covers all possible processing. There is one exception, if |
| "option logasap" was specified, then the time counting stops at the moment |
| the log is emitted. In this case, a '+' sign is prepended before the value, |
| indicating that the final one will be larger. See "Timers" below for more |
| details. |
| |
| - "bytes_read" is the total number of bytes transmitted from the server to |
| the client when the log is emitted. If "option logasap" is specified, this |
| value will be prefixed with a '+' sign indicating that the final one |
| may be larger. Please note that this value is a 64-bit counter, so log |
| analysis tools must be able to handle it without overflowing. |
| |
| - "termination_state" is the condition the session was in when the session |
| ended. This indicates the session state, which side caused the end of |
| session to happen, and for what reason (timeout, error, ...). The normal |
| flags should be "--", indicating the session was closed by either end with |
| no data remaining in buffers. See below "Session state at disconnection" |
| for more details. |
| |
| - "actconn" is the total number of concurrent connections on the process when |
| the session was logged. It is useful to detect when some per-process system |
| limits have been reached. For instance, if actconn is close to 512 when |
| multiple connection errors occur, chances are high that the system limits |
| the process to use a maximum of 1024 file descriptors and that all of them |
| are used. See section 3 "Global parameters" to find how to tune the system. |
| |
| - "feconn" is the total number of concurrent connections on the frontend when |
| the session was logged. It is useful to estimate the amount of resource |
| required to sustain high loads, and to detect when the frontend's "maxconn" |
| has been reached. Most often when this value increases by huge jumps, it is |
| because there is congestion on the backend servers, but sometimes it can be |
| caused by a denial of service attack. |
| |
| - "beconn" is the total number of concurrent connections handled by the |
| backend when the session was logged. It includes the total number of |
| concurrent connections active on servers as well as the number of |
| connections pending in queues. It is useful to estimate the amount of |
| additional servers needed to support high loads for a given application. |
| Most often when this value increases by huge jumps, it is because there is |
| congestion on the backend servers, but sometimes it can be caused by a |
| denial of service attack. |
| |
| - "srv_conn" is the total number of concurrent connections still active on |
| the server when the session was logged. It can never exceed the server's |
| configured "maxconn" parameter. If this value is very often close or equal |
| to the server's "maxconn", it means that traffic regulation is involved a |
| lot, meaning that either the server's maxconn value is too low, or that |
| there aren't enough servers to process the load with an optimal response |
| time. When only one of the server's "srv_conn" is high, it usually means |
| that this server has some trouble causing the connections to take longer to |
| be processed than on other servers. |
| |
| - "retries" is the number of connection retries experienced by this session |
| when trying to connect to the server. It must normally be zero, unless a |
| server is being stopped at the same moment the connection was attempted. |
| Frequent retries generally indicate either a network problem between |
| HAProxy and the server, or a misconfigured system backlog on the server |
| preventing new connections from being queued. This field may optionally be |
| prefixed with a '+' sign, indicating that the session has experienced a |
| redispatch after the maximal retry count has been reached on the initial |
| server. In this case, the server name appearing in the log is the one the |
| connection was redispatched to, and not the first one, though both may |
| sometimes be the same in case of hashing for instance. So as a general rule |
| of thumb, when a '+' is present in front of the retry count, this count |
| should not be attributed to the logged server. |
| |
| - "srv_queue" is the total number of requests which were processed before |
| this one in the server queue. It is zero when the request has not gone |
| through the server queue. It makes it possible to estimate the approximate |
| server's response time by dividing the time spent in queue by the number of |
| requests in the queue. It is worth noting that if a session experiences a |
| redispatch and passes through two server queues, their positions will be |
| cumulative. A request should not pass through both the server queue and the |
| backend queue unless a redispatch occurs. |
| |
| - "backend_queue" is the total number of requests which were processed before |
| this one in the backend's global queue. It is zero when the request has not |
| gone through the global queue. It makes it possible to estimate the average |
| queue length, which easily translates into a number of missing servers when |
| divided by a server's "maxconn" parameter. It is worth noting that if a |
| session experiences a redispatch, it may pass twice in the backend's queue, |
| and then both positions will be cumulative. A request should not pass |
| through both the server queue and the backend queue unless a redispatch |
| occurs. |
| |
| |
| 8.2.3. HTTP log format |
| ---------------------- |
| |
| The HTTP format is the most complete and the best suited for HTTP proxies. It |
| is enabled when "option httplog" is specified in the frontend. It provides |
| the same level of information as the TCP format with additional features which |
| are specific to the HTTP protocol. Just like the TCP format, the log is usually |
| emitted at the end of the session, unless "option logasap" is specified, which |
| generally only makes sense for download sites. A session which matches the |
| "monitor" rules will never logged. It is also possible not to log sessions for |
| which no data were sent by the client by specifying "option dontlognull" in the |
| frontend. Successful connections will not be logged if "option dontlog-normal" |
| is specified in the frontend. |
| |
| The HTTP log format is internally declared as a custom log format based on the |
| exact following string, which may also be used as a basis to extend the format |
| if required. Additionally the HAPROXY_HTTP_LOG_FMT variable can be used |
| instead. Refer to section 8.2.6 "Custom log format" to see how to use this: |
| |
| # strict equivalent of "option httplog" |
| log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \ |
| %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r" |
| |
| And the CLF log format is internally declared as a custom log format based on |
| this exact string: |
| |
| # strict equivalent of "option httplog clf" |
| log-format "%{+Q}o %{-Q}ci - - [%trg] %r %ST %B \"\" \"\" %cp \ |
| %ms %ft %b %s %TR %Tw %Tc %Tr %Ta %tsc %ac %fc \ |
| %bc %sc %rc %sq %bq %CC %CS %hrl %hsl" |
| # or using the HAPROXY_HTTP_LOG_FMT variable |
| log-format "${HAPROXY_HTTP_LOG_FMT}" |
| |
| Most fields are shared with the TCP log, some being different. A few fields may |
| slightly vary depending on some configuration options. Those ones are marked |
| with a star ('*') after the field name below. |
| |
| Example : |
| frontend http-in |
| mode http |
| option httplog |
| log global |
| default_backend static |
| |
| backend static |
| server srv1 127.0.0.1:8000 |
| |
| >>> Feb 6 12:14:14 localhost \ |
| haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \ |
| static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {1wt.eu} \ |
| {} "GET /index.html HTTP/1.1" |
| |
| Field Format Extract from the example above |
| 1 process_name '[' pid ']:' haproxy[14389]: |
| 2 client_ip ':' client_port 10.0.1.2:33317 |
| 3 '[' request_date ']' [06/Feb/2009:12:14:14.655] |
| 4 frontend_name http-in |
| 5 backend_name '/' server_name static/srv1 |
| 6 TR '/' Tw '/' Tc '/' Tr '/' Ta* 10/0/30/69/109 |
| 7 status_code 200 |
| 8 bytes_read* 2750 |
| 9 captured_request_cookie - |
| 10 captured_response_cookie - |
| 11 termination_state ---- |
| 12 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 1/1/1/1/0 |
| 13 srv_queue '/' backend_queue 0/0 |
| 14 '{' captured_request_headers* '}' {1wt.eu} |
| 15 '{' captured_response_headers* '}' {} |
| 16 '"' http_request '"' "GET /index.html HTTP/1.1" |
| |
| Detailed fields description : |
| - "client_ip" is the IP address of the client which initiated the TCP |
| connection to HAProxy. If the connection was accepted on a UNIX socket |
| instead, the IP address would be replaced with the word "unix". Note that |
| when the connection is accepted on a socket configured with "accept-proxy" |
| and the PROXY protocol is correctly used, or with a "accept-netscaler-cip" |
| and the NetScaler Client IP insertion protocol is correctly used, then the |
| logs will reflect the forwarded connection's information. |
| |
| - "client_port" is the TCP port of the client which initiated the connection. |
| If the connection was accepted on a UNIX socket instead, the port would be |
| replaced with the ID of the accepting socket, which is also reported in the |
| stats interface. |
| |
| - "request_date" is the exact date when the first byte of the HTTP request |
| was received by HAProxy (log field %tr). |
| |
| - "frontend_name" is the name of the frontend (or listener) which received |
| and processed the connection. |
| |
| - "backend_name" is the name of the backend (or listener) which was selected |
| to manage the connection to the server. This will be the same as the |
| frontend if no switching rule has been applied. |
| |
| - "server_name" is the name of the last server to which the connection was |
| sent, which might differ from the first one if there were connection errors |
| and a redispatch occurred. Note that this server belongs to the backend |
| which processed the request. If the request was aborted before reaching a |
| server, "<NOSRV>" is indicated instead of a server name. If the request was |
| intercepted by the stats subsystem, "<STATS>" is indicated instead. |
| |
| - "TR" is the total time in milliseconds spent waiting for a full HTTP |
| request from the client (not counting body) after the first byte was |
| received. It can be "-1" if the connection was aborted before a complete |
| request could be received or a bad request was received. It should |
| always be very small because a request generally fits in one single packet. |
| Large times here generally indicate network issues between the client and |
| HAProxy or requests being typed by hand. See section 8.4 "Timing Events" |
| for more details. |
| |
| - "Tw" is the total time in milliseconds spent waiting in the various queues. |
| It can be "-1" if the connection was aborted before reaching the queue. |
| See section 8.4 "Timing Events" for more details. |
| |
| - "Tc" is the total time in milliseconds spent waiting for the connection to |
| establish to the final server, including retries. It can be "-1" if the |
| request was aborted before a connection could be established. See section |
| 8.4 "Timing Events" for more details. |
| |
| - "Tr" is the total time in milliseconds spent waiting for the server to send |
| a full HTTP response, not counting data. It can be "-1" if the request was |
| aborted before a complete response could be received. It generally matches |
| the server's processing time for the request, though it may be altered by |
| the amount of data sent by the client to the server. Large times here on |
| "GET" requests generally indicate an overloaded server. See section 8.4 |
| "Timing Events" for more details. |
| |
| - "Ta" is the time the request remained active in HAProxy, which is the total |
| time in milliseconds elapsed between the first byte of the request was |
| received and the last byte of response was sent. It covers all possible |
| processing except the handshake (see Th) and idle time (see Ti). There is |
| one exception, if "option logasap" was specified, then the time counting |
| stops at the moment the log is emitted. In this case, a '+' sign is |
| prepended before the value, indicating that the final one will be larger. |
| See section 8.4 "Timing Events" for more details. |
| |
| - "status_code" is the HTTP status code returned to the client. This status |
| is generally set by the server, but it might also be set by HAProxy when |
| the server cannot be reached or when its response is blocked by HAProxy. |
| |
| - "bytes_read" is the total number of bytes transmitted to the client when |
| the log is emitted. This does include HTTP headers. If "option logasap" is |
| specified, this value will be prefixed with a '+' sign indicating that |
| the final one may be larger. Please note that this value is a 64-bit |
| counter, so log analysis tools must be able to handle it without |
| overflowing. |
| |
| - "captured_request_cookie" is an optional "name=value" entry indicating that |
| the client had this cookie in the request. The cookie name and its maximum |
| length are defined by the "capture cookie" statement in the frontend |
| configuration. The field is a single dash ('-') when the option is not |
| set. Only one cookie may be captured, it is generally used to track session |
| ID exchanges between a client and a server to detect session crossing |
| between clients due to application bugs. For more details, please consult |
| the section "Capturing HTTP headers and cookies" below. |
| |
| - "captured_response_cookie" is an optional "name=value" entry indicating |
| that the server has returned a cookie with its response. The cookie name |
| and its maximum length are defined by the "capture cookie" statement in the |
| frontend configuration. The field is a single dash ('-') when the option is |
| not set. Only one cookie may be captured, it is generally used to track |
| session ID exchanges between a client and a server to detect session |
| crossing between clients due to application bugs. For more details, please |
| consult the section "Capturing HTTP headers and cookies" below. |
| |
| - "termination_state" is the condition the session was in when the session |
| ended. This indicates the session state, which side caused the end of |
| session to happen, for what reason (timeout, error, ...), just like in TCP |
| logs, and information about persistence operations on cookies in the last |
| two characters. The normal flags should begin with "--", indicating the |
| session was closed by either end with no data remaining in buffers. See |
| below "Session state at disconnection" for more details. |
| |
| - "actconn" is the total number of concurrent connections on the process when |
| the session was logged. It is useful to detect when some per-process system |
| limits have been reached. For instance, if actconn is close to 512 or 1024 |
| when multiple connection errors occur, chances are high that the system |
| limits the process to use a maximum of 1024 file descriptors and that all |
| of them are used. See section 3 "Global parameters" to find how to tune the |
| system. |
| |
| - "feconn" is the total number of concurrent connections on the frontend when |
| the session was logged. It is useful to estimate the amount of resource |
| required to sustain high loads, and to detect when the frontend's "maxconn" |
| has been reached. Most often when this value increases by huge jumps, it is |
| because there is congestion on the backend servers, but sometimes it can be |
| caused by a denial of service attack. |
| |
| - "beconn" is the total number of concurrent connections handled by the |
| backend when the session was logged. It includes the total number of |
| concurrent connections active on servers as well as the number of |
| connections pending in queues. It is useful to estimate the amount of |
| additional servers needed to support high loads for a given application. |
| Most often when this value increases by huge jumps, it is because there is |
| congestion on the backend servers, but sometimes it can be caused by a |
| denial of service attack. |
| |
| - "srv_conn" is the total number of concurrent connections still active on |
| the server when the session was logged. It can never exceed the server's |
| configured "maxconn" parameter. If this value is very often close or equal |
| to the server's "maxconn", it means that traffic regulation is involved a |
| lot, meaning that either the server's maxconn value is too low, or that |
| there aren't enough servers to process the load with an optimal response |
| time. When only one of the server's "srv_conn" is high, it usually means |
| that this server has some trouble causing the requests to take longer to be |
| processed than on other servers. |
| |
| - "retries" is the number of connection retries experienced by this session |
| when trying to connect to the server. It must normally be zero, unless a |
| server is being stopped at the same moment the connection was attempted. |
| Frequent retries generally indicate either a network problem between |
| HAProxy and the server, or a misconfigured system backlog on the server |
| preventing new connections from being queued. This field may optionally be |
| prefixed with a '+' sign, indicating that the session has experienced a |
| redispatch after the maximal retry count has been reached on the initial |
| server. In this case, the server name appearing in the log is the one the |
| connection was redispatched to, and not the first one, though both may |
| sometimes be the same in case of hashing for instance. So as a general rule |
| of thumb, when a '+' is present in front of the retry count, this count |
| should not be attributed to the logged server. |
| |
| - "srv_queue" is the total number of requests which were processed before |
| this one in the server queue. It is zero when the request has not gone |
| through the server queue. It makes it possible to estimate the approximate |
| server's response time by dividing the time spent in queue by the number of |
| requests in the queue. It is worth noting that if a session experiences a |
| redispatch and passes through two server queues, their positions will be |
| cumulative. A request should not pass through both the server queue and the |
| backend queue unless a redispatch occurs. |
| |
| - "backend_queue" is the total number of requests which were processed before |
| this one in the backend's global queue. It is zero when the request has not |
| gone through the global queue. It makes it possible to estimate the average |
| queue length, which easily translates into a number of missing servers when |
| divided by a server's "maxconn" parameter. It is worth noting that if a |
| session experiences a redispatch, it may pass twice in the backend's queue, |
| and then both positions will be cumulative. A request should not pass |
| through both the server queue and the backend queue unless a redispatch |
| occurs. |
| |
| - "captured_request_headers" is a list of headers captured in the request due |
| to the presence of the "capture request header" statement in the frontend. |
| Multiple headers can be captured, they will be delimited by a vertical bar |
| ('|'). When no capture is enabled, the braces do not appear, causing a |
| shift of remaining fields. It is important to note that this field may |
| contain spaces, and that using it requires a smarter log parser than when |
| it's not used. Please consult the section "Capturing HTTP headers and |
| cookies" below for more details. |
| |
| - "captured_response_headers" is a list of headers captured in the response |
| due to the presence of the "capture response header" statement in the |
| frontend. Multiple headers can be captured, they will be delimited by a |
| vertical bar ('|'). When no capture is enabled, the braces do not appear, |
| causing a shift of remaining fields. It is important to note that this |
| field may contain spaces, and that using it requires a smarter log parser |
| than when it's not used. Please consult the section "Capturing HTTP headers |
| and cookies" below for more details. |
| |
| - "http_request" is the complete HTTP request line, including the method, |
| request and HTTP version string. Non-printable characters are encoded (see |
| below the section "Non-printable characters"). This is always the last |
| field, and it is always delimited by quotes and is the only one which can |
| contain quotes. If new fields are added to the log format, they will be |
| added before this field. This field might be truncated if the request is |
| huge and does not fit in the standard syslog buffer (1024 characters). This |
| is the reason why this field must always remain the last one. |
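| |
| As a reminder, the cookie and header captures referenced in the fields above |
| are enabled in the frontend. A minimal, purely illustrative example (the |
| cookie name is hypothetical) could be : |
| |
| frontend http-in |
| mode http |
| option httplog |
| # capture the application session cookie and two common headers |
| capture cookie JSESSIONID= len 32 |
| capture request header Host len 64 |
| capture response header Content-Length len 10 |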
| |
| |
| 8.2.4. HTTPS log format |
| ----------------------- |
| |
| The HTTPS format is the best suited for HTTP over SSL connections. It is an |
| extension of the HTTP format (see section 8.2.3) to which SSL related |
| information are added. It is enabled when "option httpslog" is specified in the |
| frontend. Just like the TCP and HTTP formats, the log is usually emitted at the |
| end of the session, unless "option logasap" is specified. A session which |
| matches the "monitor" rules will never logged. It is also possible not to log |
| sessions for which no data were sent by the client by specifying "option |
| dontlognull" in the frontend. Successful connections will not be logged if |
| "option dontlog-normal" is specified in the frontend. |
| |
| The HTTPS log format is internally declared as a custom log format based on the |
| exact following string, which may also be used as a basis to extend the format |
| if required. Additionally the HAPROXY_HTTPS_LOG_FMT variable can be used |
| instead. Refer to section 8.2.6 "Custom log format" to see how to use this: |
| |
| # strict equivalent of "option httpslog" |
| log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \ |
| %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \ |
| %[fc_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/\ |
| %[ssl_c_ca_err]/%[ssl_fc_is_resumed] %[ssl_fc_sni]/%sslv/%sslc" |
| # or using the HAPROXY_HTTPS_LOG_FMT variable |
| log-format "${HAPROXY_HTTPS_LOG_FMT}" |
| |
| This format is basically the HTTP one (see section 8.2.3) with new fields |
| appended to it. The new fields (lines 17 and 18) will be detailed here. For the |
| HTTP ones, refer to the HTTP section. |
| |
| Example : |
| frontend https-in |
| mode http |
| option httpslog |
| log global |
| bind *:443 ssl crt mycerts/srv.pem ... |
| default_backend static |
| |
| backend static |
| server srv1 127.0.0.1:8000 ssl crt mycerts/clt.pem ... |
| |
| >>> Feb 6 12:14:14 localhost \ |
| haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] https-in \ |
| static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {1wt.eu} \ |
| {} "GET /index.html HTTP/1.1" 0/0/0/0/0 \ |
| 1wt.eu/TLSv1.3/TLS_AES_256_GCM_SHA384 |
| |
| Field Format Extract from the example above |
| 1 process_name '[' pid ']:' haproxy[14389]: |
| 2 client_ip ':' client_port 10.0.1.2:33317 |
| 3 '[' request_date ']' [06/Feb/2009:12:14:14.655] |
| 4 frontend_name https-in |
| 5 backend_name '/' server_name static/srv1 |
| 6 TR '/' Tw '/' Tc '/' Tr '/' Ta* 10/0/30/69/109 |
| 7 status_code 200 |
| 8 bytes_read* 2750 |
| 9 captured_request_cookie - |
| 10 captured_response_cookie - |
| 11 termination_state ---- |
| 12 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 1/1/1/1/0 |
| 13 srv_queue '/' backend_queue 0/0 |
| 14 '{' captured_request_headers* '}' {1wt.eu} |
| 15 '{' captured_response_headers* '}' {} |
| 16 '"' http_request '"' "GET /index.html HTTP/1.1" |
| 17 fc_err '/' ssl_fc_err '/' ssl_c_err |
| '/' ssl_c_ca_err '/' ssl_fc_is_resumed 0/0/0/0/0 |
| 18 ssl_fc_sni '/' ssl_version |
| '/' ssl_ciphers 1wt.eu/TLSv1.3/TLS_AES_256_GCM_SHA384 |
| |
| Detailed fields description : |
| - "fc_err" is the status of the connection on the frontend's side. It |
| corresponds to the "fc_err" sample fetch. See the "fc_err" and "fc_err_str" |
| sample fetch functions for more information. |
| |
| - "ssl_fc_err" is the last error of the first SSL error stack that was |
| raised on the connection from the frontend's perspective. It might be used |
| to detect SSL handshake errors for instance. It will be 0 if everything |
| went well. See the "ssl_fc_err" sample fetch's description for more |
| information. |
| |
| - "ssl_c_err" is the status of the client's certificate verification process. |
| The handshake might be successful while having a non-null verification |
| error code if it is an ignored one. See the "ssl_c_err" sample fetch and |
| the "crt-ignore-err" option. |
| |
| - "ssl_c_ca_err" is the status of the client's certificate chain verification |
| process. The handshake might be successful while having a non-null |
| verification error code if it is an ignored one. See the "ssl_c_ca_err" |
| sample fetch and the "ca-ignore-err" option. |
| |
| - "ssl_fc_is_resumed" is true if the incoming TLS session was resumed with |
| the stateful cache or a stateless ticket. Don't forget that a TLS session |
| can be shared by multiple requests. |
| |
| - "ssl_fc_sni" is the SNI (Server Name Indication) presented by the client |
| to select the certificate to be used. It usually matches the host name for |
| the first request of a connection. An absence of this field may indicate |
| that the SNI was not sent by the client, and will lead haproxy to use the |
| default certificate, or to reject the connection in case of strict-sni. |
| |
| - "ssl_version" is the SSL version of the frontend. |
| |
| - "ssl_ciphers" is the SSL cipher used for the connection. |
| |
| |
| 8.2.5. Error log format |
| ----------------------- |
| |
| When an incoming connection fails due to an SSL handshake or an invalid PROXY |
| protocol header, HAProxy will log the event using a shorter, fixed line format, |
| unless a dedicated error log format is defined through an "error-log-format" |
| line. By default, logs are emitted at the LOG_INFO level, unless the option |
| "log-separate-errors" is set in the backend, in which case the LOG_ERR level |
| will be used. Connections on which no data are exchanged (e.g. probes) are not |
| logged if the "dontlognull" option is set. |
| |
| The default format looks like this : |
| |
| >>> Dec 3 18:27:14 localhost \ |
| haproxy[6103]: 127.0.0.1:56059 [03/Dec/2012:17:35:10.380] frt/f1: \ |
| Connection error during SSL handshake |
| |
| Field Format Extract from the example above |
| 1 process_name '[' pid ']:' haproxy[6103]: |
| 2 client_ip ':' client_port 127.0.0.1:56059 |
| 3 '[' accept_date ']' [03/Dec/2012:17:35:10.380] |
| 4 frontend_name "/" bind_name ":" frt/f1: |
| 5 message Connection error during SSL handshake |
| |
| These fields just provide minimal information to help debugging connection |
| failures. |
| |
| By using the "error-log-format" directive, the legacy log format described |
| above will not be used anymore, and all error log lines will follow the |
| defined format. |
| |
| An example of a reasonably complete error-log-format follows; it will report the |
| source address and port, the connection accept() date, the frontend name, the |
| number of active connections on the process and on this frontend, HAProxy's |
| internal error identifier on the front connection, the hexadecimal OpenSSL |
| error number (that can be copy-pasted to "openssl errstr" for full decoding), |
| the client certificate extraction status (0 indicates no error), the client |
| certificate validation status using the CA (0 indicates no error), a boolean |
| indicating if the connection is new or was resumed, the optional server name |
| indication (SNI) provided by the client, the SSL version name and the SSL |
| ciphers used on the connection, if any. Note that backend connection errors |
| are never reported here since in order for a backend connection to fail, it |
| would have passed through a successful stream, hence will be available as |
| regular traffic log (see option httplog or option httpslog). |
| |
| # detailed frontend connection error log |
| error-log-format "%ci:%cp [%tr] %ft %ac/%fc %[fc_err]/\ |
| %[ssl_fc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_fc_is_resumed] \ |
| %[ssl_fc_sni]/%sslv/%sslc" |
| |
| |
| 8.2.6. Custom log format |
| ------------------------ |
| |
| When the default log formats are not sufficient, it is possible to define new |
| ones in very fine details. As creating a log-format from scratch is not always |
| a trivial task, it is strongly recommended to first have a look at the existing |
| formats ("option tcplog", "option httplog", "option httpslog"), pick the one |
| looking the closest to the expectation, copy its "log-format" equivalent string |
| and adjust it. |
| |
| HAProxy understands some log format variables, each introduced by a '%' sign. |
| Variables can take arguments using braces ('{}'), and multiple arguments are |
| separated by commas within the braces. Flags may be added or removed by |
| prefixing them with a '+' or '-' sign. |
| |
| Special variable "%o" may be used to propagate its flags to all other |
| variables on the same format string. This is particularly handy with quoted |
| ("Q") and escaped ("E") string formats. |
| |
| If a variable is named between square brackets ('[' .. ']') then it is used |
| as a sample expression rule (see section 7.3). This is useful to add some |
| less common information such as the client's SSL certificate's DN, or to log |
| the key that would be used to store an entry into a stick table. |
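| |
| For instance, the following line (shown only as an illustration) logs the CN |
| of the client certificate and the source address that a "stick on src" rule |
| would use as a key : |
| |
| log-format "%ci:%cp [%tr] %ft %ST %[ssl_c_s_dn(cn)] %[src]" |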
| |
| Note: spaces must be escaped. In configuration directives "log-format", |
| "log-format-sd" and "unique-id-format", spaces are considered as |
| delimiters and are merged. In order to emit a verbatim '%', it must be |
| preceded by another '%' resulting in '%%'. |
| |
| Note: when using the RFC5424 syslog message format, the characters '"', |
| '\' and ']' inside PARAM-VALUE should be escaped with '\' as prefix (see |
| https://tools.ietf.org/html/rfc5424#section-6.3.3 for more details). In |
| such cases, the use of the flag "E" should be considered. |
| |
| Flags are : |
| * Q: quote a string |
| * X: hexadecimal representation (IPs, Ports, %Ts, %rt, %pid) |
| * E: escape characters '"', '\' and ']' in a string with '\' as prefix |
| (intended purpose is for the RFC5424 structured-data log formats) |
| |
| Example: |
| |
| log-format %T\ %t\ Some\ Text |
| log-format %{+Q}o\ %t\ %s\ %{-Q}r |
| |
| log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]] |
| |
| Please refer to the table below for currently defined variables : |
| |
| +---+------+-----------------------------------------------+-------------+ |
| | R | var | field name (8.2.2 and 8.2.3 for description) | type | |
| +---+------+-----------------------------------------------+-------------+ |
| | | %o | special variable, apply flags on all next var | | |
| +---+------+-----------------------------------------------+-------------+ |
| | | %B | bytes_read (from server to client) | numeric | |
| | H | %CC | captured_request_cookie | string | |
| | H | %CS | captured_response_cookie | string | |
| | | %H | hostname | string | |
| | H | %HM | HTTP method (ex: POST) | string | |
| | H | %HP | HTTP request URI without query string | string | |
| | H | %HPO | HTTP path only (without host nor query string)| string | |
| | H | %HQ | HTTP request URI query string (ex: ?bar=baz) | string | |
| | H | %HU | HTTP request URI (ex: /foo?bar=baz) | string | |
| | H | %HV | HTTP version (ex: HTTP/1.0) | string | |
| | | %ID | unique-id | string | |
| | | %ST | status_code | numeric | |
| | | %T | gmt_date_time | date | |
| | H | %Ta | Active time of the request (from TR to end) | numeric | |
| | | %Tc | Tc | numeric | |
| | | %Td | Td = Tt - (Tq + Tw + Tc + Tr) | numeric | |
| | | %Tl | local_date_time | date | |
| | | %Th | connection handshake time (SSL, PROXY proto) | numeric | |
| | H | %Ti | idle time before the HTTP request | numeric | |
| | H | %Tq | Th + Ti + TR | numeric | |
| | H | %TR | time to receive the full request from 1st byte| numeric | |
| | H | %Tr | Tr (response time) | numeric | |
| | | %Ts | timestamp | numeric | |
| | | %Tt | Tt | numeric | |
| | | %Tu | Tu | numeric | |
| | | %Tw | Tw | numeric | |
| | | %U | bytes_uploaded (from client to server) | numeric | |
| | | %ac | actconn | numeric | |
| | | %b | backend_name | string | |
| | | %bc | beconn (backend concurrent connections) | numeric | |
| | | %bi | backend_source_ip (connecting address) | IP | |
| | | %bp | backend_source_port (connecting address) | numeric | |
| | | %bq | backend_queue | numeric | |
| | | %ci | client_ip (accepted address) | IP | |
| | | %cp | client_port (accepted address) | numeric | |
| | | %f | frontend_name | string | |
| | | %fc | feconn (frontend concurrent connections) | numeric | |
| | | %fi | frontend_ip (accepting address) | IP | |
| | | %fp | frontend_port (accepting address) | numeric | |
| | | %ft | frontend_name_transport ('~' suffix for SSL) | string | |
| | | %lc | frontend_log_counter | numeric | |
| | | %hr | captured_request_headers default style | string | |
| | | %hrl | captured_request_headers CLF style | string list | |
| | | %hs | captured_response_headers default style | string | |
| | | %hsl | captured_response_headers CLF style | string list | |
| | | %ms | accept date milliseconds (left-padded with 0) | numeric | |
| | | %pid | PID | numeric | |
| | H | %r | http_request | string | |
| | | %rc | retries | numeric | |
| | | %rt | request_counter (HTTP req or TCP session) | numeric | |
| | | %s | server_name | string | |
| | | %sc | srv_conn (server concurrent connections) | numeric | |
| | | %si | server_IP (target address) | IP | |
| | | %sp | server_port (target address) | numeric | |
| | | %sq | srv_queue | numeric | |
| | S | %sslc| ssl_ciphers (ex: AES-SHA) | string | |
| | S | %sslv| ssl_version (ex: TLSv1) | string | |
| | | %t | date_time (with millisecond resolution) | date | |
| | H | %tr | date_time of HTTP request | date | |
| | H | %trg | gmt_date_time of start of HTTP request | date | |
| | H | %trl | local_date_time of start of HTTP request | date | |
| | | %ts | termination_state | string | |
| | H | %tsc | termination_state with cookie status | string | |
| +---+------+-----------------------------------------------+-------------+ |
| |
| R = Restrictions : H = mode http only ; S = SSL only |
| |
| |
| 8.3. Advanced logging options |
| ----------------------------- |
| |
| Some advanced logging options are often looked for but are not easy to find out |
| just by looking at the various options. Here is an entry point for the few |
| options which can enable better logging. Please refer to the keywords reference |
| for more information about their usage. |
| |
| |
| 8.3.1. Disabling logging of external tests |
| ------------------------------------------ |
| |
| It is quite common to have some monitoring tools perform health checks on |
| HAProxy. Sometimes it will be a layer 3 load-balancer such as LVS or any |
| commercial load-balancer, and sometimes it will simply be a more complete |
| monitoring system such as Nagios. When the tests are very frequent, users often |
| ask how to disable logging for those checks. There are three possibilities : |
| |
| - if connections come from everywhere and are just TCP probes, it is often |
| desired to simply disable logging of connections without data exchange, by |
| setting "option dontlognull" in the frontend. It also disables logging of |
| port scans, which may or may not be desired. |
| |
| - it is possible to use the "http-request set-log-level silent" action using |
| a variety of conditions (source networks, paths, user-agents, etc). |
| |
| - if the tests are performed on a known URI, use "monitor-uri" to declare |
| this URI as dedicated to monitoring. Any host sending this request will |
| only get the result of a health-check, and the request will not be logged. |
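| |
| These approaches can be combined. A sketch, with an arbitrary monitoring URI |
| and source network, could look like this : |
| |
| frontend www |
| mode http |
| option httplog |
| option dontlognull |
| monitor-uri /haproxy-health |
| http-request set-log-level silent if { src 10.0.0.0/8 } |
| log global |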
| |
| |
| 8.3.2. Logging before waiting for the session to terminate |
| ---------------------------------------------------------- |
| |
| The problem with logging at end of connection is that you have no clue about |
| what is happening during very long sessions, such as remote terminal sessions |
| or large file downloads. This problem can be worked around by specifying |
| "option logasap" in the frontend. HAProxy will then log as soon as possible, |
| just before data transfer begins. This means that in case of TCP, it will still |
| log the connection status to the server, and in case of HTTP, it will log just |
| after processing the server headers. In this case, the number of bytes reported |
| is the number of header bytes sent to the client. In order to avoid confusion |
| with normal logs, the total time field and the number of bytes are prefixed |
| with a '+' sign which means that real numbers are certainly larger. |
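| |
| A minimal illustration, with an arbitrary frontend name : |
| |
| frontend downloads |
| mode http |
| option httplog |
| option logasap |
| log global |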
| |
| |
| 8.3.3. Raising log level upon errors |
| ------------------------------------ |
| |
| Sometimes it is more convenient to separate normal traffic from errors logs, |
| for instance in order to ease error monitoring from log files. When the option |
| "log-separate-errors" is used, connections which experience errors, timeouts, |
| retries, redispatches or HTTP status codes 5xx will see their syslog level |
| raised from "info" to "err". This will help a syslog daemon store the log in |
| a separate file. It is very important to keep the errors in the normal traffic |
| file too, so that log ordering is not altered. You should also be careful if |
| you already have configured your syslog daemon to store all logs higher than |
| "notice" in an "admin" file, because the "err" level is higher than "notice". |
| |
| |
| 8.3.4. Disabling logging of successful connections |
| -------------------------------------------------- |
| |
| Although this may sound strange at first, some large sites have to deal with |
| multiple thousands of logs per second and are experiencing difficulties keeping |
| them intact for a long time or detecting errors within them. If the option |
| "dontlog-normal" is set on the frontend, all normal connections will not be |
| logged. In this regard, a normal connection is defined as one without any |
| error, timeout, retry nor redispatch. In HTTP, the status code is checked too, |
| and a response with a status 5xx is not considered normal and will be logged |
| too. Of course, doing this is really discouraged as it will remove most of the |
| useful information from the logs. Do this only if you have no other |
| alternative. |
| |
| |
| 8.4. Timing events |
| ------------------ |
| |
| Timers provide a great help in troubleshooting network problems. All values are |
| reported in milliseconds (ms). These timers should be used in conjunction with |
| the session termination flags. In TCP mode with "option tcplog" set on the |
| frontend, 3 control points are reported under the form "Tw/Tc/Tt", and in HTTP |
| mode, 5 control points are reported under the form "TR/Tw/Tc/Tr/Ta". In |
| addition, three other measures are provided, "Th", "Ti", and "Tq". |
| |
| Timings events in HTTP mode: |
| |
| first request 2nd request |
| |<-------------------------------->|<-------------- ... |
| t tr t tr ... |
| ---|----|----|----|----|----|----|----|----|-- |
| : Th Ti TR Tw Tc Tr Td : Ti ... |
| :<---- Tq ---->: : |
| :<-------------- Tt -------------->: |
| :<-- -----Tu--------------->: |
| :<--------- Ta --------->: |
| |
| Timings events in TCP mode: |
| |
| TCP session |
| |<----------------->| |
| t t |
| ---|----|----|----|----|--- |
| | Th Tw Tc Td | |
| |<------ Tt ------->| |
| |
| - Th: total time to accept tcp connection and execute handshakes for low level |
| protocols. Currently, these protocols are proxy-protocol and SSL. This may |
| only happen once during the whole connection's lifetime. A large time here |
| may indicate that the client only pre-established the connection without |
| speaking, that it is experiencing network issues preventing it from |
| completing a handshake in a reasonable time (e.g. MTU issues), or that an |
| SSL handshake was very expensive to compute. Please note that this time is |
| reported only before the first request, so it is safe to average it over |
| all requests to calculate the amortized value. The second and subsequent |
| requests will always report zero here. |
| |
| - Ti: is the idle time before the HTTP request (HTTP mode only). This timer |
| counts between the end of the handshakes and the first byte of the HTTP |
| request. When dealing with a second request in keep-alive mode, it starts |
| to count after the end of the transmission of the previous response. When a |
| multiplexed protocol such as HTTP/2 is used, it starts to count immediately |
| after the previous request. Some browsers pre-establish connections to a |
| server in order to reduce the latency of a future request, and keep them |
| pending until they need it. This delay will be reported as the idle time. A |
| value of -1 indicates that nothing was received on the connection. |
| |
| - TR: total time to get the client request (HTTP mode only). It's the time |
| elapsed between the first bytes received and the moment the proxy received |
| the empty line marking the end of the HTTP headers. The value "-1" |
| indicates that the end of headers has never been seen. This happens when |
| the client closes prematurely or times out. This time is usually very short |
| since most requests fit in a single packet. A large time may indicate a |
| request typed by hand during a test. |
| |
| - Tq: total time to get the client request from the accept date or since the |
| emission of the last byte of the previous response (HTTP mode only). It's |
| exactly equal to Th + Ti + TR unless any of them is -1, in which case it |
| returns -1 as well. This timer used to be very useful before the arrival of |
| HTTP keep-alive and browsers' pre-connect feature. It's recommended to drop |
| it in favor of TR nowadays, as the idle time adds a lot of noise to the |
| reports. |
| |
| - Tw: total time spent in the queues waiting for a connection slot. It |
| accounts for backend queue as well as the server queues, and depends on the |
| queue size, and the time needed for the server to complete previous |
| requests. The value "-1" means that the request was killed before reaching |
| the queue, which is generally what happens with invalid or denied requests. |
| |
| - Tc: total time to establish the TCP connection to the server. It's the time |
| elapsed between the moment the proxy sent the connection request, and the |
| moment it was acknowledged by the server, or between the TCP SYN packet and |
| the matching SYN/ACK packet in return. The value "-1" means that the |
| connection never established. |
| |
| - Tr: server response time (HTTP mode only). It's the time elapsed between |
| the moment the TCP connection was established to the server and the moment |
| the server sent its complete response headers. It purely shows its request |
| processing time, without the network overhead due to the data transmission. |
| It is worth noting that when the client has data to send to the server, for |
| instance during a POST request, the time already runs, and this can distort |
| apparent response time. For this reason, it's generally wise not to trust |
| too much this field for POST requests initiated from clients behind an |
| untrusted network. A value of "-1" here means that the last response header |
| (empty line) was never seen, most likely because the server timeout struck |
| before the server managed to process the request. |
| |
| - Td: this is the total transfer time of the response payload till the last |
| byte sent to the client. In HTTP it starts after the last response header |
| (after Tr). |
| |
| The data sent are not guaranteed to be received by the client; they can be |
| stuck in either the kernel or the network. |
| |
| - Ta: total active time for the HTTP request, between the moment the proxy |
| received the first byte of the request header and the emission of the last |
| byte of the response body. The exception is when the "logasap" option is |
| specified. In this case, it only equals (TR+Tw+Tc+Tr), and is prefixed with |
| a '+' sign. From this field, we can deduce "Td", the data transmission time, |
| by subtracting other timers when valid : |
| |
| Td = Ta - (TR + Tw + Tc + Tr) |
| |
| Timers with "-1" values have to be excluded from this equation. Note that |
| "Ta" can never be negative. |
| |
| - Tt: total session duration time, between the moment the proxy accepted it |
| and the moment both ends were closed. The exception is when the "logasap" |
| option is specified. In this case, it only equals (Th+Ti+TR+Tw+Tc+Tr), and |
| is prefixed with a '+' sign. From this field, we can deduce "Td", the data |
| transmission time, by subtracting other timers when valid : |
| |
| Td = Tt - (Th + Ti + TR + Tw + Tc + Tr) |
| |
| Timers with "-1" values have to be excluded from this equation. In TCP |
| mode, "Ti", "Tq" and "Tr" have to be excluded too. Note that "Tt" can never |
| be negative and that for HTTP, Tt is simply equal to (Th+Ti+Ta). |
| |
| - Tu: total estimated time as seen from client, between the moment the proxy |
| accepted it and the moment both ends were closed, without idle time. |
| This is useful to roughly measure end-to-end time as a user would see it, |
| without idle time pollution from keep-alive time between requests. This |
| timer is only an estimate of the time seen by the user, as it assumes network
| latency is the same in both directions. The exception is when the "logasap" |
| option is specified. In this case, it only equals (Th+TR+Tw+Tc+Tr), and is |
| prefixed with a '+' sign. |
| |
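| As an illustration of the "Td" deduction described above, here is a worked
| example. It is only a sketch: it reuses the timer values of the first HTTP
| log sample shown in section 8.9 ("6559/0/7/147/6723", i.e. TR/Tw/Tc/Tr/Ta) :
|
| Td = Ta - (TR + Tw + Tc + Tr)
| = 6723 - (6559 + 0 + 7 + 147)
| = 6723 - 6713
| = 10 ms
|
| In that transaction, almost all of the time was spent receiving the request
| (TR) and waiting for the server's headers (Tr); only about 10 ms were needed
| to transfer the response payload to the client.
|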
| These timers provide precious indications on trouble causes. Since the TCP |
| protocol defines retransmit delays of 3, 6, 12... seconds, we know for sure |
| that timers close to multiples of 3s are nearly always related to lost packets |
| due to network problems (wires, negotiation, congestion). Moreover, if "Ta" or |
| "Tt" is close to a timeout value specified in the configuration, it often means |
| that a session has been aborted on timeout. |
| |
| Most common cases : |
| |
| - If "Th" or "Ti" are close to 3000, a packet has probably been lost between |
| the client and the proxy. This is very rare on local networks but might |
| happen when clients are on far remote networks and send large requests. It |
| may happen that values larger than usual appear here without any network |
| cause. Sometimes, during an attack or just after a resource starvation has |
| ended, HAProxy may accept thousands of connections in a few milliseconds. |
| The time spent accepting these connections will inevitably slightly delay |
| processing of other connections, and it can happen that request times in the |
| order of a few tens of milliseconds are measured after a few thousand
| new connections have been accepted at once. Using one of the keep-alive |
| modes may display larger idle times since "Ti" measures the time spent |
| waiting for additional requests. |
| |
| - If "Tc" is close to 3000, a packet has probably been lost between the |
| server and the proxy during the server connection phase. This value should |
| always be very low, such as 1 ms on local networks and less than a few tens |
| of ms on remote networks. |
| |
| - If "Tr" is nearly always lower than 3000 except some rare values which seem |
| to be the average majored by 3000, there are probably some packets lost |
| between the proxy and the server. |
| |
| - If "Ta" is large even for small byte counts, it generally is because |
| neither the client nor the server decides to close the connection while |
| HAProxy is running in tunnel mode and both have agreed on a keep-alive |
| connection mode. To solve this issue, you will need to specify one of the
| HTTP options that manipulate the keep-alive or close behavior on either the
| frontend or the backend, as sketched below. Having the smallest possible
| 'Ta' or 'Tt' is
| important when connection regulation is used with the "maxconn" option on |
| the servers, since no new connection will be sent to the server until |
| another one is released. |
| |
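| The following minimal sketch illustrates the last point above. The frontend
| and backend names are hypothetical; "option http-server-close" is one of the
| HTTP options mentioned above which avoid long-lived tunnels inflating "Ta"
| and "Tt" :
|
| frontend www
| mode http
| bind *:80
| # close the server-side connection after each response instead of
| # tunneling, so each transaction is accounted for individually
| option http-server-close
| default_backend app # hypothetical backend name
|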
| Other noticeable HTTP log cases ('xx' means any value to be ignored) : |
| |
| TR/Tw/Tc/Tr/+Ta The "option logasap" is present on the frontend and the log |
| was emitted before the data phase. All the timers are valid |
| except "Ta" which is shorter than reality. |
| |
| -1/xx/xx/xx/Ta The client was not able to send a complete request in time |
| or it aborted too early. Check the session termination flags |
| then "timeout http-request" and "timeout client" settings. |
| |
| TR/-1/xx/xx/Ta It was not possible to process the request, maybe because |
| servers were out of order, because the request was invalid |
| or forbidden by ACL rules. Check the session termination |
| flags. |
| |
| TR/Tw/-1/xx/Ta The connection could not establish on the server. Either it |
| actively refused it or it timed out after Ta-(TR+Tw) ms. |
| Check the session termination flags, then check the |
| "timeout connect" setting. Note that the tarpit action might |
| return similar-looking patterns, with "Tw" equal to the time |
| the client connection was maintained open. |
| |
| TR/Tw/Tc/-1/Ta The server has accepted the connection but did not return |
| a complete response in time, or it closed its connection |
| unexpectedly after Ta-(TR+Tw+Tc) ms. Check the session |
| termination flags, then check the "timeout server" setting. |
| |
| |
| 8.5. Session state at disconnection |
| ----------------------------------- |
| |
| TCP and HTTP logs provide a session termination indicator in the |
| "termination_state" field, just before the number of active connections. It is |
| 2-characters long in TCP mode, and is extended to 4 characters in HTTP mode, |
| each of which has a special meaning : |
| |
| - On the first character, a code reporting the first event which caused the |
| session to terminate : |
| |
| C : the TCP session was unexpectedly aborted by the client. |
| |
| S : the TCP session was unexpectedly aborted by the server, or the |
| server explicitly refused it. |
| |
| P : the session was prematurely aborted by the proxy, because of a |
| connection limit enforcement, because a DENY filter was matched, |
| because of a security check which detected and blocked a dangerous |
| error in server response which might have caused information leak |
| (e.g. cacheable cookie). |
| |
| L : the session was locally processed by HAProxy. |
| |
| R : a resource on the proxy has been exhausted (memory, sockets, source |
| ports, ...). Usually, this appears during the connection phase, and |
| system logs should contain a copy of the precise error. If this |
| happens, it must be considered as a very serious anomaly which |
| should be fixed as soon as possible by any means. |
| |
| I : an internal error was identified by the proxy during a self-check. |
| This should NEVER happen, and you are encouraged to report any log |
| containing this, because this would almost certainly be a bug. It |
| would be wise to preventively restart the process after such an |
| event too, in case it would be caused by memory corruption. |
| |
| D : the session was killed by HAProxy because the server was detected |
| as down and was configured to kill all connections when going down. |
| |
| U : the session was killed by HAProxy on this backup server because an |
| active server was detected as up and was configured to kill all |
| backup connections when going up. |
| |
| K : the session was actively killed by an admin operating on HAProxy. |
| |
| c : the client-side timeout expired while waiting for the client to |
| send or receive data. |
| |
| s : the server-side timeout expired while waiting for the server to |
| send or receive data. |
| |
| - : normal session completion, both the client and the server closed |
| with nothing left in the buffers. |
| |
| - on the second character, the TCP or HTTP session state when it was closed : |
| |
| R : the proxy was waiting for a complete, valid REQUEST from the client |
| (HTTP mode only). Nothing was sent to any server. |
| |
| Q : the proxy was waiting in the QUEUE for a connection slot. This can |
| only happen when servers have a 'maxconn' parameter set. It can |
| also happen in the global queue after a redispatch consecutive to |
| a failed attempt to connect to a dying server. If no redispatch is |
| reported, then no connection attempt was made to any server. |
| |
| C : the proxy was waiting for the CONNECTION to establish on the |
| server. The server might at most have noticed a connection attempt. |
| |
| H : the proxy was waiting for complete, valid response HEADERS from the |
| server (HTTP only). |
| |
| D : the session was in the DATA phase. |
| |
| L : the proxy was still transmitting LAST data to the client while the |
| server had already finished. This one is very rare as it can only |
| happen when the client dies while receiving the last packets. |
| |
| T : the request was tarpitted. It has been held open with the client |
| during the whole "timeout tarpit" duration or until the client |
| closed, both of which will be reported in the "Tw" timer. |
| |
| - : normal session completion after end of data transfer. |
| |
| - the third character tells whether the persistence cookie was provided by |
| the client (only in HTTP mode) : |
| |
| N : the client provided NO cookie. This is usually the case for new |
| visitors, so counting the number of occurrences of this flag in the |
| logs generally indicates a valid trend for the site's traffic.
| |
| I : the client provided an INVALID cookie matching no known server. |
| This might be caused by a recent configuration change, mixed |
| cookies between HTTP/HTTPS sites, persistence conditionally |
| ignored, or an attack. |
| |
| D : the client provided a cookie designating a server which was DOWN, |
| so either "option persist" was used and the client was sent to |
| this server, or it was not set and the client was redispatched to |
| another server. |
| |
| V : the client provided a VALID cookie, and was sent to the associated |
| server. |
| |
| E : the client provided a valid cookie, but with a last date which was |
| older than what is allowed by the "maxidle" cookie parameter, so |
| the cookie is considered EXPIRED and is ignored. The request will be
| redispatched just as if there was no cookie. |
| |
| O : the client provided a valid cookie, but with a first date which was |
| older than what is allowed by the "maxlife" cookie parameter, so |
| the cookie is considered too OLD and is ignored. The request will be
| redispatched just as if there was no cookie. |
| |
| U : a cookie was present but was not used to select the server because |
| some other server selection mechanism was used instead (typically a |
| "use-server" rule). |
| |
| - : does not apply (no cookie set in configuration). |
| |
| - the last character reports what operations were performed on the persistence |
| cookie returned by the server (only in HTTP mode) : |
| |
| N : NO cookie was provided by the server, and none was inserted either. |
| |
| I : no cookie was provided by the server, and the proxy INSERTED one. |
| Note that in "cookie insert" mode, if the server provides a cookie, |
| it will still be overwritten and reported as "I" here. |
| |
| U : the proxy UPDATED the last date in the cookie that was presented by |
| the client. This can only happen in insert mode with "maxidle". It |
| happens every time there is activity at a different date than the |
| date indicated in the cookie. If any other change happens, such as |
| a redispatch, then the cookie will be marked as inserted instead. |
| |
| P : a cookie was PROVIDED by the server and transmitted as-is. |
| |
| R : the cookie provided by the server was REWRITTEN by the proxy, which |
| happens in "cookie rewrite" or "cookie prefix" modes. |
| |
| D : the cookie provided by the server was DELETED by the proxy. |
| |
| - : does not apply (no cookie set in configuration). |
| |
| The combination of the first two flags gives a lot of information about what
| was happening when the session terminated, and why it did terminate. It can be |
| helpful to detect server saturation, network troubles, local system resource |
| starvation, attacks, etc... |
| |
| The most common termination flags combinations are indicated below. They are |
| alphabetically sorted, with the lowercase set just after the upper case for |
| easier finding and understanding. |
| |
| Flags Reason |
| |
| -- Normal termination. |
| |
| CC The client aborted before the connection could be established to the |
| server. This can happen when HAProxy tries to connect to a recently |
| dead (or unchecked) server, and the client aborts while HAProxy is |
| waiting for the server to respond or for "timeout connect" to expire. |
| |
| CD The client unexpectedly aborted during data transfer. This can be |
| caused by a browser crash, by an intermediate equipment between the |
| client and HAProxy which decided to actively break the connection, |
| by network routing issues between the client and HAProxy, or by a |
| keep-alive session between the server and the client terminated first |
| by the client. |
| |
| cD The client did not send nor acknowledge any data for as long as the |
| "timeout client" delay. This is often caused by network failures on |
| the client side, or the client simply leaving the net uncleanly. |
| |
| CH The client aborted while waiting for the server to start responding. |
| It might be the server taking too long to respond or the client |
| clicking the 'Stop' button too fast. |
| |
| cH The "timeout client" stroke while waiting for client data during a |
| POST request. This is sometimes caused by too large TCP MSS values |
| for PPPoE networks which cannot transport full-sized packets. It can |
| also happen when client timeout is smaller than server timeout and |
| the server takes too long to respond. |
| |
| CQ The client aborted while its session was queued, waiting for a server |
| with enough empty slots to accept it. It might be that either all the |
| servers were saturated or that the assigned server was taking too |
| long a time to respond. |
| |
| CR The client aborted before sending a full HTTP request. Most likely |
| the request was typed by hand using a telnet client, and aborted |
| too early. The HTTP status code is likely a 400 here. Sometimes this |
| might also be caused by an IDS killing the connection between HAProxy |
| and the client. "option http-ignore-probes" can be used to ignore |
| connections without any data transfer. |
| |
| cR The "timeout http-request" stroke before the client sent a full HTTP |
| request. This is sometimes caused by too large TCP MSS values on the |
| client side for PPPoE networks which cannot transport full-sized |
| packets, or by clients sending requests by hand and not typing fast |
| enough, or forgetting to enter the empty line at the end of the |
| request. The HTTP status code is likely a 408 here. Note: recently, |
| some browsers started to implement a "pre-connect" feature consisting |
| in speculatively connecting to some recently visited web sites just |
| in case the user would like to visit them. This results in many |
| connections being established to web sites, which end up in 408 |
| Request Timeout if the timeout strikes first, or 400 Bad Request when |
| the browser decides to close them first. These ones pollute the log |
| and feed the error counters. Some versions of some browsers have even |
| been reported to display the error code. It is possible to work |
| around the undesirable effects of this behavior by adding "option |
| http-ignore-probes" in the frontend, resulting in connections with |
| zero data transfer to be totally ignored. This will definitely hide |
| the errors of people experiencing connectivity issues though. |
| |
| CT The client aborted while its session was tarpitted. It is important to |
| check if this happens on valid requests, in order to be sure that no |
| wrong tarpit rules have been written. If a lot of them happen, it |
| might make sense to lower the "timeout tarpit" value to something |
| closer to the average reported "Tw" timer, in order not to consume |
| resources for just a few attackers. |
| |
| LC The request was intercepted and locally handled by HAProxy. The |
| request was not sent to the server. It only happens with a redirect |
| because of a "redir" parameter on the server line. |
| |
| LR The request was intercepted and locally handled by HAProxy. The |
| request was not sent to the server. Generally it means a redirect was |
| returned, an HTTP return statement was processed or the request was |
| handled by an applet (stats, cache, Prometheus exporter, lua applet...).
| |
| LH The response was intercepted and locally handled by HAProxy. Generally |
| it means a redirect was returned or an HTTP return statement was |
| processed. |
| |
| SC The server or an equipment between it and HAProxy explicitly refused |
| the TCP connection (the proxy received a TCP RST or an ICMP message |
| in return). Under some circumstances, it can also be the network |
| stack telling the proxy that the server is unreachable (e.g. no route, |
| or no ARP response on local network). When this happens in HTTP mode, |
| the status code is likely a 502 or 503 here. |
| |
| sC The "timeout connect" stroke before a connection to the server could |
| complete. When this happens in HTTP mode, the status code is likely a |
| 503 or 504 here. |
| |
| SD The connection to the server died with an error during the data |
| transfer. This usually means that HAProxy has received an RST from |
| the server or an ICMP message from an intermediate equipment while |
| exchanging data with the server. This can be caused by a server crash |
| or by a network issue on an intermediate equipment. |
| |
| sD The server did not send nor acknowledge any data for as long as the |
| "timeout server" setting during the data phase. This is often caused |
| by too short timeouts on L4 equipment before the server (firewalls, |
| load-balancers, ...), as well as keep-alive sessions maintained |
| between the client and the server expiring first on HAProxy. |
| |
| SH The server aborted before sending its full HTTP response headers, or |
| it crashed while processing the request. Since a server aborting at |
| this moment is very rare, it would be wise to inspect its logs to |
| control whether it crashed and why. The logged request may indicate a |
| small set of faulty requests, demonstrating bugs in the application. |
| Sometimes this might also be caused by an IDS killing the connection |
| between HAProxy and the server. |
| |
| sH The "timeout server" stroke before the server could return its |
| response headers. This is the most common anomaly, indicating too |
| long transactions, probably caused by server or database saturation. |
| The immediate workaround consists in increasing the "timeout server" |
| setting, but it is important to keep in mind that the user experience |
| will suffer from these long response times. The only long term |
| solution is to fix the application. |
| |
| sQ The session spent too much time in queue and has been expired. See |
| the "timeout queue" and "timeout connect" settings to find out how to |
| fix this if it happens too often. If it often happens massively in |
| short periods, it may indicate general problems on the affected |
| servers due to I/O or database congestion, or saturation caused by |
| external attacks. |
| |
| PC The proxy refused to establish a connection to the server because the |
| process's socket limit has been reached while attempting to connect. |
| The global "maxconn" parameter may be increased in the configuration |
| so that it does not happen anymore. This status is very rare and |
| might happen when the global "ulimit-n" parameter is forced by hand. |
| |
| PD The proxy blocked an incorrectly formatted chunked encoded message in |
| a request or a response, after the server has emitted its headers. In |
| most cases, this will indicate an invalid message from the server to |
| the client. HAProxy supports chunk sizes of up to 2GB - 1 (2147483647 |
| bytes). Any larger size will be considered as an error. |
| |
| PH The proxy blocked the server's response, because it was invalid, |
| incomplete, dangerous (cache control), or matched a security filter. |
| In any case, an HTTP 502 error is sent to the client. One possible |
| cause for this error is an invalid syntax in an HTTP header name |
| containing unauthorized characters. It is also possible but quite |
| rare, that the proxy blocked a chunked-encoding request from the |
| client due to an invalid syntax, before the server responded. In this |
| case, an HTTP 400 error is sent to the client and reported in the |
| logs. Finally, it may be due to an HTTP header rewrite failure on the |
| response. In this case, an HTTP 500 error is sent (see |
| "tune.maxrewrite" and "http-response strict-mode" for more |
| information).
| |
| PR The proxy blocked the client's HTTP request, either because of an |
| invalid HTTP syntax, in which case it returned an HTTP 400 error to |
| the client, or because a deny filter matched, in which case it |
| returned an HTTP 403 error. It may also be due to an HTTP header |
| rewrite failure on the request. In this case, an HTTP 500 error is |
| sent (see "tune.maxrewrite" and "http-request strict-mode" for more |
| information).
| |
| PT The proxy blocked the client's request and has tarpitted its |
| connection before returning it a 500 server error. Nothing was sent |
| to the server. The connection was maintained open for as long as |
| reported by the "Tw" timer field. |
| |
| RC A local resource has been exhausted (memory, sockets, source ports) |
| preventing the connection to the server from establishing. The error |
| logs will tell precisely what was missing. This is very rare and can |
| only be solved by proper system tuning. |
| |
| The combination of the last two flags gives a lot of information about how
| persistence was handled by the client, the server and by HAProxy. This is very |
| important to troubleshoot disconnections, when users complain they have to |
| re-authenticate. The commonly encountered flags are : |
| |
| -- Persistence cookie is not enabled. |
| |
| NN No cookie was provided by the client, none was inserted in the |
| response. For instance, this can happen in insert mode with "postonly"
| set on a GET request.
| |
| II A cookie designating an invalid server was provided by the client, |
| a valid one was inserted in the response. This typically happens when |
| a "server" entry is removed from the configuration, since its cookie |
| value can be presented by a client when no other server knows it. |
| |
| NI No cookie was provided by the client, one was inserted in the |
| response. This typically happens for first requests from every user |
| in "insert" mode, which makes it an easy way to count real users. |
| |
| VN A cookie was provided by the client, none was inserted in the |
| response. This happens for most responses for which the client has |
| already got a cookie. |
| |
| VU A cookie was provided by the client, with a last visit date which is |
| not completely up-to-date, so an updated cookie was provided in |
| response. This can also happen if there was no date at all, or if |
| there was a date but the "maxidle" parameter was not set, so that the |
| cookie can be switched to unlimited time. |
| |
| EI A cookie was provided by the client, with a last visit date which is |
| too old for the "maxidle" parameter, so the cookie was ignored and a |
| new cookie was inserted in the response. |
| |
| OI A cookie was provided by the client, with a first visit date which is |
| too old for the "maxlife" parameter, so the cookie was ignored and a |
| new cookie was inserted in the response. |
| |
| DI The server designated by the cookie was down, a new server was |
| selected and a new cookie was emitted in the response. |
| |
| VI The server designated by the cookie was not marked dead but could not |
| be reached. A redispatch happened and selected another one, which was |
| then advertised in the response. |
| |
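| As a reminder of where these cookie flags come from, here is a minimal
| persistence sketch (illustrative only, with hypothetical server names and
| addresses). The "maxidle" and "maxlife" parameters are what produce the 'E',
| 'O' and 'U' cookie flags described above :
|
| backend app
| mode http
| balance roundrobin
| # insert a persistence cookie and bound its idle time and total lifetime
| cookie SRVID insert indirect nocache maxidle 30m maxlife 8h
| server s1 192.168.0.11:80 cookie s1 check
| server s2 192.168.0.12:80 cookie s2 check
|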
| |
| 8.6. Non-printable characters |
| ----------------------------- |
| |
| In order not to cause trouble to log analysis tools or terminals during log |
| consulting, non-printable characters are not sent as-is into log files, but are |
| converted to the two-digit hexadecimal representation of their ASCII code,
| prefixed by the character '#'. The only characters that can be logged without
| being escaped are those whose ASCII code lies between 32 and 126 (inclusive).
| Obviously, the
| escape character '#' itself is also encoded to avoid any ambiguity ("#23"). It |
| is the same for the character '"' which becomes "#22", as well as '{', '|' and |
| '}' when logging headers. |
| |
| Note that the space character (' ') is not encoded in headers, which can cause |
| issues for tools relying on space count to locate fields. A typical header |
| containing spaces is "User-Agent". |
| |
| Last, it has been observed that some syslog daemons such as syslog-ng escape |
| the quote ('"') with a backslash ('\'). The reverse operation can safely be |
| performed since no quote may appear anywhere else in the logs. |
| |
| |
| 8.7. Capturing HTTP cookies |
| --------------------------- |
| |
| Cookie capture simplifies the tracking of a complete user session. This can be
| achieved using the "capture cookie" statement in the frontend. Please refer to |
| section 4.2 for more details. Only one cookie can be captured, and the same |
| cookie will simultaneously be checked in the request ("Cookie:" header) and in |
| the response ("Set-Cookie:" header). The respective values will be reported in |
| the HTTP logs at the "captured_request_cookie" and "captured_response_cookie" |
| locations (see section 8.2.3 about HTTP log format). When either cookie is |
| not seen, a dash ('-') replaces the value. This way, it's easy to detect when a |
| user switches to a new session for example, because the server will reassign it |
| a new cookie. It is also possible to detect if a server unexpectedly sets a |
| wrong cookie to a client, leading to session crossing. |
| |
| Examples : |
| # capture the first cookie whose name starts with "ASPSESSION" |
| capture cookie ASPSESSION len 32 |
| |
| # capture the first cookie whose name is exactly "vgnvisitor" |
| capture cookie vgnvisitor= len 32 |
| |
| |
| 8.8. Capturing HTTP headers |
| --------------------------- |
| |
| Header captures are useful to track unique request identifiers set by an upper |
| proxy, virtual host names, user-agents, POST content-length, referrers, etc. In |
| the response, one can search for information about the response length, how the |
| server asked the cache to behave, or an object location during a redirection. |
| |
| Header captures are performed using the "capture request header" and "capture |
| response header" statements in the frontend. Please consult their definition in |
| section 4.2 for more details. |
| |
| It is possible to include both request headers and response headers at the same |
| time. Non-existent headers are logged as empty strings, and if one header |
| appears more than once, only its last occurrence will be logged. Request headers |
| are grouped within braces '{' and '}' in the same order as they were declared, |
| and delimited with a vertical bar '|' without any space. Response headers |
| follow the same representation, but are displayed after a space following the |
| request headers block. These blocks are displayed just before the HTTP request |
| in the logs. |
| |
| As a special case, it is possible to specify an HTTP header capture in a TCP |
| frontend. The purpose is to enable logging of headers which will be parsed in |
| an HTTP backend if the request is then switched to this HTTP backend. |
| |
| Example : |
| # This instance chains to the outgoing proxy |
| listen proxy-out |
| mode http |
| option httplog |
| option logasap |
| log global |
| server cache1 192.168.1.1:3128 |
| |
| # log the name of the virtual server |
| capture request header Host len 20 |
| |
| # log the amount of data uploaded during a POST |
| capture request header Content-Length len 10 |
| |
| # log the beginning of the referrer |
| capture request header Referer len 20 |
| |
| # server name (useful for outgoing proxies only) |
| capture response header Server len 20 |
| |
| # logging the content-length is useful with "option logasap" |
| capture response header Content-Length len 10 |
| |
| # log the expected cache behavior on the response |
| capture response header Cache-Control len 8 |
| |
| # the Via header will report the next proxy's name |
| capture response header Via len 20 |
| |
| # log the URL location during a redirection |
| capture response header Location len 20 |
| |
| >>> Aug 9 20:26:09 localhost \ |
| haproxy[2022]: 127.0.0.1:34014 [09/Aug/2004:20:26:09] proxy-out \ |
| proxy-out/cache1 0/0/0/162/+162 200 +350 - - ---- 0/0/0/0/0 0/0 \ |
| {fr.adserver.yahoo.co||http://fr.f416.mail.} {|864|private||} \ |
| "GET http://fr.adserver.yahoo.com/" |
| |
| >>> Aug 9 20:30:46 localhost \ |
| haproxy[2022]: 127.0.0.1:34020 [09/Aug/2004:20:30:46] proxy-out \ |
| proxy-out/cache1 0/0/0/182/+182 200 +279 - - ---- 0/0/0/0/0 0/0 \ |
| {w.ods.org||} {Formilux/0.1.8|3495|||} \ |
| "GET http://trafic.1wt.eu/ HTTP/1.1" |
| |
| >>> Aug 9 20:30:46 localhost \ |
| haproxy[2022]: 127.0.0.1:34028 [09/Aug/2004:20:30:46] proxy-out \ |
| proxy-out/cache1 0/0/2/126/+128 301 +223 - - ---- 0/0/0/0/0 0/0 \ |
| {www.sytadin.equipement.gouv.fr||http://trafic.1wt.eu/} \ |
| {Apache|230|||http://www.sytadin.} \ |
| "GET http://www.sytadin.equipement.gouv.fr/ HTTP/1.1" |
| |
| |
| 8.9. Examples of logs |
| --------------------- |
| |
| These are real-world examples of logs accompanied with an explanation. Some of |
| them have been made up by hand. The syslog part has been removed for better |
| reading. Their sole purpose is to explain how to decipher them. |
| |
| >>> haproxy[674]: 127.0.0.1:33318 [15/Oct/2003:08:31:57.130] px-http \ |
| px-http/srv1 6559/0/7/147/6723 200 243 - - ---- 5/3/3/1/0 0/0 \ |
| "HEAD / HTTP/1.0" |
| |
| => long request (6.5s) entered by hand through 'telnet'. The server replied |
| in 147 ms, and the session ended normally ('----') |
| |
| >>> haproxy[674]: 127.0.0.1:33319 [15/Oct/2003:08:31:57.149] px-http \ |
| px-http/srv1 6559/1230/7/147/6870 200 243 - - ---- 324/239/239/99/0 \ |
| 0/9 "HEAD / HTTP/1.0" |
| |
| => Idem, but the request was queued in the global queue behind 9 other |
| requests, and waited there for 1230 ms. |
| |
| >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.654] px-http \ |
| px-http/srv1 9/0/7/14/+30 200 +243 - - ---- 3/3/3/1/0 0/0 \ |
| "GET /image.iso HTTP/1.0" |
| |
| => request for a long data transfer. The "logasap" option was specified, so |
| the log was produced just before transferring data. The server replied in |
| 14 ms, 243 bytes of headers were sent to the client, and total time from |
| accept to first data byte is 30 ms. |
| |
| >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.925] px-http \ |
| px-http/srv1 9/0/7/14/30 502 243 - - PH-- 3/2/2/0/0 0/0 \ |
| "GET /cgi-bin/bug.cgi? HTTP/1.0" |
| |
| => the proxy blocked a server response either because of an "http-response |
| deny" rule, or because the response was improperly formatted and not |
| HTTP-compliant, or because it blocked sensitive information which risked |
| being cached. In this case, the response is replaced with a "502 bad |
| gateway". The flags ("PH--") tell us that it was HAProxy who decided to |
| return the 502 and not the server. |
| |
| >>> haproxy[18113]: 127.0.0.1:34548 [15/Oct/2003:15:18:55.798] px-http \ |
| px-http/<NOSRV> -1/-1/-1/-1/8490 -1 0 - - CR-- 2/2/2/0/0 0/0 "" |
| |
| => the client never completed its request and aborted itself ("C---") after |
| 8.5s, while the proxy was waiting for the request headers ("-R--"). |
| Nothing was sent to any server. |
| |
| >>> haproxy[18113]: 127.0.0.1:34549 [15/Oct/2003:15:19:06.103] px-http \ |
| px-http/<NOSRV> -1/-1/-1/-1/50001 408 0 - - cR-- 2/2/2/0/0 0/0 "" |
| |
| => The client never completed its request, which was aborted by the |
| time-out ("c---") after 50s, while the proxy was waiting for the request |
| headers ("-R--"). Nothing was sent to any server, but the proxy could |
| send a 408 return code to the client. |
| |
| >>> haproxy[18989]: 127.0.0.1:34550 [15/Oct/2003:15:24:28.312] px-tcp \ |
| px-tcp/srv1 0/0/5007 0 cD 0/0/0/0/0 0/0 |
| |
| => This log was produced with "option tcplog". The client timed out after |
| 5 seconds ("c----"). |
| |
| >>> haproxy[18989]: 10.0.0.1:34552 [15/Oct/2003:15:26:31.462] px-http \ |
| px-http/srv1 3183/-1/-1/-1/11215 503 0 - - SC-- 205/202/202/115/3 \ |
| 0/0 "HEAD / HTTP/1.0" |
| |
| => The request took 3s to complete (probably a network problem), and the |
| connection to the server failed ('SC--') after 4 attempts of 2 seconds |
| (config says 'retries 3'), and no redispatch (otherwise we would have |
| seen "/+3"). Status code 503 was returned to the client. There were 115 |
| connections on this server, 202 connections on this proxy, and 205 on |
| the global process. It is possible that the server refused the |
| connection because of too many already established. |
| |
| |
| 9. Supported filters |
| -------------------- |
| |
| Here are listed officially supported filters with the list of parameters they |
| accept. Depending on compile options, some of these filters might be |
| unavailable. The list of available filters is reported in haproxy -vv. |
| |
| See also : "filter" |
| |
| 9.1. Trace |
| ---------- |
| |
| filter trace [name <name>] [random-forwarding] [hexdump] |
| |
| Arguments: |
| <name> is an arbitrary name that will be reported in |
| messages. If no name is provided, "TRACE" is used. |
| |
| <quiet> inhibits trace messages. |
| |
| <random-forwarding> enables the random forwarding of parsed data. By |
| default, this filter forwards all previously parsed |
| data. With this parameter, it only forwards a random |
| amount of the parsed data. |
| |
| <hexdump> dumps all forwarded data to the server and the client. |
| |
| This filter can be used as a base to develop new filters. It defines all |
| callbacks and prints a message on the standard error stream (stderr) with useful
| information for all of them. It may be useful to debug the activity of other |
| filters or, quite simply, HAProxy's activity. |
| |
| Using the <random-parsing> and/or <random-forwarding> parameters is a good way
| to test the behavior of a filter that parses data exchanged between a client
| and a server, by adding some latencies in the processing.
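|
| Below is a minimal illustrative sketch (the frontend and backend names are
| hypothetical) enabling the trace filter on a frontend :
|
| frontend www
| mode http
| bind *:80
| # print a message on stderr for each filter callback; "hexdump"
| # additionally dumps all data forwarded to the client and the server
| filter trace name MYTRACE hexdump
| default_backend app # hypothetical backend name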
| |
| |
| 9.2. HTTP compression |
| --------------------- |
| |
| filter compression |
| |
| The HTTP compression was moved into a filter in HAProxy 1.7. The "compression"
| keyword must still be used to enable and configure the HTTP compression, and
| when no other filter is used, it is enough. When used with the cache or the
| fcgi-app enabled, it is also enough; in this case, the compression is always
| done after the response is stored in the cache. But it is mandatory to
| explicitly use a filter line to enable the HTTP compression when at least one
| filter other than the cache or the fcgi-app is used for the same
| listener/frontend/backend. This is important in order to make the filters
| evaluation order explicit.
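|
| Below is an illustrative sketch (all names are hypothetical) where the
| presence of another filter makes the explicit "filter compression" line
| mandatory; the explicit declarations make the evaluation order unambiguous :
|
| frontend www
| mode http
| bind *:80
| # another filter is used, so compression must be declared explicitly
| filter trace name PRE-COMP
| filter compression
| compression algo gzip
| compression type text/html text/plain application/json
| default_backend app # hypothetical backend name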
| |
| See also : "compression", section 9.4 about the cache filter and section 9.5 |
| about the fcgi-app filter. |
| |
| |
| 9.3. Stream Processing Offload Engine (SPOE) |
| -------------------------------------------- |
| |
| filter spoe [engine <name>] config <file> |
| |
| Arguments : |
| |
| <name> is the engine name that will be used to find the right scope in |
| the configuration file. If not provided, the whole file will be
| parsed. |
| |
| <file> is the path of the engine configuration file. This file can |
| contain configuration of several engines. In this case, each |
| part must be placed in its own scope. |
| |
| The Stream Processing Offload Engine (SPOE) is a filter communicating with
| external components. It allows the offload of some specific processing on the
| streams to tiered applications. These external components and the information
| exchanged with them are, for the most part, configured in dedicated files. It
| also requires dedicated backends, defined in the HAProxy configuration.
| |
| SPOE communicates with external components using an in-house binary protocol, |
| the Stream Processing Offload Protocol (SPOP). |
| |
| For all information about the SPOE configuration and the SPOP specification, see |
| "doc/SPOE.txt". |
| |
| 9.4. Cache |
| ---------- |
| |
| filter cache <name> |
| |
| Arguments : |
| |
| <name> is the name of the cache section this filter will use.
| |
| The cache uses a filter to store cacheable responses. The HTTP rules
| "cache-store" and "cache-use" must be used to define how and when to use a
| cache. By default the corresponding filter is implicitly defined, and when no
| filters other than fcgi-app or compression are used, it is enough. In that
| case, the compression filter is always evaluated after the cache filter. But
| it is mandatory to explicitly use a filter line to use a cache when at least
| one filter other than the compression or the fcgi-app is used for the same
| listener/frontend/backend. This is important in order to make the filters
| evaluation order explicit.
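|
| Below is an illustrative sketch (all names are hypothetical). Because a
| filter other than compression or fcgi-app is present, the cache and
| compression filters must be declared explicitly; the cache filter is
| declared before the compression filter so that, as in the implicit case,
| compression happens after the response is stored in the cache :
|
| cache my-cache
| total-max-size 64 # MB
| max-age 60 # seconds
|
| frontend www
| mode http
| bind *:80
| filter trace name DEBUG
| filter cache my-cache
| filter compression
| compression algo gzip
| http-request cache-use my-cache
| http-response cache-store my-cache
| default_backend app # hypothetical backend name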
| |
| See also : section 9.2 about the compression filter, section 9.5 about the |
| fcgi-app filter and section 6 about cache. |
| |
| |
| 9.5. Fcgi-app |
| ------------- |
| |
| filter fcgi-app <name> |
| |
| Arguments : |
| |
| <name> is the name of the fcgi-app section this filter will use.
| |
| The FastCGI application uses a filter to evaluate all custom parameters on the
| request path, and to process the headers on the response path. The <name> must
| reference an existing fcgi-app section. The directive "use-fcgi-app" should be
| used to define the application to use. By default the corresponding filter is
| implicitly defined, and when no filters other than cache or compression are
| used, it is enough. But it is mandatory to explicitly use a filter line for a
| fcgi-app when at least one filter other than the compression or the cache is
| used for the same backend. This is important in order to make the filters
| evaluation order explicit.
| |
| See also: "use-fcgi-app", section 9.2 about the compression filter, section 9.4 |
| about the cache filter and section 10 about FastCGI application. |
| |
| |
| 9.6. OpenTracing |
| ---------------- |
| |
| The OpenTracing filter adds native support for using distributed tracing in |
| HAProxy. This is enabled by sending an OpenTracing compliant request to one |
| of the supported tracers such as Datadog, Jaeger, Lightstep and Zipkin.
| Please note: tracers are not listed by any preference, but alphabetically. |
| |
| This feature is only enabled when HAProxy was built with USE_OT=1. |
| |
| The OpenTracing filter activation is done explicitly by specifying it in the
| HAProxy configuration. If this is not done, the OpenTracing filter does not
| participate in the work of HAProxy in any way.
| |
| filter opentracing [id <id>] config <file> |
| |
| Arguments : |
| |
| <id> is the OpenTracing filter id that will be used to find the |
| right scope in the configuration file. If no filter id is |
| specified, 'ot-filter' is used as default. If scope is not |
| specified in the configuration file, it applies to all defined |
| OpenTracing filters. |
| |
| <file> is the path of the OpenTracing configuration file. The same
| file can contain configurations for multiple OpenTracing
| filters simultaneously. In that case, either no scope is
| defined and the same configuration applies to all filters, or
| each filter must have its own scope defined.
| |
| More detailed documentation related to the operation, configuration and use |
| of the filter can be found in the addons/ot directory. |
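|
| For instance (an illustrative sketch; the filter id and the configuration
| file path are hypothetical) :
|
| frontend www
| mode http
| bind *:80
| filter opentracing id ot-frontend config /etc/haproxy/ot.cfg
| default_backend app # hypothetical backend name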
| |
| 9.7. Bandwidth limitation |
| -------------------------- |
| |
| filter bwlim-in <name> default-limit <size> default-period <time> [min-size <sz>] |
| filter bwlim-out <name> default-limit <size> default-period <time> [min-size <sz>] |
| filter bwlim-in <name> limit <size> key <pattern> [table <table>] [min-size <sz>] |
| filter bwlim-out <name> limit <size> key <pattern> [table <table>] [min-size <sz>] |
| |
| Arguments : |
| |
| <name> is the filter name that will be used by 'set-bandwidth-limit' |
| actions to reference a specific bandwidth limitation filter. |
| |
| <size> is the maximum number of bytes that can be forwarded over the period.
| The value must be specified for per-stream and shared bandwidth |
| limitation filters. It follows the HAProxy size format and is |
| expressed in bytes. |
| |
| <pattern> is a sample expression rule as described in section 7.3. It |
| describes what elements will be analyzed, extracted, combined, |
| and used to select the table entry whose counters will be updated. It
| must be specified for shared bandwidth limitation filters only. |
| |
| <table> is an optional table to be used instead of the default one, |
| which is the stick-table declared in the current proxy. It can |
| be specified for shared bandwidth limitation filters only. |
| |
| <time> is the default time period used to evaluate the bandwidth |
| limitation rate. It can be specified for per-stream bandwidth |
| limitation filters only. It follows the HAProxy time format and |
| is expressed in milliseconds. |
| |
| <min-size> is the optional minimum number of bytes forwarded at a time by |
| a stream excluding the last packet that may be smaller. This |
| value can be specified for per-stream and shared bandwidth |
| limitation filters. It follows the HAProxy size format and is |
| expressed in bytes. |
| |
| Bandwidth limitation filters should be used to restrict the data forwarding |
| speed at the stream level. By extension, such filters limit the network |
| bandwidth consumed by a resource. Several bandwidth limitation filters can be |
| used. For instance, it is possible to define a limit per source address to be
| sure a client will never consume all the network bandwidth and thereby
| penalize other clients, and another one per stream to be able to fairly
| handle several connections for a given client.
| |
| The definition order of these filters is important. If several bandwidth |
| filters are enabled on a stream, the filtering will be applied in their |
| definition order. It is also important to understand that the definition
| order of the other filters has an influence. For instance, depending on
| whether the HTTP compression filter is defined before or after a bandwidth
| limitation filter, the limit will be applied to the compressed payload or
| not. The same is true for the cache filter.
| |
| There are two kinds of bandwidth limitation filters. The first one enforces a |
| default limit and is applied per stream. The second one uses a stickiness table |
| to enforce a limit equally divided between all streams sharing the same entry in |
| the table. |
| |
| In addition, for a given filter, depending on the filter keyword used, the |
| limitation can be applied on incoming data, received from the client and |
| forwarded to a server, or on outgoing data, received from a server and sent to |
| the client. To apply a limit on incoming data, "bwlim-in" keyword must be |
| used. To apply it on outgoing data, "bwlim-out" keyword must be used. In both |
| cases, the bandwidth limitation is applied on forwarded data, at the stream |
| level. |
| |
| The bandwidth limitation is applied at the stream level and not at the |
| connection level. For multiplexed protocols (H2, H3 and FastCGI), the streams |
| of the same connection may have different limits. |
| |
| For a per-stream bandwidth limitation filter, default period and limit must be |
| defined. As their names suggest, they are the default values used to setup the |
| bandwidth limitation rate for a stream. However, for this kind of filter and |
| only this one, it is possible to redefine these values using sample expressions |
| when the filter is enabled with a TCP/HTTP "set-bandwidth-limit" action. |
| |
| For a shared bandwidth limitation filter, depending on whether it is applied on |
| incoming or outgoing data, the stickiness table used must store the |
| corresponding bytes rate information. "bytes_in_rate(<period>)" counter must be |
| stored to limit incoming data and "bytes_out_rate(<period>)" counter must be |
| used to limit outgoing data. |
| |
| Finally, it is possible to set the minimum number of bytes that a bandwidth
| limitation filter can forward at a time for a given stream. It should be used
| to avoid forwarding too small an amount of data, in order to reduce the CPU
| usage. It must be defined carefully: too small a value can increase the CPU
| usage, while too high a value can increase the latency. It is also highly
| linked to the defined bandwidth limit. If it is too close to the bandwidth
| limit, some pauses may be experienced in order not to exceed the limit,
| because too many bytes will be consumed at a time. It is highly dependent on
| the filter configuration. A good idea is to start with something around 2 TCP
| MSS, typically 2896 bytes, and tune it after some experimentation.
| |
| Example: |
| frontend http |
| bind *:80 |
| mode http |
| |
| # If this filter is enabled, the stream will share the download limit |
| # of 10m/s with all other streams with the same source address. |
| filter bwlim-out limit-by-src key src table limit-by-src limit 10m |
| |
| # If this filter is enabled, the stream will be limited to download at 1m/s, |
| # independently of all other streams. |
| filter bwlim-out limit-by-strm default-limit 1m default-period 1s |
| |
| # Limit all streams to 1m/s (the default limit) and those accessing the |
| # internal API to 100k/s. Limit each source address to 10m/s. The shared |
| # limit is applied first. Both are limiting the download rate. |
| http-request set-bandwidth-limit limit-by-strm |
| http-request set-bandwidth-limit limit-by-strm limit 100k if { path_beg /internal } |
| http-request set-bandwidth-limit limit-by-src |
| ... |
| |
| backend limit-by-src |
| # The stickiness table used by <limit-by-src> filter |
| stick-table type ip size 1m expire 3600s store bytes_out_rate(1s) |
| |
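| As an illustrative variant, the per-stream filter above could also enforce
| the minimum forwarding size suggested earlier (about 2 TCP MSS) :
|
| filter bwlim-out limit-by-strm default-limit 1m default-period 1s min-size 2896
|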
| See also : "tcp-request content set-bandwidth-limit", |
| "tcp-response content set-bandwidth-limit", |
| "http-request set-bandwidth-limit" and |
| "http-response set-bandwidth-limit". |
| |
| 10. FastCGI applications |
| ------------------------- |
| |
| HAProxy is able to send HTTP requests to Responder FastCGI applications. This |
| feature was added in HAProxy 2.1. To do so, servers must be configured to use |
| the FastCGI protocol (using the keyword "proto fcgi" on the server line) and a |
| FastCGI application must be configured and used by the backend managing these |
| servers (using the keyword "use-fcgi-app" in the proxy section). Several
| FastCGI applications may be defined, but only one can be used at a time by a |
| backend. |
| |
| HAProxy implements all features of the FastCGI specification for Responder
| applications. In particular, it is able to multiplex several requests over a
| single connection.
| |
| 10.1. Setup |
| ----------- |
| |
| 10.1.1. Fcgi-app section |
| -------------------------- |
| |
| fcgi-app <name> |
| Declare a FastCGI application named <name>. To be valid, at least the |
| document root must be defined. |
| |
| acl <aclname> <criterion> [flags] [operator] <value> ... |
| Declare or complete an access list. |
| |
| See "acl" keyword in section 4.2 and section 7 about ACL usage for |
| details. ACLs defined for a FastCGI application are private. They cannot be |
| used by any other application or by any proxy. In the same way, ACLs defined |
| in any other section are not usable by a FastCGI application. However, |
| pre-defined ACLs are available.
| |
| docroot <path> |
| Define the document root on the remote host. <path> will be used to build |
| the default value of FastCGI parameters SCRIPT_FILENAME and |
| PATH_TRANSLATED. It is a mandatory setting. |
| |
| index <script-name> |
| Define the script name that will be appended after a URI that ends with a
| slash ("/") to set the default value of the FastCGI parameter SCRIPT_NAME. It |
| is an optional setting. |
| |
| Example : |
| index index.php |
| |
| log-stderr global |
| log-stderr <address> [len <length>] [format <format>] |
| [sample <ranges>:<sample_size>] <facility> [<level> [<minlevel>]] |
| Enable logging of STDERR messages reported by the FastCGI application. |
| |
| See "log" keyword in section 4.2 for details. It is an optional setting. By |
| default STDERR messages are ignored. |
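|
| Example (illustrative; the address, facility and level are arbitrary) :
| # forward the application's stderr output to a local syslog daemon
| log-stderr 127.0.0.1:514 local0 notice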
| |
| pass-header <name> [ { if | unless } <condition> ] |
| Specify the name of a request header which will be passed to the FastCGI |
| application. It may optionally be followed by an ACL-based condition, in |
| which case it will only be evaluated if the condition is true. |
| |
| Most request headers are already available to the FastCGI application, |
| prefixed with "HTTP_". Thus, this directive is only required to pass headers |
| that are purposefully omitted. Currently, the headers "Authorization", |
| "Proxy-Authorization" and hop-by-hop headers are omitted. |
| |
| Note that the headers "Content-type" and "Content-length" are never passed to |
| the FastCGI application because they are already converted into parameters. |
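|
| Example (an illustrative sketch, forwarding one of the headers that HAProxy
| omits by default) :
| pass-header Authorization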
| |
| path-info <regex> |
| Define a regular expression to extract the script-name and the path-info from |
| the URL-decoded path. Thus, <regex> may have two captures: the first one to |
| capture the script name and the second one to capture the path-info. The |
| first one is mandatory, the second one is optional. This way, it is possible |
| to extract the script-name from the path ignoring the path-info. It is an |
| optional setting. If it is not defined, no matching is performed on the
| path, and the FastCGI parameters PATH_INFO and PATH_TRANSLATED are not
| filled.
| |
| For security reasons, when this regular expression is defined, the newline
| and the null characters are forbidden from the path, once URL-decoded. The
| reason for this limitation is that otherwise the matching always fails (due
| to a limitation in the way regular expressions are executed in HAProxy). So
| if one of these two characters is found in the URL-decoded path, an error is
| returned to the client. The principle of least astonishment is applied here.
| |
| Example : |
| path-info ^(/.+\.php)(/.*)?$ # both script-name and path-info may be set |
| path-info ^(/.+\.php) # the path-info is ignored |
| |
| option get-values |
| no option get-values |
| Enable or disable the retrieval of variables about connection management.
| |
| HAProxy is able to send the record FCGI_GET_VALUES on connection |
| establishment to retrieve the value of the following variables:
| |
| * FCGI_MAX_REQS The maximum number of concurrent requests this |
| application will accept. |
| |
| * FCGI_MPXS_CONNS "0" if this application does not multiplex connections, |
| "1" otherwise. |
| |
| Some FastCGI applications do not support this feature. Some others close
| the connection immediately after sending their response. So, by default, this |
| option is disabled. |
| |
| Note that the maximum number of concurrent requests accepted by a FastCGI |
| application is a connection variable. It only limits the number of streams |
| per connection. If the global load must be limited on the application, the |
| server parameters "maxconn" and "pool-max-conn" must be set. In addition, if |
| an application does not support connection multiplexing, the maximum number |
| of concurrent requests is automatically set to 1. |
| |
| option keep-conn |
| no option keep-conn |
| Instruct the FastCGI application to keep the connection open or not after |
| sending a response. |
| |
| If disabled, the FastCGI application closes the connection after responding |
| to this request. By default, this option is enabled. |
| |
| option max-reqs <reqs> |
| Define the maximum number of concurrent requests this application will |
| accept. |
| |
| This option may be overwritten if the variable FCGI_MAX_REQS is retrieved |
| during connection establishment. Furthermore, if the application does not |
| support connection multiplexing, this option will be ignored. By default set |
| to 1. |
| |
| option mpxs-conns |
| no option mpxs-conns |
| Enable or disable the support of connection multiplexing. |
| |
| This option may be overwritten if the variable FCGI_MPXS_CONNS is retrieved |
| during connection establishment. It is disabled by default. |
| |
| set-param <name> <fmt> [ { if | unless } <condition> ] |
| Set a FastCGI parameter that should be passed to this application. Its |
| value, defined by <fmt>, must follow the log-format rules (see section 8.2.6
| "Custom Log format"). It may optionally be followed by an ACL-based |
| condition, in which case it will only be evaluated if the condition is true. |
| |
| With this directive, it is possible to overwrite the value of default FastCGI |
| parameters. If the value is evaluated to an empty string, the rule is |
| ignored. These directives are evaluated in their declaration order. |
| |
| Example : |
| # PHP only, required if PHP was built with --enable-force-cgi-redirect |
| set-param REDIRECT_STATUS 200 |
| |
| set-param PHP_AUTH_DIGEST %[req.hdr(Authorization)] |
| |
| |
| 10.1.2. Proxy section |
| --------------------- |
| |
| use-fcgi-app <name> |
| Define the FastCGI application to use for the backend. |
| |
| Arguments : |
| <name> is the name of the FastCGI application to use. |
| |
| This keyword is only available for HTTP proxies with the backend capability |
| and with at least one FastCGI server. However, FastCGI servers can be mixed |
| with HTTP servers. But unless there is a good reason to do so, it is not
| recommended (see section 10.3 about the limitations for details). Only one |
| application may be defined at a time per backend. |
| |
| Note that, once a FastCGI application is referenced for a backend, depending |
| on the configuration some processing may be done even if the request is not |
| sent to a FastCGI server. Rules to set parameters or pass headers to an |
| application are evaluated. |
| |
| |
| 10.1.3. Example |
| --------------- |
| |
| frontend front-http |
| mode http |
| bind *:80 |
| bind *: |
| |
| use_backend back-dynamic if { path_reg ^/.+\.php(/.*)?$ } |
| default_backend back-static |
| |
| backend back-static |
| mode http |
| server www A.B.C.D:80 |
| |
| backend back-dynamic |
| mode http |
| use-fcgi-app php-fpm |
| server php-fpm A.B.C.D:9000 proto fcgi |
| |
| fcgi-app php-fpm |
| log-stderr global |
| option keep-conn |
| |
| docroot /var/www/my-app |
| index index.php |
| path-info ^(/.+\.php)(/.*)?$ |
| |
| |
| 10.2. Default parameters |
| ------------------------ |
| |
| A Responder FastCGI application has the same purpose as a CGI/1.1 program. In |
| the CGI/1.1 specification (RFC3875), several variables must be passed to the |
| script. So HAProxy sets them, along with some others commonly used by FastCGI
| applications. All these variables may be overwritten, with caution though. |
| |
| +-------------------+-----------------------------------------------------+ |
| | AUTH_TYPE | Identifies the mechanism, if any, used by HAProxy | |
| | | to authenticate the user. Concretely, only the | |
| | | BASIC authentication mechanism is supported. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | CONTENT_LENGTH | Contains the size of the message-body attached to | |
| | | the request. It means only requests with a known | |
| | | size are considered as valid and sent to the | |
| | | application. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | CONTENT_TYPE | Contains the type of the message-body attached to | |
| | | the request. It may not be set. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | DOCUMENT_ROOT | Contains the document root on the remote host under | |
| | | which the script should be executed, as defined in | |
| | | the application's configuration. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | GATEWAY_INTERFACE | Contains the dialect of CGI being used by HAProxy | |
| | | to communicate with the FastCGI application. | |
| | | Concretely, it is set to "CGI/1.1". | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | PATH_INFO | Contains the portion of the URI path hierarchy | |
| | | following the part that identifies the script | |
| | | itself. To be set, the directive "path-info" must | |
| | | be defined. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | PATH_TRANSLATED | If PATH_INFO is set, it is its translated version. | |
| | | It is the concatenation of DOCUMENT_ROOT and | |
| | | PATH_INFO. If PATH_INFO is not set, this parameter |
| | | is not set either. |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | QUERY_STRING | Contains the request's query string. It may not be | |
| | | set. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | REMOTE_ADDR | Contains the network address of the client sending | |
| | | the request. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | REMOTE_USER | Contains the user identification string supplied by | |
| | | the client as part of user authentication. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | REQUEST_METHOD | Contains the method which should be used by the | |
| | | script to process the request. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | REQUEST_URI | Contains the request's URI. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | SCRIPT_FILENAME | Contains the absolute pathname of the script. It is | |
| | | the concatenation of DOCUMENT_ROOT and SCRIPT_NAME. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | SCRIPT_NAME | Contains the name of the script. If the directive | |
| | | "path-info" is defined, it is the first part of the | |
| | | URI path hierarchy, ending with the script name. | |
| | | Otherwise, it is the entire URI path. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | SERVER_NAME | Contains the name of the server host to which the | |
| | | client request is directed. It is the value of the | |
| | | header "Host", if defined. Otherwise, the | |
| | | destination address of the connection on the client | |
| | | side. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | SERVER_PORT | Contains the destination TCP port of the connection | |
| | | on the client side, which is the port the client | |
| | | connected to. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | SERVER_PROTOCOL | Contains the request's protocol. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | SERVER_SOFTWARE | Contains the string "HAProxy" followed by the | |
| | | current HAProxy version. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
| | HTTPS | Set to a non-empty value ("on") if the script was | |
| | | queried through the HTTPS protocol. | |
| | | | |
| +-------------------+-----------------------------------------------------+ |
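| |
| For example, a minimal sketch (the application name and parameter values |
| are illustrative only) overwriting one of these default variables and |
| adding an extra one from the "fcgi-app" section could look like this: |
| |
|     fcgi-app php-fpm |
|         docroot /var/www/my-app |
|         index index.php |
|         path-info ^(/.+\.php)(/.*)?$ |
| |
|         # Overwrite a default parameter (use with caution) |
|         set-param SERVER_SOFTWARE "my-gateway/1.0" |
|         # Add an extra parameter, only for PHP scripts |
|         set-param REDIRECT_STATUS 200 if { path_end .php } |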
| |
| |
| 10.3. Limitations |
| ------------------ |
| |
| The current implementation has some limitations. The first one concerns the |
| way some request headers are hidden from the FastCGI applications. This |
| happens during the headers analysis, on the backend side, before the |
| connection establishment. At this stage, HAProxy knows the backend is using |
| a FastCGI application but it does not know yet whether the request will be |
| routed to a FastCGI server or not. To hide request headers, it simply |
| removes them from the HTX message. So, if the request is finally routed to |
| an HTTP server, that server never sees these headers. For this reason, it is |
| not recommended to mix FastCGI servers and HTTP servers under the same |
| backend, as illustrated in the sketch below. |
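| |
| For instance, a minimal sketch (addresses and names are illustrative only) |
| keeping FastCGI and HTTP servers in distinct backends: |
| |
|     backend back-php |
|         mode http |
|         use-fcgi-app php-fpm |
|         server fcgi1 192.0.2.10:9000 proto fcgi |
| |
|     backend back-html |
|         mode http |
|         server http1 192.0.2.11:80 |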
| |
| Similarly, the rules "set-param" and "pass-header" are evaluated during the |
| request headers analysis. So the evaluation is always performed, even if the |
| request is finally forwarded to an HTTP server. |
| |
| Regarding the "set-param" rules, when a rule is applied, a pseudo header is |
| added into the HTX message. So, in the same way as for HTTP header rewrites, |
| it may fail if the buffer is full. The "set-param" rules thus compete with |
| the "http-request" ones for the same buffer space. |
| |
| Finally, all FastCGI parameters and HTTP headers are sent in a single |
| FCGI_PARAMS record. Encoding of this record must be done in one pass, |
| otherwise a processing error is returned. This means the FCGI_PARAMS record, |
| once encoded, must not exceed the size of a buffer. However, there is no |
| reserve to respect here. |
| |
| |
| 11. Address formats |
| ------------------- |
| |
| Several statements such as "bind", "server", "nameserver" and "log" require |
| an address. |
| |
| This address can be a host name, an IPv4 address, an IPv6 address, or '*'. |
| The '*' is equal to the special address "0.0.0.0" and can be used, in the |
| case of "bind" or "dgram-bind", to listen on all IPv4 addresses of the |
| system. The IPv6 equivalent is '::'. |
| |
| Depending on the statement, a port or port range follows the IP address. |
| This is mandatory on "bind" statements and optional on "server" lines. |
| |
| This address can also begin with a slash '/'. It is then considered as the |
| "unix" family, and the '/' and following characters must be present in the |
| path. |
| |
| The default socket type or transport method ("datagram" or "stream") depends |
| on the configuration statement using the address. Indeed, "bind" and |
| "server" will use a "stream" socket type by default whereas "log", |
| "nameserver" or "dgram-bind" will use a "datagram" one. |
| |
| Optionally, a prefix could be used to force the address family and/or the |
| socket type and the transport method. |
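| |
| For instance, a minimal sketch (all addresses, ports and paths below are |
| illustrative only) combining several of these address forms: |
| |
|     global |
|         log 192.0.2.20:514 local0       # "log" defaults to datagram (UDP) |
| |
|     frontend fe |
|         mode http |
|         bind *:80                       # '*' = 0.0.0.0, port is mandatory |
|         bind /var/run/haproxy-fe.sock   # leading '/' means "unix" family |
| |
|     backend be |
|         mode http |
|         server srv1 192.0.2.10:8080     # port optional on "server" lines |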
| |
| |
| 11.1. Address family prefixes |
| ----------------------------- |
| |
| 'abns@<name>' following <name> is an abstract namespace (Linux only). |
| |
| 'fd@<n>' following address is a file descriptor <n> inherited from the |
| parent. The fd must be bound and may or may not already be |
| listening. |
| |
| 'ip@<address>[:port1[-port2]]' following <address> is considered as an IPv4 or |
| IPv6 address depending on the syntax. Depending |
| on the statement using this address, a port or |
| a port range may or must be specified. |
| |
| 'ipv4@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv4 address. Depending on the statement |
| using this address, a port or a port range |
| may or must be specified. |
| |
| 'ipv6@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv6 address. Depending on the statement |
| using this address, a port or a port range |
| may or must be specified. |
| |
| 'sockpair@<n>' following address is the file descriptor of a connected unix |
| socket or of a socketpair. During a connection, the initiator |
| creates a pair of connected sockets, and passes one of them |
| over the FD to the other end. The listener waits to receive |
| the FD from the unix socket and uses it as if it were the FD |
| of an accept(). Should be used carefully. |
| |
| 'unix@<path>' following string is considered as a UNIX socket <path>. This |
| prefix is useful to declare a UNIX socket path which does not |
| start with a slash '/'. |
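| |
| For instance, a few hypothetical uses of these prefixes (addresses, paths |
| and descriptor numbers are illustrative only): |
| |
|     frontend front-pfx |
|         mode http |
|         bind ipv4@192.0.2.1:8080      # force the IPv4 family |
|         bind unix@run/haproxy.sock    # UNIX path without a leading '/' |
|         bind abns@haproxy-internal    # abstract namespace (Linux only) |
|         bind fd@3                     # assumes FD 3 is an inherited, |
|                                       # already-bound listening socket |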
| |
| |
| 11.2. Socket type prefixes |
| -------------------------- |
| |
| Previous "Address family prefixes" can also be prefixed to force the socket |
| type and the transport method. The default depends of the statement using |
| this address but in some cases the user may force it to a different one. |
| This is the case for "log" statement where the default is syslog over UDP |
| but we could force to use syslog over TCP. |
| |
| Those prefixes were designed for internal purposes and users should instead |
| use the aliases of the next section "11.3. Protocol prefixes". However these |
| can sometimes be convenient, for example in combination with inherited |
| sockets known by their file descriptor number, in which case the address |
| family is "fd" and the socket type must be declared. |
| |
| If users need one of those prefixes to perform what they expect because |
| they cannot configure the same using the protocol prefixes, they should |
| report this to the maintainers. |
| |
| 'stream+<family>@<address>' forces socket type and transport method |
| to "stream" |
| |
| 'dgram+<family>@<address>' forces socket type and transport method |
| to "datagram". |
| |
| 'quic+<family>@<address>' forces socket type to "datagram" and transport |
| method to "stream". |
| |
| |
| |
| 11.3. Protocol prefixes |
| ----------------------- |
| |
| 'quic4@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv4 address but socket type is forced to |
| "datagram" and the transport method is forced |
| to "stream". Depending on the statement using |
| this address, a UDP port or port range can or |
| must be specified. It is equivalent to |
| "quic+ipv4@". |
| |
| 'quic6@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv6 address but socket type is forced to |
| "datagram" and the transport method is forced |
| to "stream". Depending on the statement using |
| this address, a UDP port or port range can or |
| must be specified. It is equivalent to |
| "quic+ipv6@". |
| |
| 'tcp@<address>[:port1[-port2]]' following <address> is considered as an IPv4 |
| or IPv6 address depending on the syntax but |
| socket type and transport method are forced to |
| "stream". Depending on the statement using |
| this address, a port or a port range can or |
| must be specified. It is considered as an alias |
| of 'stream+ip@'. |
| |
| 'tcp4@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv4 address but socket type and transport |
| method are forced to "stream". Depending on the |
| statement using this address, a port or port |
| range can or must be specified. |
| It is considered as an alias of 'stream+ipv4@'. |
| |
| 'tcp6@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv6 address but socket type and transport |
| method are forced to "stream". Depending on the |
| statement using this address, a port or port |
| range can or must be specified. |
| It is considered as an alias of 'stream+ipv6@'. |
| |
| 'udp@<address>[:port1[-port2]]' following <address> is considered as an IPv4 |
| or IPv6 address depending on the syntax but |
| socket type and transport method are forced to |
| "datagram". Depending on the statement using |
| this address, a port or a port range can or |
| must be specified. It is considered as an alias |
| of 'dgram+ip@'. |
| |
| 'udp4@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv4 address but socket type and transport |
| method are forced to "datagram". Depending on |
| the statement using this address, a port or |
| port range can or must be specified. |
| It is considered as an alias of 'dgram+ipv4@'. |
| |
| 'udp6@<address>[:port1[-port2]]' following <address> is always considered as |
| an IPv6 address but socket type and transport |
| method are forced to "datagram". Depending on |
| the statement using this address, a port or |
| port range can or must be specified. |
| It is considered as an alias of 'dgram+ipv6@'. |
| |
| 'uxdg@<path>' following string is considered as a unix socket <path> but |
| transport method is forced to "datagram". It is considered as |
| an alias of 'dgram+unix@'. |
| |
| 'uxst@<path>' following string is considered as a unix socket <path> but |
| transport method is forced to "stream". It is considered as |
| an alias of 'stream+unix@'. |
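| |
| For instance, a few hypothetical uses of these aliases (addresses, ports and |
| the certificate path are illustrative only): |
| |
|     global |
|         log udp4@192.0.2.5:514 local0       # same as dgram+ipv4@ |
| |
|     frontend front-proto |
|         mode http |
|         bind tcp4@0.0.0.0:8080              # same as stream+ipv4@ |
|         bind quic4@0.0.0.0:8443 ssl crt /etc/haproxy/site.pem alpn h3 |
|         bind uxst@/var/run/haproxy-fe.sock  # same as stream+unix@ |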
| |
| In future versions, other prefixes could be used to specify protocols like |
| QUIC, which provides a stream transport based on sockets of type "datagram". |
| |
| /* |
| * Local variables: |
| * fill-column: 79 |
| * End: |
| */ |