BUG/CRITICAL: http: properly reject empty http header field names

The HTTP header parsers surprisingly accept empty header field names,
a leftover from the original code, which was agnostic to this.

When muxes were introduced, for H2 first, the HPACK decompressor needed
to feed header lists, and since empty header names were strictly
forbidden by the protocol, the lists of headers were purposely designed
to be terminated by an empty header field name (a principle that is
similar to H1's empty line termination). This principle was preserved
and generalized to other protocols migrated to muxes (H1/FCGI/H3 etc)
without anyone ever noticing that the H1 parser was still able to deliver
empty header field names to this list. In addition to this, it turns out
that the HPACK decompressor, despite a comment in the code, may
successfully decompress an empty header field name, and this mistake
was propagated to the QPACK decompressor as well.
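
For illustration, the termination convention looks as follows (a sketch
using HAProxy's struct http_hdr and ist types; the example values and
array size are made up):

  /* a decoded header list: name/value pairs ended by an empty name,
   * much like the empty line that ends an H1 header block */
  struct http_hdr list[3] = {
          { .n = ist("host"),   .v = ist("example.org") },
          { .n = ist("accept"), .v = ist("*/*") },
          { .n = IST_NULL,      .v = IST_NULL }  /* end of list */
  };

An empty name decoded from the wire into the middle of such a list is
indistinguishable from this terminator, which is the root of the
problem described below.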

The impact is that an empty header field name may be used to truncate
the list of headers and thus make some headers disappear. While for
H2/H3 the impact is limited as haproxy sees a request with missing
headers, and headers are not used to delimit messages, in the case of
HTTP/1, the impact is significant because the presence (and sometimes
contents) of certain sensitive headers is detected during the parsing.
Thus, some of these headers may be seen, marked as present, their value
extracted, but never delivered to upper layers and obviously not
forwarded to the other side either. The result is that certain
important header fields such as Connection, Upgrade, Host,
Content-Length, Transfer-Encoding etc. may differ between what haproxy
uses to parse/forward/route, what is observed in http-request rules
and, of course, what is forwarded. One direct consequence
is that it is possible to exploit this property in HTTP/1 to make
affected versions of haproxy forward more data than is advertised on
the other side, and bypass some access controls or routing rules by
crafting extraneous requests.  Note, however, that responses to such
requests will normally not be passed back to the client, but this can
still cause some harm.
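
To make the mechanism more concrete, a request of the following shape
(purely illustrative: the path and length are made up and this is not a
working exploit) exercises the flaw over HTTP/1. The Content-Length
placed after the empty name is seen and used by the parser to frame the
message, but it disappears from the header list delivered to the rules
and forwarded to the server, which may then take the body bytes for the
beginning of a second request:

  GET / HTTP/1.1\r\n
  Host: example.org\r\n
  :ignored\r\n
  Content-Length: 35\r\n
  \r\n
  GET /restricted HTTP/1.1\r\n
  ...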

This specific risk can mostly be worked around in the configuration
using the following rule, which relies on the bug's impact to precisely
detect the inconsistency between the known body size and the one
expected to be advertised to the server (the rule works from 2.0 to
2.8-dev):

  http-request deny if { fc_http_major 1 } !{ req.body_size 0 } !{ req.hdr(content-length) -m found } !{ req.hdr(transfer-encoding) -m found } !{ method CONNECT }

This will exclusively block such carefully crafted requests delivered
over HTTP/1. HTTP/2 and HTTP/3 do not need content-length, and a body
that arrives without being announced with a content-length will be
forwarded using transfer-encoding, hence will not cause discrepancies.
In HAProxy 2.0 in legacy mode ("no option http-use-htx"), this rule will
simply have no effect but will not cause trouble either.

A clean solution would consist in modifying the loops iterating over
these header lists to check the header name's pointer instead of its
length (since both are zero at the end of the list), but this requires
touching tens of places and it's very easy to miss one. Functions such
as htx_add_header(), htx_add_trailer(), htx_add_all_headers() would be
good starting points for such a possible future change.
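
As a sketch of what such a change would look like (the loop shape and
the process_hdr() function below are illustrative, not the actual HTX
code):

  /* current pattern: stops at the first empty name, so an empty
   * header inserted in the middle truncates the list early */
  for (i = 0; list[i].n.len; i++)
          process_hdr(&list[i]);

  /* proposed pattern: stop on the name's pointer instead; only the
   * real terminator has a NULL pointer (and a zero length) */
  for (i = 0; list[i].n.ptr; i++)
          process_hdr(&list[i]);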

Instead, the current fix focuses on blocking empty header names where
they are first inserted, that is, in the H1/HPACK/QPACK decoders. One
benefit
of the current solution (for H1) is that it allows "show errors" to
report a precise diagnostic when facing such invalid HTTP/1 requests,
with the exact location of the problem and the originating address:

  $ printf "GET / HTTP/1.1\r\nHost: localhost\r\n:empty header\r\n\r\n" | nc 0 8001
  HTTP/1.1 400 Bad request
  Content-length: 90
  Cache-Control: no-cache
  Connection: close
  Content-Type: text/html

  <html><body><h1>400 Bad request</h1>
  Your browser sent an invalid request.
  </body></html>

  $ socat - /var/run/haproxy.stat <<< "show errors"
  Total events captured on [10/Feb/2023:16:29:37.530] : 1

  [10/Feb/2023:16:29:34.155] frontend decrypt (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #0, src 127.0.0.1:31092
    buffer starts at 0 (including 0 out), 16334 free,
    len 50, wraps at 16336, error at position 33
    H1 connection flags 0x00000000, H1 stream flags 0x00000810
    H1 msg state MSG_HDR_NAME(17), H1 msg flags 0x00001410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    00000  GET / HTTP/1.1\r\n
    00016  Host: localhost\r\n
    00033  :empty header\r\n
    00048  \r\n
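
For reference, the H1 side of the fix boils down to a test of this
shape in the header field name parser (a simplified sketch with
illustrative variable names, not the literal patch):

  /* in the H1_MSG_HDR_NAME state: finding the colon at the very
   * start of the line means the field name is empty */
  if (*ptr == ':' && ptr == name_start)
          goto http_msg_invalid;  /* yields the 400 and capture above */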

I want to extend sincere and warm thanks for their great work to the
team of security researchers who found the issue together and reported
it: Bahruz Jabiyev, Anthony Gavazzi, and Engin
Kirda from Northeastern University, Kaan Onarlioglu from Akamai
Technologies, Adi Peleg and Harvey Tuch from Google. And kudos to Amaury
Denoyelle from HAProxy Technologies for spotting that the HPACK and
QPACK decoders would let this pass despite the comment explicitly
saying otherwise.

This fix must be backported as far as 2.0. The QPACK changes can be
dropped before 2.6. In 2.0 there is also the equivalent code for legacy
mode, which doesn't suffer from the list truncation, but it should be
fixed regardless.

CVE-2023-25725 was assigned to this issue.

diff --git a/src/hpack-dec.c b/src/hpack-dec.c
index 147021c..052a7c3 100644
--- a/src/hpack-dec.c
+++ b/src/hpack-dec.c
@@ -420,6 +420,15 @@
 			/* <name> and <value> are correctly filled here */
 		}
 
+		/* We must not accept empty header names (forbidden by the spec and used
+		 * as a list termination).
+		 */
+		if (!name.len) {
+			hpack_debug_printf("##ERR@%d##\n", __LINE__);
+			ret = -HPACK_ERR_INVALID_ARGUMENT;
+			goto leave;
+		}
+
 		/* here's what we have here :
 		 *   - name.len > 0
 		 *   - value is filled with either const data or data allocated from tmp