BUG/MEDIUM: h2: always reassemble the Cookie request header field

The special case of the Cookie header field was overlooked in the
implementation, on the assumption that most servers handle cookie lists,
but as reported on Discourse, this is not the case at all:

  https://discourse.haproxy.org/t/h2-cookie-header-splitted-header/1742

This patch fixes this by skipping all occurrences of the Cookie header
in the request while building the H1 request, then building a single
Cookie header with all values appended at once, as required by
RFC7540#8.1.2.5.
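
For illustration (values made up), a request arriving over H2 with its
cookie split into several header fields:

  cookie: a=1
  cookie: b=2
  cookie: c=3

must be passed to the H1 side as a single field, with the values joined
by "; " as the RFC mandates:

  cookie: a=1; b=2; c=3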

In order to build the list of values, the header list itself is used as
a linked list: once a Cookie header's name has been recognized, its
n.len field is no longer needed, so it is reused to store the index of
the next Cookie occurrence, with -1 marking the end of the chain. Since
there can't be more cookies than headers, the indices always fit. This
makes the list walking quite efficient and ensures all values are found
in a single pass without having to rescan the list.
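
For readers unfamiliar with the trick, below is a minimal standalone
sketch of it (simplified types and names, not the actual HAProxy
structures):

  #include <stdio.h>
  #include <string.h>

  struct hdr {
          const char *name;
          int n_len;          /* name length, recycled as "next cookie index" */
          const char *value;
  };

  int main(void)
  {
          struct hdr list[] = {
                  { "accept", 6, "text/html" },
                  { "cookie", 6, "a=1" },
                  { "cookie", 6, "b=2" },
                  { "cookie", 6, "c=3" },
                  { NULL, 0, NULL }          /* end of list */
          };
          int ck = -1, lck = -1;             /* first and last cookie indexes */
          int idx;

          for (idx = 0; list[idx].name; idx++) {
                  if (strcmp(list[idx].name, "cookie") != 0)
                          continue;          /* regular headers emitted here */
                  list[idx].n_len = -1;      /* provisional end of the chain */
                  if (ck < 0)
                          ck = idx;          /* first cookie found */
                  else
                          list[lck].n_len = idx; /* link previous cookie to this one */
                  lck = idx;
          }

          if (ck >= 0) {
                  printf("cookie: ");
                  do {
                          printf("%s", list[ck].value);
                          ck = list[ck].n_len;   /* follow the chain */
                          if (ck >= 0)
                                  printf("; ");
                  } while (ck >= 0);
                  printf("\n");              /* prints: cookie: a=1; b=2; c=3 */
          }
          return 0;
  }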

A test case provided by Lukas shows that it works properly:

 > GET /? HTTP/1.1
 > user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
 > accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
 > accept-language: en-US,en;q=0.5
 > accept-encoding: gzip, deflate
 > referer: https://127.0.0.1:4443/?expectValue=1511294406
 > host: 127.0.0.1:4443

 < HTTP/1.1 200 OK
 < Server: nginx
 < Date: Tue, 21 Nov 2017 20:00:13 GMT
 < Content-Type: text/html; charset=utf-8
 < Transfer-Encoding: chunked
 < Connection: keep-alive
 < X-Powered-By: PHP/5.3.10-1ubuntu3.26
 < Set-Cookie: HAPTESTa=1511294413
 < Set-Cookie: HAPTESTb=1511294413
 < Set-Cookie: HAPTESTc=1511294413
 < Set-Cookie: HAPTESTd=1511294413
 < Content-Encoding: gzip

 > GET /?expectValue=1511294413 HTTP/1.1
 > user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
 > accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
 > accept-language: en-US,en;q=0.5
 > accept-encoding: gzip, deflate
 > host: 127.0.0.1:4443
 > cookie: SERVERID=s1; HAPTESTa=1511294413; HAPTESTb=1511294413; HAPTESTc=1511294413; HAPTESTd=1511294413

Many thanks to @Nurza, @adrianw and @lukastribus for their helpful reports
and investigations here.
diff --git a/src/h2.c b/src/h2.c
index 28518a6..3d03b12 100644
--- a/src/h2.c
+++ b/src/h2.c
@@ -116,6 +116,9 @@
  *   - n.name ignored, n.len == 0 : end of list
  *   - in all cases except the end of list, v.name and v.len must designate a
  *     valid value.
+ *
+ * The Cookie header will be reassembled at the end, and for this, the <list>
+ * will be used to create a linked list, so its contents may be destroyed.
  */
 int h2_make_h1_request(struct http_hdr *list, char *out, int osize)
 {
@@ -123,9 +126,11 @@
 	char *out_end = out + osize;
 	uint32_t fields; /* bit mask of H2_PHDR_FND_* */
 	uint32_t idx;
+	int ck, lck; /* cookie index and last cookie index */
 	int phdr;
 	int ret;
 
+	lck = ck = -1; // no cookie for now
 	fields = 0;
 	for (idx = 0; list[idx].n.len != 0; idx++) {
 		if (!list[idx].n.ptr) {
@@ -170,6 +175,19 @@
 		if (isteq(list[idx].n, ist("host")))
 			fields |= H2_PHDR_FND_HOST;
 
+		/* cookie requires special processing at the end */
+		if (isteq(list[idx].n, ist("cookie"))) {
+			list[idx].n.len = -1;
+
+			if (ck < 0)
+				ck = idx;
+			else
+				list[lck].n.len = idx;
+
+			lck = idx;
+			continue;
+		}
+
 		if (out + list[idx].n.len + 2 + list[idx].v.len + 2 > out_end) {
 			/* too large */
 			goto fail;
@@ -209,6 +227,40 @@
 		*(out++) = '\n';
 	}
 
+	/* now we may have to build a cookie list. We'll dump the values of all
+	 * visited headers.
+	 */
+	if (ck >= 0) {
+		if (out + 8 > out_end) {
+			/* too large */
+			goto fail;
+		}
+		memcpy(out, "cookie: ", 8);
+		out += 8;
+
+		do {
+			if (out + list[ck].v.len + 2 > out_end) {
+				/* too large */
+				goto fail;
+			}
+			memcpy(out, list[ck].v.ptr, list[ck].v.len);
+			out += list[ck].v.len;
+			ck = list[ck].n.len;
+
+			if (ck >= 0) {
+				*(out++) = ';';
+				*(out++) = ' ';
+			}
+		} while (ck >= 0);
+
+		if (out + 2 > out_end) {
+			/* too large */
+			goto fail;
+		}
+		*(out++) = '\r';
+		*(out++) = '\n';
+	}
+
 	/* And finish */
 	if (out + 2 > out_end) {
 		/* too large */