BUG/MEDIUM: jwt: fix base64 decoding error detection

Tim reported that a decoding error from the base64 function wouldn't
be matched in case of bad input, possibly resulting in -1 being stored
into decoded_sig->data. In the case of HMAC+SHA this is harmless, as
the comparison is made using memcmp() after checking for length
equality, but in the case of RSA/ECDSA this value is passed as a
size_t to EVP_DigestVerifyFinal() and the outcome then depends on the
library.

The fix simply consists in checking the intermediate result before
storing it.

That's precisely what happened with one of the regtests, which
returned 0 instead of 4 on the intentionally defective token, so the
regtest was fixed as well.

No backport is needed as this is new in this release.
diff --git a/src/jwt.c b/src/jwt.c
index fd46262..0e23305 100644
--- a/src/jwt.c
+++ b/src/jwt.c
@@ -292,10 +292,10 @@
 {
 	struct jwt_item items[JWT_ELT_MAX] = { { 0 } };
 	unsigned int item_num = JWT_ELT_MAX;
-
 	struct buffer *decoded_sig = NULL;
 	struct jwt_ctx ctx = {};
 	enum jwt_vrfy_status retval = JWT_VRFY_KO;
+	int ret;
 
 	ctx.alg = jwt_parse_alg(alg->area, alg->data);
 
@@ -325,13 +325,14 @@
 	if (!decoded_sig)
 		return JWT_VRFY_OUT_OF_MEMORY;
 
-	decoded_sig->data = base64urldec(ctx.signature.start, ctx.signature.length,
-					 decoded_sig->area, decoded_sig->size);
-	if (decoded_sig->data == (unsigned int)-1) {
+	ret = base64urldec(ctx.signature.start, ctx.signature.length,
+	                   decoded_sig->area, decoded_sig->size);
+	if (ret == -1) {
 		retval = JWT_VRFY_INVALID_TOKEN;
 		goto end;
 	}
 
+	decoded_sig->data = ret;
 	ctx.key = key->area;
 	ctx.key_length = key->data;