BUG/MEDIUM: init: fix fd_hard_limit default in compute_ideal_maxconn

This commit fixes 41275a691 ("MEDIUM: init: set default for fd_hard_limit via
DEFAULT_MAXFD").

fd_hard_limit is taken into account implicitly via the 'ideal_maxconn' value
in all maxconn adjustments, when global.rlimit_memmax is set:

	MIN(global.maxconn, capped by global.rlimit_memmax, ideal_maxconn);

It also caps the provided global.rlimit_nofile, if it couldn't be set as the
current process fd limit (see more details in the main() code).
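
For illustration, here is a minimal sketch of that capping logic; the function
name and parameters below are placeholders for this explanation, not the
actual variables used in compute_ideal_maxconn():

	/* Illustrative sketch only: the final maxconn is the minimum of the
	 * memory-capped configured value and ideal_maxconn, which itself is
	 * already bounded by fd_hard_limit.
	 */
	#define MIN(a, b) ((a) < (b) ? (a) : (b))

	static int sketch_clamped_maxconn(int cfg_maxconn, int mem_maxconn,
	                                  int ideal_maxconn)
	{
		int maxconn = MIN(cfg_maxconn, mem_maxconn); /* rlimit_memmax cap */

		return MIN(maxconn, ideal_maxconn);          /* fd_hard_limit cap */
	}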

So, let's set the default value for fd_hard_limit only when no other
haproxy-specific limit is provided, i.e. rlimit_memmax, maxconn or
rlimit_nofile. Otherwise we may break users' configs.

Please note that in master-worker mode, the master does not need
DEFAULT_MAXFD (1048576) either, as we explicitly limit its maxconn to 100.

Must be backported to all stable versions down to v2.6.0 included, like the
commit above.

(cherry picked from commit 16a5fac4bba1cb2bb6cf686066256aa141515feb)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit bfd43e79996fd1aebed8942c2c27456e380f88e4)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 30f1f4e78333d2d2835535a42ca8c854c377f586)
Signed-off-by: Willy Tarreau <w@1wt.eu>
diff --git a/src/haproxy.c b/src/haproxy.c
index ffd114f..8b984d0 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1395,16 +1395,28 @@
 	 *   - two FDs per connection
 	 */
 
-	/* on some modern distros for archs like amd64 fs.nr_open (kernel max) could
-	 * be in order of 1 billion, systemd since the version 256~rc3-3 bumped
-	 * fs.nr_open as the hard RLIMIT_NOFILE (rlim_fd_max_at_boot). If we are
-	 * started without global.maxconn or global.rlimit_memmax_all, we risk to
-	 * finish with computed global.maxconn = ~500000000 and computed
-	 * global.maxsock = ~1000000000. So, fdtab will be unnecessary and extremely
-	 * huge and watchdog will kill the process, when it tries to loop over the
-	 * fdtab (see fd_reregister_all).
+	/* On some modern distros for archs like amd64, fs.nr_open (kernel max)
+	 * can be on the order of 1 billion. Systemd, since version 256~rc3-3,
+	 * bumps fs.nr_open as the hard RLIMIT_NOFILE (rlim_fd_max_at_boot).
+	 * If we are started without any limits, we risk ending up with a
+	 * computed maxconn = ~500000000 and maxsock = ~2*maxconn. fdtab would
+	 * then be extremely large and the watchdog would kill the process when
+	 * it tries to loop over the fdtab (see fd_reregister_all). Please note
+	 * that fd_hard_limit is taken into account implicitly via the
+	 * 'ideal_maxconn' value in all global.maxconn adjustments, when
+	 * global.rlimit_memmax is set:
+	 *
+	 *   MIN(global.maxconn, capped by global.rlimit_memmax, ideal_maxconn);
+	 *
+	 * It also caps global.rlimit_nofile, if it couldn't be set as rlim_cur
+	 * and rlim_max. So, fd_hard_limit is a good parameter to serve as a
+	 * safeguard when no haproxy-specific limits are set, i.e.
+	 * rlimit_memmax, maxconn, rlimit_nofile. But it must be kept at zero
+	 * if at least one of these haproxy-specific limits is present in the
+	 * config or on the cmdline.
 	 */
-	if (!global.fd_hard_limit)
+	if (!global.fd_hard_limit && !global.maxconn && !global.rlimit_nofile
+	    && !global.rlimit_memmax)
 		global.fd_hard_limit = DEFAULT_MAXFD;
 
 	if (remain > global.fd_hard_limit)