Revert "MEDIUM: init: set default for fd_hard_limit via DEFAULT_MAXFD"

This reverts the following commit:
  e3aefc50d8 ("MEDIUM: init: set default for fd_hard_limit via DEFAULT_MAXFD")

Lukas expressed some concerns about possible consequences of this change
so let's wait for a consensus to be found in mainline before we backport
anything (if at all), as we certainly don't want to change the behavior
after it's backported. No version was released with this patch, so this is
the right moment to revert it. For reference, the discussion is here:

    https://www.mail-archive.com/haproxy@formilux.org/msg45098.html

Please note that if it were to be re-introduced later, it should be
applied along with a small fix that already references it.
Willy Tarreau 2024-07-11 15:28:46 +02:00
parent 73cb6e9fa9
commit c742566b5b
3 changed files with 4 additions and 39 deletions


@@ -1789,14 +1789,9 @@ fd-hard-limit <number>
   much RAM for regular usage. The fd-hard-limit setting is provided to enforce
   a possibly lower bound to this limit. This means that it will always respect
   the system-imposed limits when they are below <number> but the specified
-  value will be used if system-imposed limits are higher. By default
-  fd-hard-limit is set to 1048576. This default could be changed via
-  DEFAULT_MAXFD compile-time variable, that could serve as the maximum (kernel)
-  system limit, if RLIMIT_NOFILE hard limit is extremely large. fd-hard-limit
-  set in global section allows to temporarily override the value provided via
-  DEFAULT_MAXFD at the build-time. In the example below, no other setting is
-  specified and the maxconn value will automatically adapt to the lower of
-  "fd-hard-limit" and the RLIMIT_NOFILE limit:
+  value will be used if system-imposed limits are higher. In the example below,
+  no other setting is specified and the maxconn value will automatically adapt
+  to the lower of "fd-hard-limit" and the system-imposed limit:
 
      global
         # use as many FDs as possible but no more than 50000


@@ -295,24 +295,6 @@
 #define DEFAULT_MAXCONN 100
 #endif
 
-/* Default file descriptor limit.
- *
- * DEFAULT_MAXFD explicitly reduces the hard RLIMIT_NOFILE, which is used by the
- * process as the base value to calculate the default global.maxsock, if
- * global.maxconn, global.rlimit_memmax are not defined. This is useful in the
- * case, when hard nofile limit has been bumped to fs.nr_open (kernel max),
- * which is extremely large on many modern distros. So, we will also finish with
- * an extremely large default global.maxsock. The only way to override
- * DEFAULT_MAXFD, if defined, is to set fd_hard_limit in the config global
- * section. If DEFAULT_MAXFD is not set, a reasonable maximum of 1048576 will be
- * used as the default value, which almost guarantees that a process will
- * correctly start in any situation and will be not killed then by watchdog,
- * when it will loop over the allocated fdtab.
- */
-#ifndef DEFAULT_MAXFD
-#define DEFAULT_MAXFD 1048576
-#endif
-
 /* Define a maxconn which will be used in the master process once it re-exec to
  * the MODE_MWORKER_WAIT and won't change when SYSTEM_MAXCONN is set.
  *


@@ -1439,19 +1439,7 @@ static int compute_ideal_maxconn()
	 * - two FDs per connection
	 */
 
-	/* on some modern distros for archs like amd64 fs.nr_open (kernel max) could
-	 * be in order of 1 billion, systemd since the version 256~rc3-3 bumped
-	 * fs.nr_open as the hard RLIMIT_NOFILE (rlim_fd_max_at_boot). If we are
-	 * started without global.maxconn or global.rlimit_memmax_all, we risk to
-	 * finish with computed global.maxconn = ~500000000 and computed
-	 * global.maxsock = ~1000000000. So, fdtab will be unnecessary and extremely
-	 * huge and watchdog will kill the process, when it tries to loop over the
-	 * fdtab (see fd_reregister_all).
-	 */
-	if (!global.fd_hard_limit)
-		global.fd_hard_limit = DEFAULT_MAXFD;
-
-	if (remain > global.fd_hard_limit)
+	if (global.fd_hard_limit && remain > global.fd_hard_limit)
		remain = global.fd_hard_limit;
 
	/* subtract listeners and checks */