Commit Graph

18984 Commits

Willy Tarreau
24f5a24c18 BUILD: 51d: fix build issue with recent compilers
With gcc-11.2 and binutils-2.37 I'm getting link errors due to multiply
defined symbols when enabling USE_51DEGREES_V4. This is caused by two
variables being present in hash.h instead of hash.c, hence they're
defined twice.

This patch just moves them to hash.c and turns their declarations in
hash.h into extern ones.
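
For illustration, the general pattern looks like this (variable names are
made up, not the actual 51Degrees ones):

  /* hash.h: declarations only, no storage is allocated here */
  extern uint32_t hash_seed;
  extern uint32_t hash_table[256];

  /* hash.c: the single definition of each symbol */
  uint32_t hash_seed = 0x9e3779b9;
  uint32_t hash_table[256];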

No backport is needed since this was introduced in 2.8-dev.
2022-12-15 19:36:13 +01:00
Amaury Denoyelle
5ac6b3b125 BUG/MINOR: quic: fix crash on PTO rearm if anti-amplification reset
There is a possible segfault when accessing qc->timer_task in
quic_conn_io_cb() without testing it. It seems however very rare as it
requires several conditions to be met at once.
* quic_conn must be in CLOSING state after having sent a
  CONNECTION_CLOSE, which frees qc.timer_task
* quic_conn handshake must still be in progress : in fact, qc.timer_task
  is accessed on this path because the anti-amplification limit has been
  lifted.

I was unable thus far to trigger it, but benchmarking tests seem to have
fired it, with the following backtrace as a result :

  #0  _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
  195             state = _HA_ATOMIC_OR_FETCH(&t->state, f);
  [Current thread is 1 (Thread 0x7fc714ff1700 (LWP 14305))]
  (gdb) bt
  #0  _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
  #1  quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
  #2  0x00005620ecedab6e in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:596
  #3  0x00005620ecedb63c in process_runnable_tasks () at src/task.c:861
  #4  0x00005620ecea971a in run_poll_loop () at src/haproxy.c:2913
  #5  0x00005620ecea9cf9 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3102
  #6  0x00007fc773c3f609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
  #7  0x00007fc77372d133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
  (gdb) up
  #1  quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
  4393                            task_wakeup(qc->timer_task, TASK_WOKEN_MSG);
  (gdb) p qc
  $1 = (struct quic_conn *) 0x7fc5d0df49c0
  (gdb) p qc->timer_task
  $2 = (struct task *) 0x0
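
For reference, the fix essentially boils down to a guard of this shape
(simplified, not the exact code):

  /* simplified: only rearm the timer task if it still exists */
  if (qc->timer_task)
          task_wakeup(qc->timer_task, TASK_WOKEN_MSG);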

This fix should be backported up to 2.6.
2022-12-15 17:02:19 +01:00
Aurelien DARRAGON
16c9ca94ef MINOR: stats: make show info json future-proof
This is a follow-up of "BUG/MINOR: stats: fix show stat json buffer limitation"

However, this time it is purely preemptive, as we have not reached the buffer
limitation yet. But now is the proper time, so that this won't be an issue
in the upcoming versions.

No backport needed.
2022-12-15 16:53:49 +01:00
Aurelien DARRAGON
42b18fb645 BUG/MINOR: stats: fix show stat json buffer limitation
The json output type is a lot more verbose than the other output types.
Because of this and the increasing number of metrics implemented within
haproxy, we are starting to reach the max bufsize limit (defaults to 16k)
when dumping stats to json since 2.6-dev1.
This results in the stats output being truncated with
    "[{"errorStr":"output buffer too short"}]"

This was reported by Gabriel in #1964.

Thanks to "MINOR: stats: introduce stats field ctx", we can now make
multipart (using multiple buffers) dumping, in case a single buffer is not big
enough to hold the complete stat line.
For now, only stats_dump_fields_json() makes use of it as it is by
far the most verbose stats output type.
(csv, typed and html outputs should be good for a while and may use this
capability if the need arises in some distant future)

--

It could be backported to 2.6 and 2.7.
This commit depends on:
  - MINOR: stats: provide ctx for dumping functions
  - MINOR: stats: introduce stats field ctx
2022-12-15 16:53:49 +01:00
Aurelien DARRAGON
5594184190 MINOR: stats: introduce stats field ctx
Add a new value in stats ctx: field.
Implement field support in line dumping parent functions
stats_print_proxy_field_json() and stats_dump_proxy_to_buffer().

This will allow child dumping functions to support partial line dumping
when needed, i.e. when the dumping buffer is exhausted: do a partial send
and wait for a new buffer to finish the dump. Thanks to the field ctx, the
function can resume dumping where it left off on the previous
(unterminated) invocation.
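
As a rough illustration of the idea (the names below are hypothetical, not
the actual stats code):

  /* resume at ctx->field if the previous dump stopped in the middle of a line */
  for (field = ctx->field; field < stats_count; field++) {
          if (!dump_one_field(out, &stats[field])) {
                  ctx->field = field;  /* remember where we stopped */
                  return 0;            /* wait for a new buffer */
          }
  }
  ctx->field = 0;  /* line fully dumped, reset for the next one */
  return 1;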
2022-12-15 16:53:49 +01:00
Aurelien DARRAGON
e76a027b0b MINOR: stats: provide ctx for dumping functions
This is a minor refactor to allow the stats_dump_info_* and stats_dump_fields_*
functions to directly access the stat ctx pointer instead of having the stat
ctx struct members explicitly passed to them.

This will allow dumping functions to benefit from upcoming ctx updates.
2022-12-15 16:53:49 +01:00
Remi Tricot-Le Breton
4cf0d3f1e8 BUG/MINOR: ssl: Fix memory leak of find_chain in ssl_sock_load_cert_chain
The certificate chain that gets passed into the SSL_CTX through
SSL_CTX_set1_chain has its reference counter increased by OpenSSL
itself. But since the ssl_sock_load_cert_chain function might create a
brand new certificate chain if none exists in the ckch_data
(sk_X509_new_null), we ended up returning a new certificate chain to
the caller that was never destroyed.
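
The resulting pattern is roughly the following (simplified sketch; only the
SSL_CTX_set1_chain/sk_X509_* calls are real OpenSSL API, the other names
are illustrative):

  find_chain = data->chain;
  if (!find_chain)
          find_chain = sk_X509_new_null();  /* chain created locally */

  /* SSL_CTX_set1_chain() takes its own reference on the chain... */
  SSL_CTX_set1_chain(ctx, find_chain);

  /* ...so a chain we allocated here must be released before returning */
  if (find_chain != data->chain)
          sk_X509_pop_free(find_chain, X509_free);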

This patch can be backported to all stable branches but it might need to
be reworked for branches older than 2.4 because of commit ec805a32b9
which refactored the modified code.
2022-12-15 16:33:43 +01:00
Remi Tricot-Le Breton
e3d5f9a92b MINOR: ssl: Remove unnecessary alloc'ed trash chunk in show ocsp-response
When displaying the content of an OCSP response in the CLI, a buffer is
alloc'ed when a temporary buffer would be enough.
2022-12-15 16:33:36 +01:00
Remi Tricot-Le Breton
9334843859 MINOR: ssl: Remove unneeded buffer allocation in show ocsp-response
When calling 'show ssl ocsp-response' from the CLI, a temporary buffer
was created in parse_binary when we could just use a local static buffer
instead. This does not change the behavior of the function, it just
simplifies it.
2022-12-15 16:33:25 +01:00
Amaury Denoyelle
c4913f6b54 MINOR: h3: check return values of htx_add_* on headers parsing
Check return values of htx_add_header()/htx_add_eof() during H3 HEADERS
conversion to HTX. In case of error, the connection is interrupted with
a CONNECTION_CLOSE.
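
A simplified sketch of the kind of check added (not the literal code):

  /* simplified: abort header conversion if the HTX buffer cannot hold it */
  if (!htx_add_header(htx, list[hdr].n, list[hdr].v))
          goto err;  /* leads to a CONNECTION_CLOSE */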

This commit is useful to detect abnormal situations during headers
parsing. It should be backported up to 2.7.
2022-12-15 11:48:30 +01:00
Amaury Denoyelle
788fc05401 BUG/MINOR: h3: fix memleak on HEADERS parsing failure
If an error is triggered during H3 HEADERS parsing, the buffer allocated
for HTX data is not freed.

To prevent this memleak, all return paths have been centralized using
goto statements.
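
The general shape of such a centralized exit path (labels and names below
are illustrative only):

  ret = -1;
  if (parsing_failed)
          goto out;
  ret = len;  /* success */
 out:
  /* single place where the HTX buffer is released, unless the
   * sedesc took ownership of it
   */
  if (!buf_taken_by_sedesc)
          b_free(&htx_buf);
  return ret;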

Also, as a small bonus, offer_buffers() is not called anymore if the
buffer is not freed because the sedesc has taken it. However this change
probably has no noticeable effect as dynamic buffers management is not
functional currently.

This should be backported up to 2.6.
2022-12-15 11:48:30 +01:00
Amaury Denoyelle
19942e3859 BUG/MEDIUM: h3: fix cookie header parsing
Cookie headers are treated specifically to merge multiple occurrences into
a single HTX header. This is handled in an if-block condition inside the
'while (1)' loop for headers parsing. The length value of the ist
representing the cookie header is set to -1 by http_cookie_register(). The
problem is that a continue statement is then used, but without
incrementing 'hdr_idx' to move on to the next header.

This issue was revealed by the introduction of commit :
  commit d6fb7a0e0f
  BUG/MEDIUM: h3: reject request with invalid header name

Before the aforementioned patch, the bug was hidden : on the next while
iteration, all isteq() invocations fail to match since the cookie header
length is now set to -1. htx_add_header() fails silently because the
length is invalid. hdr_idx is finally incremented, which allows parsing
to proceed normally with the next header.

Now, a cookie header with length -1 does not pass the test on header name
conformance introduced by the above patch. Thus, a spurious
RESET_STREAM is emitted. This behavior has been reported on the mailing
list by Shawn Heisey who found out that browsers disabled H3 usage due
to the RESET_STREAM received. Big thanks to him for his testing on the
master branch.

This issue is simply resolved by incrementing hdr_idx before the continue
statement. It could have been detected earlier if the htx_add_header()
return value had been checked. This will be the subject of a dedicated
commit outside of the backport scope.
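
In simplified form, the fix amounts to (argument names are approximate):

  /* cookie occurrences are merged separately, then move on to the next header */
  if (isteqi(list[hdr_idx].n, ist("cookie"))) {
          http_cookie_register(list, hdr_idx, &cookie_first, &cookie_last);
          hdr_idx++;  /* previously missing: the same entry was re-examined */
          continue;
  }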

This must be backported up to 2.6.
2022-12-15 11:41:51 +01:00
Amaury Denoyelle
4328b61bb3 MINOR: http-htx: add BUG_ON to prevent API error on http_cookie_register
http_cookie_register() must be called on the first invocation with the last
two arguments both pointing to a negative value. After it, they will be
updated to a valid index.

We must never have only the last argument as NULL, as this would cause
invalid array addressing. To clarify this, a BUG_ON() statement is
introduced.
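
In its simplest form, the guard looks like this (argument names are
hypothetical):

  /* the two index pointers must be provided (or omitted) together */
  BUG_ON(first && !last);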

This is linked to github issue #1967.
2022-12-15 11:41:39 +01:00
Christopher Faulet
8f1f1b0579 BUG/MINOR: mux-h1: Fix test instead a BUG_ON() in h1_send_error()
In the previous patch (86924532db "BUG/MINOR: mux-h1: Fix test instead a
BUG_ON() in h1_send_error()"), a BUG_ON() condition was accidentally
inverted in h1_send_error(). The stream-connector must be NULL to be able
to destroy the H1 stream.

This patch must be backported with the commit above (to 2.7).
2022-12-15 09:59:51 +01:00
Christopher Faulet
da93802ffc BUG/MEDIUM: mux-h1: Don't release H1 stream upgraded from TCP on error
When an error occurs during request parsing, the H1 multiplexer is
responsible for sending a response to the client and for releasing the H1
stream and the H1 connection. In HTTP mode, it is not an issue because at
this stage the H1 connection is in embryonic state. Thus it can be
released immediately.

However, it is a problem if the connection was first upgraded from a TCP
connection. In this case, a stream-connector is attached. The H1 stream is
not orphaned. Thus it must not be released at this stage. It must be
detached first. Otherwise a BUG_ON() is triggered in h1s_destroy().

So now, the H1S is destroyed on early errors but only if the H1C is in
embryonic state.

This patch may be related to #1966. It must be backported to 2.7.
2022-12-15 09:51:31 +01:00
Ilya Shipitsin
f5994fc692 CI: github: split matrix for development and stable branches
ML ref: https://www.mail-archive.com/haproxy@formilux.org/msg42934.html

we agreed to use "latest" images for development branches and fixed
images for stable branches

Can be backported to 2.6.
2022-12-14 15:29:42 +01:00
Ilya Shipitsin
6dedeb70da CI: github: remove redundant ASAN loop
It was there because we only ran ASAN for clang; now there is no need for
a separate loop.

Can be backported to 2.6.
2022-12-14 15:29:20 +01:00
Amaury Denoyelle
d2c5ee665e BUG/MEDIUM: h3: parse content-length and reject invalid messages
Ensure that if a request contains a content-length header it matches
the total size of the following DATA frames. This is in conformance with
the HTTP/3 RFC 9114.
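
Conceptually the check is the following (field names are hypothetical):

  /* accumulate the payload size over every DATA frame of the message */
  h3s->data_len += frame_len;

  /* at the end of the message, it must match the advertised content-length */
  if (h3s->has_clen && h3s->data_len != h3s->clen)
          goto err;  /* currently results in a connection close */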

For the moment, this kind of error triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.

This must be backported up to 2.6. It relies on the previous commit :
  MINOR: http: extract content-length parsing from H2
2022-12-14 11:37:12 +01:00
Amaury Denoyelle
15f3cc4b38 MINOR: http: extract content-length parsing from H2
Extract the function h2_parse_cont_len_header() into the generic HTTP
module. This allows it to be reused by all HTTP/x parsers. The function is
now available as http_parse_cont_len_header().

Most notably, this will be reused in the next bugfix for the H3 parser.
This is necessary to check that the content-length header matches the
length of DATA frames.

Thus, it must be backported to 2.6.
2022-12-14 11:34:18 +01:00
Amaury Denoyelle
7b5a671fb8 BUG/MEDIUM: h3: reject request with invalid pseudo header
RFC 9114 dictates several requirements for pseudo header usage in H3
requests. Previously only minimal checks were implemented. Enforce all
the following requirements with this patch :
* reject request with undefined or invalid pseudo header
* reject request with duplicated pseudo header
* reject non-CONNECT request with missing mandatory pseudo header
* reject request with pseudo header after standard ones

For the moment, this kind of error triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.

This must be backported up to 2.6.
2022-12-14 11:34:18 +01:00
Amaury Denoyelle
d6fb7a0e0f BUG/MEDIUM: h3: reject request with invalid header name
Reject requests containing an invalid header name. This concerns every
header containing an uppercase letter or a character that is not an HTTP
token, such as a space.

For the moment, this kind of error triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.

Thanks to Yuki Mogi from FFRI Security, Inc. for having reported this.

This must be backported up to 2.6.
2022-12-14 11:34:18 +01:00
William Lallemand
f98b3b1107 REGTESTS: startup: add alternatives values in automatic_maxconn.vtc
The calculated maxconn could produce other values when haproxy is compiled
with debug options.

Must be backported where 6b6f082 was backported (as far as 2.5).
2022-12-14 11:16:51 +01:00
Christopher Faulet
819d48b14e BUG/MEDIUM: resolvers: Use tick_first() to update the resolvers task timeout
In resolv_update_resolvers_timeout(), the resolvers task timeout is updated
by checking running and waiting resolutions. However, to find the next
wakeup date, the MIN() operator is used to compare ticks. Ticks must never
be compared with such operators; tick helper functions must be used to
properly handle the TICK_ETERNITY value. In this case, tick_first() must
be used instead of MIN().
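
The difference boils down to this (simplified; the expiration variable
names are illustrative):

  /* wrong: TICK_ETERNITY (0) always wins a MIN() comparison */
  t->expire = MIN(exp_running, exp_waiting);

  /* right: tick_first() knows that TICK_ETERNITY means "no date" */
  t->expire = tick_first(exp_running, exp_waiting);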

It is an old bug but it is pretty visible since the commit fdecaf6ae4
("BUG/MINOR: resolvers: do not run the timeout task when there's no
resolution"). Because of this bug, the resolvers task timeout may be set to
TICK_ETERNITY, stopping periodic resolutions.

This patch should solve the issue #1962. It must be backported to all stable
versions.
2022-12-14 10:44:17 +01:00
Christopher Faulet
427293420f BUG/MEDIUM: freq-ctr: Don't compute overshoot value for empty counters
The function computing the excess of events of a frequency counter over the
current period for a target frequency must handle empty counters (no event
and no start date). In this case, no excess must be reported.

Because of this bug, long pauses may be experienced on the bandwidth
limitation filter.

This patch must be backported to 2.7.
2022-12-14 10:44:17 +01:00
William Lallemand
04007cb08d CLEANUP: ssl: remove check on srv->proxy
Remove a useless check on srv->proxy which triggers a coverity warning.

Should fix issue #1965.
2022-12-14 10:36:31 +01:00
Thayne McCombs
02cf4ecb5a MINOR: sample: add param converter
Add a converter that extracts a parameter from a string of delimited
key/value pairs.

Fixes: #1697
2022-12-14 08:24:15 +01:00
William Lallemand
6b6f082969 REGTESTS: startup: activate automatic_maxconn.vtc
Check if USE_OBSOLETE_LINK=1 was used so this test can run when
ASAN is not built, since ASAN requires this option.

For this test to work, the ulimit -n value must be big enough.

Could be backported at least to 2.5.
2022-12-14 00:32:06 +01:00
William Lallemand
2cb1493748 CI: github: set ulimit -n to a greater value
Set ulimit -n to 65536 so that the maxconn computation is less limited.

Could be backported at least to 2.5.
2022-12-14 00:31:19 +01:00
William Lallemand
2a225390eb REGTESTS: startup: change the expected maxconn to 11000
Change the expected maxconn from 10000 to 11000 in
automatic_maxconn.vtc.

To be backported only if the test fails; the value might be the right
one in previous versions.
2022-12-14 00:28:23 +01:00
William Lallemand
0adafb307e BUG/MINOR: startup: don't use internal proxies to compute the maxconn
With internal proxies that have SSL activated (the httpclient for example),
the automatic computation of the maxconn is wrong because these proxies
are always activated by default.

This patch fixes the issue by not counting these internal proxies during
the computation.

Must be backported as far as 2.5.
2022-12-13 18:28:29 +01:00
William Lallemand
38c5b6ea97 REGTESTS: startup: check maxconn computation
Check the maxconn computation with multiple -m parameters.

Broken with ASAN for now.

Could be backported as far as 2.2.
2022-12-13 17:51:25 +01:00
Ilya Shipitsin
4a04cd35ae CI: github: split ssl lib selection based on git branch
When the *SSL_VERSION="latest" behaviour was introduced, it seemed to be
fine for development branches, but too intrusive for stable branches.

Let us limit the "latest" semantics to development builds only: if the
branch name contains "haproxy-" it is supposed to be a stable branch, and
no latest openssl should be taken.

[wla: must be backported as far as 2.6]
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
2022-12-12 16:20:48 +01:00
Amaury Denoyelle
4b167006fd BUG/MINOR: mux-quic: handle properly alloc error in qcs_new()
Use qcs_free() on allocation failure in qcs_new(). This ensures that all
qcs content is properly deallocated and prevents memleaks. Most notably,
the qcs instance is now removed from the qcc tree.

This bug is labelled as MINOR as it occurs only on qcs allocation
failure due to memory exhaustion.

This must be backported up to 2.6.
2022-12-12 14:54:39 +01:00
Amaury Denoyelle
641a65ff3c BUG/MINOR: mux-quic: remove qcs from opening-list on free
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. An instance is removed from the list once a full HTTP
request has been parsed. This is required to implement the http-request
timeout.

If a qcs instance is freed before receiving a full HTTP request, it must
be removed from the <qcc.opening_list>. Else a segfault will occur in
qcc_refresh_timeout() when accessing a dangling pointer.

For the moment this bug was not reproduced in production. This is
because there exist only a few rare cases where a qcs is freed before the
HTTP request has been parsed. However, as error detection will be improved
on H3, this will occur more frequently in the near future.

This must be backported up to 2.6.
2022-12-12 14:54:39 +01:00
Amaury Denoyelle
6eb3c4b71c CLEANUP: mux-quic: remove unused attribute on qcs_is_close_remote()
qcs_is_close_remote() is used in qcc_decode_qcs(). Thus the unused
function attribute is now unneeded.

This can be backported up to 2.7.
2022-12-12 14:54:39 +01:00
Amaury Denoyelle
4244833c5f BUG/MINOR: quic: handle alloc failure on qc_new_conn() for owned socket
This patch is the follow up of previous fix :
  BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()

The quic_conn owned socket FD is initialized as soon as possible in
qc_new_conn(). This guarantees that we can safely call
quic_conn_release() on allocation failure. This function internally uses
qc_release_fd() to free the socket FD, unless it is still set to an
invalid FD value.

Without this patch, a segfault will occur if one inner allocation of
qc_new_conn() fails before qc.fd is initialized.

This change is linked to quic-conn owned socket implementation.
This should be backported up to 2.7.
2022-12-12 14:53:55 +01:00
Amaury Denoyelle
dbf6ad470b BUG/MINOR: quic: properly handle alloc failure in qc_new_conn()
qc_new_conn() is used to allocate a quic_conn instance and its various
internal members. If one allocation fails, quic_conn_release() is used
to clean things up.

For the moment, pool_zalloc() is used, which ensures that all content is
null. However, some members must be initialized to special values to be
able to use quic_conn_release() safely. This is the case for the
quic_conn lists and its tasklet.

Also, some quic_conn internal allocation functions were doing their own
cleanup on failure without resetting pointers to NULL. This caused an
issue with quic_conn_release() which also frees these members. To fix
this, these functions now only return an error without cleanup. It is the
caller's responsibility to free the allocated content, which is done via
quic_conn_release().
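
The resulting shape of the error handling is roughly this (member and
helper names are illustrative, not the actual ones):

  qc = pool_zalloc(pool_head_quic_conn);
  if (!qc)
          return NULL;

  /* pool_zalloc() gives NULL pointers, but members walked by
   * quic_conn_release() need usable "empty" values right away
   */
  LIST_INIT(&qc->some_list);  /* illustrative list member */
  qc->fd = -1;                /* marker for "no socket allocated yet" */

  if (!allocate_internal_members(qc))  /* hypothetical helper */
          goto err;
  return qc;

 err:
  quic_conn_release(qc);  /* single cleanup path, safe on partial init */
  return NULL;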

Without this patch, an allocation failure in qc_new_conn() would often
result in a segfault. This was reproduced easily using fail-alloc at 10%.

This should be backported up to 2.6.
2022-12-12 11:44:34 +01:00
William Lallemand
393e4e4dd1 CI: github: reintroduce openssl 1.1.1
OpenSSL 1.1.1 is not tested anymore since github updated "ubuntu-latest"
to 22.04, so let's reintroduce this version.
2022-12-12 08:52:03 +01:00
Christopher Faulet
e1b866a28a REGTESTS: fix the race conditions in iff.vtc
A "Connection: close" header is added to responses to avoid any connection
reuse. This should avoid any "HTTP header incomplete" errors.
2022-12-09 17:11:22 +01:00
Youfu Zhang
2e6bf0a272 BUG/MAJOR: fcgi: Fix uninitialized reserved bytes
The output buffer is not zero-initialized. If we don't clear the reserved
bytes, fcgi requests sent to the backend will leak sensitive data.
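
For reference, a FCGI record header serialized by hand looks like this;
the fix amounts to explicitly writing the reserved byte instead of leaving
whatever was already in the output buffer (simplified sketch):

  out[0] = 1;                  /* version */
  out[1] = type;               /* record type */
  out[2] = (id >> 8) & 0xff;   /* request id */
  out[3] = id & 0xff;
  out[4] = (len >> 8) & 0xff;  /* content length */
  out[5] = len & 0xff;
  out[6] = padding;            /* padding length */
  out[7] = 0;                  /* reserved: must not leak stale buffer bytes */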

This patch must be backported as far as 2.2.
2022-12-09 12:23:14 +01:00
Christopher Faulet
7edec90c00 DOC: promex: Add missing backend metrics
"haproxy_backend_agg_server_status" and "haproxy_backend_agg_check_status"
were not referenced in promex README.

"haproxy_backend_agg_server_check_status" is also missing but it is a
deprecated metric. Thus, it is better to not reference it.
2022-12-09 11:04:12 +01:00
Cedric Paillet
e06e31ea3b MINOR: promex: introduce haproxy_backend_agg_check_status
This patch introduces the haproxy_backend_agg_check_status metric,
as we wanted in 42d7c402d, but with the right data source.

This patch could be backported as far as 2.4.
2022-12-09 10:54:48 +01:00
Cedric Paillet
7d6644e689 BUG/MINOR: promex: create haproxy_backend_agg_server_status
haproxy_backend_agg_server_check_status currently aggregates
haproxy_server_status instead of haproxy_server_check_status.
We deprecate this and create a new one,
haproxy_backend_agg_server_status to clarify what it really
does.

This patch could be backported as far as 2.4.
2022-12-09 10:54:27 +01:00
Willy Tarreau
9192d20f02 MINOR: pools: make DEBUG_UAF a runtime setting
Since the massive pools cleanup that happened in 2.6, the pools
architecture was made quite a bit more hierarchical and many alternate code
blocks could be moved to runtime flags set by -dM. One of them had not
been converted by then, DEBUG_UAF. It's not much more difficult actually,
since it only acts on a pair of function indirections on the slow path
(OS-level allocator) and a default setting for the cache activation.

This patch adds the "uaf" setting to the options permitted in -dM so
that it now becomes possible to set or unset UAF at boot time without
recompiling. This is particularly convenient, because every 3 months on
average, developers ask a user to recompile haproxy with DEBUG_UAF to
understand a bug. Now it will not be needed anymore, instead the user
will only have to disable pools and enable uaf using -dMuaf. Note that
-dMuaf only disables previously enabled pools, but it remains possible
to re-enable caching by specifying the cache after, like -dMuaf,cache.
A few tests with this mode show that it can be an interesting combination
which catches significantly less UAF but will do so with much less
overhead, so it might be compatible with some high-traffic deployments.

The change is very small and isolated. It could be helpful to backport
this at least to 2.7 once confirmed not to cause build issues on exotic
systems, and even to 2.6 a bit later as this has proven to be useful
over time, and could be even more if it did not require a rebuild. If
a backport is desired, the following patches are needed as well:

  CLEANUP: pools: move the write before free to the uaf-only function
  CLEANUP: pool: only include pool-os from pool.c not pool.h
  REORG: pool: move all the OS specific code to pool-os.h
  CLEANUP: pools: get rid of CONFIG_HAP_POOLS
  DEBUG: pool: show a few examples in -dMhelp
2022-12-08 18:54:59 +01:00
Willy Tarreau
b634987fed DEBUG: pool: show a few examples in -dMhelp
It's not always easy to remember what certain options do together nor
which ones are only relevant when combined with others, so let's add a
few examples with the "help" command on -dM.
2022-12-08 18:45:41 +01:00
Willy Tarreau
4da51bd190 CLEANUP: pools: get rid of CONFIG_HAP_POOLS
This one was set in defaults.h only when neither DEBUG_NO_POOLS nor
DEBUG_UAF were set. This was not the most convenient location to look
for it, and it was only used in pool.c to decide on the initial value
of POOL_DBG_NO_CACHE.

Let's just use DEBUG_NO_POOLS || DEBUG_UAF directly on this flag and
get rid of the intermediary condition. This also has the benefit of
removing a double inversion, which is always nice for understanding.
2022-12-08 17:45:08 +01:00
Willy Tarreau
a95636682d REORG: pool: move all the OS specific code to pool-os.h
Till now pool-os used to contain a mapping from pool_{alloc,free}_area()
to pool_{alloc,free}_area_uaf() in case of DEBUG_UAF, or the regular
malloc-based function. And the *_uaf() functions were in pool.c. But
since 2.4 with the first cleanup of the pools, there have been no more
calls to pool_{alloc,free}_area() from anywhere but pool.c, from exactly
one place each. As such, there's no more need to keep *_uaf() apart in
pool.c, we can inline it into pool-os.h and leave all the OS stuff there,
with pool.c calling either based on DEBUG_UAF. This is cleaner, with fewer
round trips between both files, and easier to find.
2022-12-08 17:32:57 +01:00
Willy Tarreau
76a97a98ca CLEANUP: pool: only include pool-os from pool.c not pool.h
There's no need for the low-level pool functions to be known from all
callers anymore, they're only used by pool.c. Let's reduce the amount
of header files processed.
2022-12-08 17:32:40 +01:00
Willy Tarreau
67f89c527f CLEANUP: pools: move the write before free to the uaf-only function
In UAF mode, pool_put_to_os() performs a write to the about-to-be-freed
memory area so as to make sure the page is properly mapped and catch a
possible double-free. However there's no point keeping that in an ifdef
in the generic function, because we now have a pool_free_area_uaf()
that is the UAF-specific version of pool_free_area() and the one that
is called immediately after this write. Let's move the code there, it
will be cleaner.
2022-12-08 16:08:28 +01:00
William Lallemand
94dbfedec1 BUG/MEDIUM: httpclient/lua: double LIST_DELETE on end of lua task
The lua httpclient cleanup can be called in 2 places, the
hlua_httpclient_gc() and the hlua_httpclient_destroy_all().

A LIST_DELETE() is performed to remove the hlua_hc struct from the list.
However, when the lua task ends and calls hlua_ctx_destroy(), it does a
LIST_DELETE() first, and then the gc tries to do a LIST_DELETE() again
in hlua_httpclient_gc(), provoking a crash.

This patch fixes the issue by doing a LIST_DEL_INIT() instead of
LIST_DELETE() in both cases.
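
The difference between the two macros is what makes the second removal
harmless (the member name below is illustrative):

  /* LIST_DELETE() unlinks the element but leaves its pointers untouched,
   * so deleting it a second time dereferences stale neighbours.
   * LIST_DEL_INIT() unlinks it and makes it point back to itself,
   * so a second deletion is a no-op.
   */
  LIST_DEL_INIT(&hlua_hc->by_hlua);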

Should fix issue #1958.

Must be backported where bb58142 is backported.
2022-12-08 11:30:03 +01:00