Commit Graph

22716 Commits

Christopher Faulet
228f104668 DOC: config: Explicitly list relaxing rules for accept-invalid-http-* options
From time to time, new exceptions are added to the HTTP parsing (most of
the time for H1) so as not to reject some invalid messages sent by legacy
applications. But the documentation of the accept-invalid-http-request
and accept-invalid-http-response options was not very clear. So, now,
there is an explicit list of relaxing rules for both options.

(cherry picked from commit 0f4fad5291027a7dfc8109fbbe2acd0bac8affd0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:28:58 +02:00
Aurelien DARRAGON
9a923de45d BUG/MINOR: peers: local entries updates may not be advertised after resync
Since commit 864ac3117 ("OPTIM: stick-tables: check the stksess without
taking the read lock"), when entries for a local table are learned from
another peer upon resync, and this is the only peer haproxy speaks to,
local updates on such entries are no longer advertised to the peer,
until they eventually expire and can be recreated upon local updates.

This is due to the fact that ts->seen is always set to 0 when creating a
new entry, and also when touch_remote is performed on the entry.

Indeed, while 864ac3117 attempts to avoid useless updates, it didn't
consider entries learned from a remote peer. Such entries are exclusively
learned in peer_treat_updatemsg(): once the entry is created (or updated)
with new data, touch_remote is used to commit the change. However, unlike
touch_local, entries committed using touch_remote will not be advertised
to the peer from which the entry was just learned (otherwise we would
enter a looping situation). Due to the above patch, once an entry is
learned from the (unique) remote peer, 'seen' will be stuck at 0, so it
will never be advertised for its whole lifetime.

Instead, when entries are learned from a peer, we should consider that
the peer that taught us the entry has seen it.

To do this, let's set seen=1 in peer_treat_updatemsg() after calling
touch_remote(). This way, if we happen to perform updates on this entry,
it will be properly advertised to relevant peers. This patch should not
affect the performance gain documented in 864ac3117 given that the test
scenario didn't involve entries learned by remote peers, but solely
locally created entries advertised to remote peers upon updates.
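
As an illustration, the change amounts to something like the following
sketch (simplified; the exact field and helper names may differ from the
actual patch):

    /* sketch: in peer_treat_updatemsg(), after the entry learned from
     * the peer has been committed via touch_remote
     */
    stktable_touch_remote(table, ts, 1);
    HA_ATOMIC_STORE(&ts->seen, 1);  /* the teaching peer has seen it */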

This should be backported to 3.0 along with 864ac3117.

(cherry picked from commit 1e0920f85542d63755cb8824d1000c3a6f22bb9c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:28:34 +02:00
Willy Tarreau
36ea5570d2 BUG/MEDIUM: queue: implement a flag to check for the dequeuing
As unveiled in GH issue #2711, commit 5541d4995d ("BUG/MEDIUM: queue:
deal with a rare TOCTOU in assign_server_and_queue()") does have some
side effects in that it can occasionally cause an endless loop.

As Christopher analysed it, the problem is that process_srv_queue(),
which uses a trylock in order to leave only one thread in charge of
the dequeueing process, can lose the lock race against pendconn_add().
If this happens on the last served request, then there's no more thread
to deal with the dequeuing, and assign_server_and_queue() will loop
forever on a condition that was initially expected to be extremely
rare (and still is, except that now it can become sticky). Previously,
such queued requests would simply time out, and since that was very
rare, nobody would notice.

The root of the problem really is that trylock. It was added so that
only one thread dequeues at a time, but that's not the only effect it
has: it also prevents a thread from dequeuing if another one is in the
process of queuing. We need a different criterion.

What we're doing now is to set a flag "dequeuing" in the server, which
indicates that one thread is currently in the process of dequeuing
requests. This one is atomically tested, and only if no thread is in
this process, then the thread grabs the queue's lock and dequeues.
This way it will be serialized with pendconn_add() and no request
addition will be missed.
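
A minimal sketch of this logic, assuming a hypothetical "dequeuing" flag
bit and HAProxy's atomic helpers:

    /* sketch: only the thread that atomically sets the flag dequeues */
    if (!HA_ATOMIC_BTS(&srv->flags, SRV_F_DEQUEUING_BIT)) {
        HA_SPIN_LOCK(QUEUE_LOCK, &srv->queue.lock);
        /* ... run the dequeuing process ... */
        HA_SPIN_UNLOCK(QUEUE_LOCK, &srv->queue.lock);
        HA_ATOMIC_AND(&srv->flags, ~(1UL << SRV_F_DEQUEUING_BIT));
    }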

It is not certain whether the original race covered by the fix above
can still happen with this change, so better keep that fix for now.

Thanks to @Yenya (Jan Kasprzak) for the precise and complete report
that allowed us to spot the problem.

This patch should be backported wherever the patch above was backported.

(cherry picked from commit b11495652e724d71f1f4247332f060fe48577664)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:27:58 +02:00
Willy Tarreau
dbd3d4c720 BUG/MINOR: clock: validate that now_offset still applies to the current date
We want to make sure that now_offset is still valid for the current
date: another thread could very well have updated it after detecting a
backwards jump, and at the very same moment the time may have been fixed
again, so that the date we retrieve and add to the new offset results in
a larger jump. Normally, for this to happen, it would mean that
before_poll was also affected by the jump, was detected first and was
bounded within 2 seconds, resulting in at most 2 seconds of perturbation.

Here we try to detect this situation and fall back to re-adjusting the
offset instead.

It's more of a strengthening of what's done by commit e8b1ad4c2b
("BUG/MEDIUM: clock: also update the date offset on time jumps") than a
pure fix, in that the issue was not directly observed but is visibly
possible from reading the code, so this should be backported along with
the patch above. This is related to issue GH #2704.

Note that this could be simplified in terms of operations by migrating
the deadlines to nanoseconds, but this was the least intrusive path.

(cherry picked from commit adaba6f904d11424327f90c3ab4df085c26fc8c4)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:27:43 +02:00
Willy Tarreau
6eac661927 BUG/MINOR: clock: make time jump corrections a bit more accurate
Since commit e8b1ad4c2b ("BUG/MEDIUM: clock: also update the date offset
on time jumps") we try to update the now_offset based on the last known
valid date. But if it's off compared to the global_now_ns date shared
by other threads, we'll get the time off a little bit. When this happens,
we should consider the most recent of these dates: if the global date
was already known to be more recent, we should use it and stick to it.
This avoids setting too large an offset that could in turn provoke a
larger jump on another thread.
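
In other words, the adjustment conceptually becomes (sketch, not the
literal code):

    /* sketch: stick to the most recent known date as the reference */
    if (global_now_ns > new_now_ns)
        new_now_ns = global_now_ns;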

This is related to issue GH #2704.

This can be backported to other branches having the patch above.

(cherry picked from commit af48e4cc6b315cadb80d6980586af5c3c8368826)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:27:01 +02:00
Willy Tarreau
250c436ff7 BUG/MINOR: polling: fix time reporting when using busy polling
Since commit beb859abce ("MINOR: polling: add an option to support
busy polling") the time and status passed to clock_update_local_date()
were incorrect. Indeed, what was considered was the before_poll date
combined with the configured timeout, which does not correspond to what
is passed to the poller. That's not correct, because before_poll plus
the syscall's timeout will be crossed by the current date 100 ms after
the start of the poller. In practice it didn't happen when the poller
was limited to a 1s timeout, but at one minute it happens all the time.

That's particularly visible when running a multi-threaded setup with
busy polling and only half of the threads working (bind ... thread even).
In this case, the fixup code of clock_update_local_date() is executed
for each round of busy polling. The issue was made really visible
starting with recent commit e8b1ad4c2b ("BUG/MEDIUM: clock: also
update the date offset on time jumps") because upon a jump, the
shared offset is reset, while it should not be in this specific
case.

What needs to be done instead is to pass the configured timeout of
the poller (and not of the syscall), and always pass "interrupted"
set so as to claim we got an event (which is sort of true as it just
means the poller returned instantly). In this case we can still
detect backwards/forward jumps and will use a correct boundary
for the maximum date that covers the whole loop.
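
Conceptually, the call becomes something like this (sketch with
simplified arguments):

    /* sketch: pass the poller's configured timeout, not the syscall's,
     * and claim an event so a correct upper bound is used
     */
    clock_update_local_date(wait_time, /* interrupted */ 1);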

This can be backported to all versions since the issue was introduced
with busy-polling in 1.9-dev8.

(cherry picked from commit ad98edd00accdf5529da77eb79b72cf9fc025a7d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:26:43 +02:00
Christopher Faulet
ba9d151c46 MEDIUM: h1: Accept invalid T-E values with accept-invalid-http-response option
Since 2.6, a parsing error is reported when the chunked encoding is
found twice. As stated in RFC 9112, a sender must not apply the chunked
transfer coding more than once to a message body. It means only one
chunked coding must be found. In addition, empty values are also
rejected because this is forbidden by RFC 9110.

However, in both cases, it may be useful to relax the rules for trusted
legacy servers when the accept-invalid-http-response option is set,
especially because these values were accepted by 2.4 and older. In
addition, the T-E header is now sanitized before being sent. This is not
a problem because it is a hop-by-hop header.
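
For reference, the relaxing is opt-in per proxy; a configuration sketch
for such a trusted legacy server could look like:

    backend legacy_app
        # accept some invalid responses (e.g. duplicated or empty T-E
        # values) from trusted legacy servers
        option accept-invalid-http-response
        server s1 192.0.2.10:8080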

Note that it remains invalid on the client side because there is no good
reason to relax the parsing there. We can argue a server is trusted, so
we can decide to support some legacy behavior. The same is not true on
the client side, and it is highly suspicious if a client sends an
invalid T-E header.

Note also that we continue to reject unsupported T-E values (i.e. all
codings except "chunked"). Because the "TE" header is sanitized and
cannot contain any value other than "trailers", there is absolutely no
reason for a server to use something else.

This patch should fix issue #2677. It could probably be backported as
far as 2.6 if necessary.

(cherry picked from commit 1900ca475fa005b8000c4652457fedf282fa864d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-17 09:26:33 +02:00
Willy Tarreau
6cae9b3ead BUG/MINOR: pattern: do not leave a leading comma on "set" error messages
Commit 4f2493f355 ("BUG/MINOR: pattern: pat_ref_set: fix UAF reported by
coverity") dropped the condition to concatenate error messages and as
such introduced a leading comma in front of all of them. Then commit
911f4d93d4 ("BUG/MINOR: pattern: pat_ref_set: return 0 if err was found")
changed the behavior to stop at the first error anyway, so all the
mechanics dedicated to the concatenation of error messages is no longer
needed and we can simply return the error as-is, without inserting any
comma.

This should be backported where the patches above are backported.

(cherry picked from commit 9f8d9c9e8bdef8e7b8f7d1d641987b4d0dcae988)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-10 09:07:27 +02:00
Christopher Faulet
550ea03db2 BUG/MINOR: h1-htx: Don't flag response as bodyless when a tunnel is established
This reverts commit 225a4d02e1.

When a 200-OK response is replied to a CONNECT request or a
101-Switching-Protocols, a tunnel is considered as established between
the client and the server. However, we must not declare the response as
bodyless. Of course, there is no payload, but tunneled data are expected.

Because of this bug, the zero-copy forwarding is disabled on the server
side.

This patch must be backported as far as 2.9.

(cherry picked from commit a99d58819f8184fb6f500807ddaf2371c5f1c110)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Christopher Faulet
0be5e36d8c BUG/MAJOR: mux-h1: Wake SC to perform 0-copy forwarding in CLOSING state
When the mux is woken up on I/O events, if the zero-copy forwarding is
enabled, receives are blocked. In this case, the SC is woken up to be able
to perform 0-copy forwarding to the other side. This works well, except for
the H1C in CLOSING state.

Indeed, in that case, in h1_process(), the SC is not woken up because
only RUNNING H1 connections are considered. As a consequence, the mux
will ignore the connection closure. The H1 connection remains blocked,
waiting for the shutdown timeout. If no timeout is configured, the H1
connection is never closed, leading to a leak.

This patch should fix the leak reported by Damien Claisse in issue
#2697. It should be backported as far as 2.8.

(cherry picked from commit f6e193f1b05bc3a73090b1efeb3f0440a0c776b0)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Aurelien DARRAGON
fc0af6f2bb BUG/MEDIUM: pattern: prevent UAF on reused pattern expr
Since c5959fd ("MEDIUM: pattern: merge same pattern"), a UAF (leading to
a crash) can be experienced if the same pattern file (and match method)
is used in two default sections and the first one is not referenced
later in the config. In this case, the first default section will be
cleaned up. However, due to an unhandled case in the above optimization,
the original expr which the second default section relies on is
mistakenly freed.

This issue was discovered while trying to reproduce GH #2708. The issue
was particularly tricky to reproduce given the config and sequence
required to make the UAF happen. Fortunately, GitHub user @asmnek not
only provided useful information, but since he was able to consistently
trigger the crash in his environment, he was able to nail the crash down
to the use of a pattern file involved with 2 named default sections. Big
thanks to him.

To fix the issue, let's push the logic from c5959fd a bit further.
Instead of relying on the "do_free" variable to know if the expression
should be freed or not (which proved to be insufficient in our case),
let's switch to a simple refcounting logic. This way, no matter who owns
the expression, the last one attempting to free it will be responsible
for actually freeing it. The refcount is implemented using a 32-bit
value which fills a previous 4-byte hole in the structure:

        int                        mflags;               /*    80     4 */

        /* XXX 4 bytes hole, try to pack */

        long unsigned int          lock;                 /*    88     8 */
(output from pahole)
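
The resulting logic is roughly the following (sketch; the actual field
and function names may differ):

    /* sketch: take a reference each time the expression is shared */
    expr->refcount++;

    /* sketch: on release, only the last owner actually frees it */
    if (--expr->refcount == 0)
        free_pattern_expr(expr);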

Even though it was not reproduced on 2.6 or below by @asmnek (the bug
was revealed thanks to another bugfix), this issue theoretically affects
all stable versions (back to c5959fd), thus it should be backported to
all stable versions.

(cherry picked from commit 68cfb222b559b909415c9aa92114ca42c9a93459)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Aurelien DARRAGON
1f06ee2f67 BUG/MINOR: pattern: prevent const sample from being tampered in pat_match_beg()
This is a complementary patch to a68affeaa ("BUG/MINOR: pattern: a sample
marked as const could be written"). Indeed the same logic from
pat_match_str() is used there, but we lack the check to ensure that the
sample is not const before writing data to it.

It could be backported to all stable versions.

(cherry picked from commit 3449525a0204841a62a9fa41119ec8c47f21fde8)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Willy Tarreau
4c656153b3 BUG/MEDIUM: clock: detect and cover jumps during execution
After commit e8b1ad4c2 ("BUG/MEDIUM: clock: also update the date offset
on time jumps"), @firexinghe mentioned that the issue was still present
in their case. In fact it depends on the load, which affects the
probability that the time changes between two poll() calls vs that it
changes during poll(). The time correction code used to only deal with
the latter. But under load if it changes between two poll() calls, what
happens then is that before_poll is off, and after returning from poll(),
the date is within bounds defined by before_poll, so no correction is
applied.

After many tests, it turns out that the most reliable solution without
using CLOCK_MONOTONIC is to prevent before_poll from being earlier than
the previous after_poll (trivial), and to cover forward jumps, we need
to enforce a margin. Given that the watchdog kills a looping task within
2 seconds and that no sane setup triggers it, it seems that 2 seconds
remains a safe enough margin. This means that in the worst case, some
forward jumps of up to 2 seconds will not be corrected, leading to an
apparent fast time and low rates. But this is supposed to be an exceptional
event anyway (typically an admin or crontab running ntpdate).
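
The resulting bounding can be summarized as follows (sketch with
hypothetical variable names):

    /* sketch: before_poll may never be older than the previous
     * after_poll, and anything beyond the expected wait time plus a
     * 2-second margin is treated as a forward jump
     */
    if (before_poll_ns < prev_after_poll_ns)
        before_poll_ns = prev_after_poll_ns;
    if (now_ns > before_poll_ns + max_wait_ns + 2000000000ULL)
        fix_time_offset();  /* hypothetical helper */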

For future versions, given that we now opportunistically call
now_mono_time() before and after poll(), which returns zero if not
supported, we could imagine relying on it for the thread's local time
when it's non-null.

(cherry picked from commit ef8d8215de2ed03a08172bbccb1146e5b867ce74)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Willy Tarreau
07f1630a77 REGTESTS: fix random failures with wrong_ip_port_logging.vtc under load
This test has an expect rule for syslog that looks for [cC]D, to
indicate a client abort or timeout during the data phase. The purpose
was to say that when it fails it must be this, but the very low timeout
(1ms) still makes it prone to succeeding if the machine is highly loaded.

This has become more visible since commit e8b1ad4c2b ("BUG/MEDIUM: clock:
also update the date offset on time jumps") because the clock drift
adjustments are more systematic. Since this commit, running 50
iterations of this test with a parallelism of twice the number of CPUs
is sufficient to yield errors due to some lines appearing as succeeding:

   make reg-tests -- --j $((($(nproc)+1)*2)) --vtestparams -n50 reg-tests/log/wrong_ip_port_logging.vtc

Pauses of up to 300ms were observed in epoll_wait() in such
circumstances, which were properly fixed by the time drift detection.
Another approach would consist in increasing the permitted margin during
which we don't fix the clock drift, but that would not be logical since
the base time really had been awaited for.

This should be backported to all stable releases since the commit above
will trigger the issue more often.

(cherry picked from commit 036ab62231003e7e7072986626597682725edff8)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Willy Tarreau
ef35df8349 DOC: configuration: place the HAPROXY_HTTP_LOG_FMT example on the correct line
When HAPROXY_HTTP_LOG_FMT was added by commit 537b9e7f36 ("MINOR: config:
add environment variables for default log format"), the example was placed
by accident after the clf log format instead of the HTTP log format,
causing a bit of confusion.

This can be backported to 2.8.

(cherry picked from commit c22fc591d428c55d76b2e18799fca804fafaf558)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-09 19:44:33 +02:00
Frederic Lecaille
d317fc6b21 BUG/MINOR: quic: Too short datagram during packet building failures (aws-lc only)
This issue was reported by Ilya (@Chipitsine) when building haproxy
against aws-lc in GH #2663, where the handshakeloss and
handshakecorruption interop tests could lead haproxy to crash after
having built too short datagrams:

FATAL: bug condition "first_pkt->type == QUIC_PACKET_TYPE_INITIAL && (first_pkt->flags & (1UL << 0)) && length < 1200" matched at src/quic_tx.c:163
call trace(13):
| 0x55f4ee4dcc02 [ba d9 00 00 00 48 8d 35]: main-0x195bf2
| 0x55f4ee4e3112 [83 3d 2f 16 35 00 00 0f]: qc_send+0x11f3/0x1b5d
| 0x55f4ee4e9ab4 [85 c0 0f 85 00 f6 ff ff]: quic_conn_io_cb+0xab1/0xf1c
| 0x55f4ee6efa82 [48 c7 c0 f8 55 ff ff 64]: run_tasks_from_lists+0x173/0x9c2
| 0x55f4ee6f05d3 [8b 7d a0 29 c7 85 ff 0f]: process_runnable_tasks+0x302/0x6e6
| 0x55f4ee671bb7 [83 3d 86 72 44 00 01 0f]: run_poll_loop+0x6e/0x57b
| 0x55f4ee672367 [48 8b 1d 22 d4 1d 00 48]: main-0x48d
| 0x55f4ee6755e0 [b8 00 00 00 00 e8 08 61]: main+0x2dec/0x335d

This could happen after Handshake packet building failures which follow
a successfully built Initial packet in the same datagram. In this case,
the datagram could be emitted with a too short length (<1200 bytes).

To fix this, store the datagram only if the first packet is not an
Initial packet or if its length is big enough (>=1200 bytes).
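
The guarding condition thus looks like this (sketch using the names
from this message; the storing helper is hypothetical):

    /* sketch: only store the datagram if it does not start with an
     * Initial packet, or if it reaches the 1200-byte minimum
     */
    if (first_pkt->type != QUIC_PACKET_TYPE_INITIAL || dlen >= 1200)
        store_datagram(buf, dlen);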

Must be backported as far as 2.6.

(cherry picked from commit eb1a097a6687cd8b93d981eff3d7e257c448ff3e)
[fl: same patch with <wrlen> replaced by <dlen> (datagram length)]
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-06 11:23:40 +02:00
Frederic Lecaille
763a0367a1 BUG/MINOR: quic: Crash from trace dumping SSL early data status (AWS-LC)
This bug follows this patch:
     MINOR: quic: Add trace for QUIC_EV_CONN_IO_CB event.
where a new third variable was added to be dumped from the
QUIC_EV_CONN_IO_CB trace event. The quic_trace() code did not reveal
that there was already another variable passed as third argument but not
dumped. This led to a crash when dereferencing a pointer to an int in
place of a pointer to an SSL object.

This issue was reproduced only by the handshakecorruption aws-lc interop
test with s2n-quic as client.

Note that this patch must be backported with this one:
     BUG/MEDIUM: quic: always validate sender address on 0-RTT
which depends on the commit mentioned above.

(cherry picked from commit db13df3d6e64e9993b90f048734538491082cbed)
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:54:16 +02:00
Frederic Lecaille
e875aa59a9 BUG/MEDIUM: quic: always validate sender address on 0-RTT
Wedl Michael, a student at the University of Applied Sciences
St. Poelten, reported a potential vulnerability in haproxy, described
below.

An attacker could have obtained a TLS session ticket after having
established a connection to an haproxy QUIC listener, using its real IP
address. The attacker does not even have to send an application level
(HTTP/3) request. Then the attacker could open a 0-RTT session with a
spoofed IP address trusted by the QUIC listener, to bypass IP
allow/block lists, and send HTTP/3 requests.

To mitigate this vulnerability, we decided to use a token which can be
provided to the client each time it successfully manages to connect to
haproxy. These tokens may be reused for future connections to validate
the address/path of the remote peer, contrary to the Retry token which
is used only for the current connection. Such tokens are transported by
NEW_TOKEN frames, which were not used until now by haproxy.

So, each time a client connects to an haproxy QUIC listener with 0-RTT
enabled, it is provided with such a token which can be reused for the
next 0-RTT session. If no such token is presented by the client, haproxy
checks if the session is a 0-RTT one, i.e. with early data presented by
the client. Contrary to the Retry token, the decision to refuse the
connection is made only when the TLS stack has been provided with enough
early data from the Initial ClientHello TLS message and when these data
have been accepted. Fortunately, this event arrives fast enough to allow
haproxy to kill the connection if some early data have been accepted
without a token presented by the client.

quic_build_post_handshake_frames() has been modified to build a NEW_TOKEN
frame with this newly implemented token to be transported inside.

quic_tls_derive_retry_token_secret() was renamed to
quic_do_tls_derive_token_secret() and modified to be reused and to
derive the secret for the new token implementation.

quic_token_validate() has been implemented to validate both the Retry and
the new token implemented by this patch. When this is a non-retry token
which could not be validated, the datagram received is marked as requiring
a Retry packet to be sent, and no connection is created.

When the Initial packet does not embed any non-retry token and if 0-RTT
is enabled, the connection is marked with this new flag:
QUIC_FL_CONN_NO_TOKEN_RCVD. As soon as the TLS stack detects that some
early data have been provided by the client and accepted, the connection
is marked to be killed (QUIC_FL_CONN_TO_KILL) from
ha_quic_add_handshake_data(). This is done by calling the new function
qc_ssl_eary_data_accepted(). The TLS handshake is interrupted as soon as
possible by returning 0 from ha_quic_add_handshake_data(). The
connection is also marked as requiring a Retry packet to be sent
(QUIC_FL_CONN_SEND_RETRY) from ha_quic_add_handshake_data(). Then the
handshake I/O handler (quic_conn_io_cb()) knows how to behave: kill the
connection after having sent a Retry packet.

About TLS stack compatibility, this patch is supported by aws-lc. It is
disabled for wolfssl, which does not support 0-RTT at this time, via
HAVE_SSL_0RTT_QUIC.

This patch depends on these commits:

     MINOR: quic: Add trace for QUIC_EV_CONN_IO_CB event.
     MINOR: quic: Implement qc_ssl_eary_data_accepted().
     MINOR: quic: Modify NEW_TOKEN frame structure (qf_new_token struct)
     BUG/MINOR: quic: Missing incrementation in NEW_TOKEN frame builder
     MINOR: quic: Token for future connections implementation.
     MINOR: quic: Implement quic_tls_derive_token_secret().
     MINOR: tools: Implement ipaddrcpy().

Must be backported as far as 2.6.

(cherry picked from commit f627b9272bd8ffca6f2f898bfafc6bf0b84b7d46)
[fl: Add ->flags to quic_dgram struct (would arrive with quic_initial feature).
     Add QUIC_DGRAM_FL_ quic_dgram flags (would arrive with quic_initial feature).
     Modify quic_rx_pkt_retrieve_conn() to fix a compilation issue and correctly
     handle the "if (pkt->token_len) {}" else block to do so with quic_initial
     feature]
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:42:48 +02:00
Frederic Lecaille
cc80919247 MINOR: quic: Add trace for QUIC_EV_CONN_IO_CB event.
Dump the early data status from QUIC_EV_CONN_IO_CB trace event.
This is very helpful to know if the QUIC server has accepted the
early data received from clients.

(cherry picked from commit 8854cef03672235addec6f3baafcc44f0e0441f4)
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:25:26 +02:00
Frederic Lecaille
7cae9b607f MINOR: quic: Implement qc_ssl_eary_data_accepted().
This function is a wrapper around SSL_get_early_data_status() for
OpenSSL-derived stacks and SSL_early_data_accepted() for
BoringSSL-derived stacks like AWS-LC. It returns true for a TLS server
if it has accepted the early data received from a client.

Also implement quic_ssl_early_data_status_str(), which is dedicated to
debugging purposes (traces). This function converts the enum returned by
the two functions mentioned above into a human-readable string.

(cherry picked from commit 609b1245610c8f3e219a1eff12d13559cae109cd)
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:25:03 +02:00
Frederic Lecaille
ceb7dbf420 MINOR: quic: Modify NEW_TOKEN frame structure (qf_new_token struct)
Modify the qf_new_token structure to use a static buffer of
QUIC_TOKEN_LEN size, as defined by the token for future connections
(quic_token.c). Modify the NEW_TOKEN frame parser accordingly (see
quic_parse_new_token_frame()). Also add comments to denote that the
NEW_TOKEN parser function is used only by clients and that its builder
is used only by servers.
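
The modified structure presumably looks like this (sketch):

    /* sketch: NEW_TOKEN frame payload with a fixed-size token buffer */
    struct qf_new_token {
        uint64_t len;                        /* token length */
        unsigned char data[QUIC_TOKEN_LEN];  /* static buffer */
    };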

(cherry picked from commit e926378375bcf579daadea071c600651eb7dce0d)
[fl: remove useless openssl/chacha.h header inclusion when moving
     openssl-compat.h at the start of the header inclusions as expected by this
     patch]
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:23:20 +02:00
Frederic Lecaille
0040d08b19 BUG/MINOR: quic: Missing incrementation in NEW_TOKEN frame builder
quic_build_new_token_frame() is the function which is called to build
a NEW_TOKEN frame into a buffer. The position pointer for this buffer
was not updated, leading the NEW_TOKEN frame to be malformed.

Must be backported as far as 2.6.

(cherry picked from commit 76c80605a6cc7880e4184c52fb1a314367410271)
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:18:52 +02:00
Frederic Lecaille
92b811d520 MINOR: quic: Token for future connections implementation.
There exist two sorts of tokens used by QUIC. Both are used to validate
the peer address (path validation). Retry tokens are used for the
current connection the client wants to open. This patch implements the
other sort of token, which, after having been received over a
connection, may be provided for the next connection from the same IP
address to validate it (or validate the network path between the client
and the server).

The token generation is implemented by quic_generate_token(), and the
token validation by quic_token_check(). The same method as for Retry
tokens is used to build these tokens to be reused for future
connections. The format is very simple: one byte for the format
identifier to distinguish these new tokens from Retry tokens, followed
by a 32-bit timestamp. As this part is ciphered with an AEAD
cryptographic algorithm, 16 bytes are needed for the AEAD tag. 16 more
random bytes are added to this token, along with a salt, to derive the
AEAD secret used to cipher the token. In addition to this salt, the
client IP address is also used as AAD to derive the AEAD secret. So, the
length of the token is fixed: 37 bytes.
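
Summarized, the token layout is (sketch; 1 + 4 + 16 + 16 = 37 bytes,
exact field order as per the implementation):

    1 byte    format identifier (distinguishes it from a Retry token)
    4 bytes   32-bit timestamp
    16 bytes  AEAD tag
    16 bytes  random bytes (used with the salt to derive the AEAD secret)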

(cherry picked from commit f5b09dc452f582eb876527fd28103bc29c51afad)
[fl: very minor Makefile modif to correctly add quic_token.o object to be compiled]
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:17:27 +02:00
William Lallemand
0c7265509c MEDIUM: ssl/quic: implement quic crypto with EVP_AEAD
The QUIC crypto uses the EVP_CIPHER API in order to achieve
authenticated encryption; this was the API used with OpenSSL. With
libraries that take inspiration from BoringSSL (LibreSSL and AWS-LC),
the AEAD algorithms are implemented using the EVP_AEAD API.

This patch converts the calls to the EVP_CIPHER API, when made in the
context of AEAD cryptography for QUIC.

The patch defines some QUIC_AEAD macros that can be either EVP_CIPHER or
EVP_AEAD depending on the library.

This was mainly done for AWS-LC, but it could be useful for other
libraries. This should finally allow the use of CHACHA20_POLY1305 with
AWS-LC.
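
The macro mapping is conceptually as follows (sketch; the exact macro
set and build condition are defined by the patch):

    /* sketch: map QUIC_AEAD onto the library's native AEAD type */
    #ifdef USE_OPENSSL_AWSLC
    # define QUIC_AEAD      EVP_AEAD
    # define QUIC_AEAD_CTX  EVP_AEAD_CTX
    #else
    # define QUIC_AEAD      EVP_CIPHER
    # define QUIC_AEAD_CTX  EVP_CIPHER_CTX
    #endif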

This patch allows the following ciphers to be used with the EVP_AEAD
API:
- TLS1_3_CK_AES_128_GCM_SHA256
- TLS1_3_CK_AES_256_GCM_SHA384

AWS-LC does not implement TLS1_3_CK_AES_128_CCM_SHA256, and
TLS1_3_CK_CHACHA20_POLY1305_SHA256 requires some hack for header
protection which will come in another patch.

(cherry picked from commit 31c831e29b432f0a9958be63948e8f4cb278e9f8)
[fl: required to support NEW_TOKEN which depends on QUIC_AEAD_* definitions]
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:15:15 +02:00
Frederic Lecaille
d074491bd4 MINOR: quic: Implement quic_tls_derive_token_secret().
This function is similar to quic_tls_derive_retry_token_secret(). Its
aim is to derive the secret used to cipher the token to be used for
future connections.

This patch renames quic_tls_derive_retry_token_secret() to a more
generic one, quic_do_tls_derive_token_secret(), and reuses its code. Two
arguments are added to the latter to produce both
quic_tls_derive_retry_token_secret() and the new
quic_tls_derive_token_secret() function, both of which call
quic_do_tls_derive_token_secret().

(cherry picked from commit 74caa0eece1cc3a8b35f1d34674ea5f357819314)
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:13:00 +02:00
Frederic Lecaille
ee7ad6615d MINOR: tools: Implement ipaddrcpy().
Implement ipaddrcpy() new function to copy only the IP address from
a sockaddr_storage struct object into a buffer.
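
A minimal sketch of such a helper (the real prototype and error
handling may differ):

    /* sketch: copy only the address part of a sockaddr_storage */
    unsigned char *ipaddrcpy(unsigned char *buf,
                             const struct sockaddr_storage *ss)
    {
        if (ss->ss_family == AF_INET)
            memcpy(buf, &((const struct sockaddr_in *)ss)->sin_addr, 4);
        else
            memcpy(buf, &((const struct sockaddr_in6 *)ss)->sin6_addr, 16);
        return buf;
    }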

(cherry picked from commit fb7a0922038932a6b82f1827a0214c5d2e8da32e)
Signed-off-by: Frederic Lecaille <flecaille@haproxy.com>
2024-09-05 16:12:41 +02:00
Willy Tarreau
61d73137f1 BUG/MEDIUM: clock: also update the date offset on time jumps
In GH issue #2704, @swimlessbird and @xanoxes reported problems handling
time jumps. Indeed, since 2.7 with commit 4eaf85f5d9 ("MINOR: clock: do
not update the global date too often") we refrain from updating the
global offset in case it didn't change. But there's a catch: in case of
a large time jump, if the poller was interrupted, the local time remains
the same and we return immediately from there without updating the
offset. It then becomes incorrect regarding the "date" value, and upon a
subsequent call to the poller there's no way to detect a jump anymore,
so we apply the old, incorrect offset and the date becomes wrong. Worse,
when going back to the original time (then in the past), global_now_ns
remains higher than the local time and neither gets updated anymore.

What is missing in practice is to immediately update the offset when
detecting a time jump. In an ideal world, the offset would be updated
upon every call; that's what was being done prior to the commit above,
but it's extremely CPU-intensive on large systems. However we can
perfectly afford to update the offset every time we detect a time jump,
as it's not as common.

This needs to be backported as far as 2.8. Thanks to both participants
above for providing very helpful details.

(cherry picked from commit e8b1ad4c2b3985eb9e826fd279e419719a2c03ce)
Signed-off-by: Willy Tarreau <w@1wt.eu>
2024-09-04 17:13:00 +02:00
Frederic Lecaille
5ad7493933 BUILD: quic: 32bits build broken by wrong integer conversions for printf()
Since these commits the 32-bit build is broken due to several errors as
follows:

CC      src/quic_cli.o
src/quic_cli.c: In function ‘dump_quic_full’:
src/quic_cli.c:285:94: error: format ‘%ld’ expects argument of type ‘long int’,
        but argument 5 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Werror=format=]
  285 |                         chunk_appendf(&trash, "  [initl] rx.ackrng=%-6zu tx.inflight=%-6zu(%ld%%)\n",
      |                                                                                            ~~^
      |                                                                                              |
      |                                                                                              long int
      |                                                                                            %lld
  286 |                                       pktns->rx.arngs.sz, pktns->tx.in_flight,
  287 |                                       pktns->tx.in_flight * 100 / qc->path->cwnd);
      |                                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                                                 |
      |                                                                 uint64_t {aka long long unsigned int}

Replace several %ld with %llu, with the values cast to unsigned long
long, in quic_cli.c, and one %ld with %lld, with a (long long) cast, in
quic_cc_cubic.c.
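
Applied to the example above, the fix is of this form (sketch):

    /* sketch: cast 64-bit values explicitly so the format is portable
     * across 32-bit and 64-bit targets
     */
    chunk_appendf(&trash, "  [initl] rx.ackrng=%-6zu tx.inflight=%-6zu(%llu%%)\n",
                  pktns->rx.arngs.sz, pktns->tx.in_flight,
                  (unsigned long long)(pktns->tx.in_flight * 100 / qc->path->cwnd));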

Thank you to Ilya (@chipitsine) for having reported this issue in GH #2689.

Must be backported to 3.0.

(cherry picked from commit 414e3aa6bc80d66a448dc25d8e50f4e457dc8711)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Valentine Krasnobaeva
fa72729511 BUG/MINOR: cfgparse-global: remove tune.fast-forward from common_kw_list
Remove tune.fast-forward from common_kw_list. It was replaced by
'tune.disable-fast-forward' and is no longer present in the
"if..else if.." parser of cfg_parse_global(). Otherwise, it may be shown
as the best-match keyword for some tune options, which is now wrong.

Should be backported to versions 2.9 and 3.0.

(cherry picked from commit 2e6e159ac47468b6b65e9678c9e4d1fe746165e8)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Nathan Wehrman
692c298a88 DOC: config: correct the table for option tcplog
option tcplog was erroneously reported as functional in the backend
section. This can be backported as needed; it simply corrects the table.

(cherry picked from commit 9788ae1d19ea159f2a87a8ef0a02ff57a480b703)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Valentine Krasnobaeva
b4af0cff48 BUG/MINOR: pattern: pat_ref_set: return 0 if err was found
pat_ref_set_elt() returns 0 if we run out of memory or can't parse a
new map value. Any error message emitted by pat_ref_set_elt() is saved
in the err buffer, if it is provided by the caller. These error messages
are accumulated during the loop.

pat_ref_set() is used to update the values in a map which refer to the
same given key. If pat_ref_set_elt() fails during the update, let's
return 0 to the caller immediately. We have the same non-unique key and
the same new value in each iteration, so it seems quite odd to
accumulate the same error messages and print them on the CLI:

        > add map @1 mytest.map <<
        + 1.0.1.11 TestA
        + 1.0.1.11 TESTA
        + 1.0.1.11 test_a
        +

        > set map mytest.map 1.0.1.11 15
         unable to parse '15' unable to parse '15' unable to parse '15'.

With this patch, cli_parse_set_map(), which calls pat_ref_set() to
update the map, will return only one error message:

> set map mytest.map 1.0.1.11 15
 unable to parse '15'.

hlua_set_map() and http_action_set_map() don't provide an error buffer
and will just exit on the first error.

This should be backported to all stable versions.

(cherry picked from commit 911f4d93d436a9067fb45168f70f79b5916489b0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Valentine Krasnobaeva
bf82f46ef0 BUG/MINOR: pattern: pat_ref_set: fix UAF reported by coverity
memprintf() performs a realloc and then updates the pointer to the
output buffer where it has written the data. So free() is called on the
previous buffer address, if one was provided.

pat_ref_set_elt() uses memprintf() to write its error message, as does
pat_ref_set(). So, when we re-enter the while loop for the second time
and pat_ref_set_elt() has returned, the *err pointer (the previous value
of *merr) has already been freed by the memprintf() called from
pat_ref_set_elt().

The 'if (!found)' condition is false at this point, because we found a
node in the first iteration. So the second memprintf(), in order to
write its error message, calls free(*err) again.
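
For context, memprintf()'s contract is roughly (sketch, not the actual
implementation):

    char *memprintf(char **out, const char *format, ...);

    memprintf(&err, "first");   /* err -> newly allocated "first" */
    memprintf(&err, "second");  /* frees "first", err -> "second" */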

This should be backported to all stable versions.

(cherry picked from commit 4f2493f3551f8c344485cccf075c3b15f9825181)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Amaury Denoyelle
b8523ffaf8 BUG/MINOR: h3: properly reject too long header responses
When encoding HTX to HTTP/3 headers on the response path, a bunch of
ABORT_NOW() calls were used when buffer room was not sufficient. In most
cases this is safe, as the output buffer has just been allocated and so
is empty at the start of the function. However, with a header list
longer than a whole buffer, this would cause an unexpected crash.

Fix this by replacing the ABORT_NOW() statements with a proper error
return path. For the moment, this causes the whole connection to be
closed rather than the stream only. This may be further improved in the
future.

Also remove ABORT_NOW() when encoding the frame length at the end of
headers or trailers encoding. Buffer room is sufficient there as it was
already checked earlier in the same function.

This should be backported up to 2.6. Special care should be taken
however, as this code path has changed frequently:
* for 2.9 and older, the following extra statement must be inserted
  prior to each newly added goto statement:
  h3c->err = H3_INTERNAL_ERROR;
* for 2.6, trailers support is not implemented. As such, the related
  chunks should just be ignored when backporting.
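
Putting it together, the error path has this shape (sketch):

    /* sketch: fail gracefully instead of crashing on short buffers */
    if (b_room(res) < required) {
        /* was: ABORT_NOW(); */
        h3c->err = H3_INTERNAL_ERROR;  /* only needed for 2.9 and older */
        goto err;
    }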

(cherry picked from commit 48514c118ce881e2ae20aeb1edda929663a8397d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Valentine Krasnobaeva
317e918193 BUG/MINOR: proto_uxst: delete fd from fdtab if listen() fails
This patch is done mostly as a safeguard, in order not to trigger the
BUG_ON(fdtab[fd].owner != NULL) check, if listen() fails on a UNIX
domain socket.

In uxst_bind_listener(), pretty much the same logic of closing the
socket on the error path was kept as it was in tcp_bind_listener()
before. The use of fd_delete() was not generalized when support for the
UNIX sock_stream protocol was implemented. So, let's remove the fd from
fdtab on failure, instead of just closing it. Otherwise,
uxst_bind_listener(), which may be called in a loop for each receiver,
will obtain the same fd via socket() for the next receiver. Then it will
bind it again and try to re-insert it into fdtab.
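
The change is essentially (sketch; the label name is hypothetical):

    if (listen(fd, listener_backlog(listener)) < 0) {
        /* was: close(fd); */
        fd_delete(fd);  /* also clears fdtab[fd].owner */
        goto bind_return;
    }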

This can be backported to all stable versions.

(cherry picked from commit eb8235869027fe6f472160febb6edb169f38d1ee)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Amaury Denoyelle
cf920ac69b BUG/MINOR: mux-quic: do not send too big MAX_STREAMS ID
QUIC stream IDs are expressed as QUIC variable-length integers, which
cover the range 0 to 2^62 - 1. As such, it is forbidden to send an ID
for a MAX_STREAMS flow-control frame which would allow stream IDs beyond
this value.

This patch fixes MAX_STREAMS emission to ensure the sent value is valid.
This also ensures that the peer cannot open a stream with an invalid ID,
as this would otherwise cause a flow-control violation.
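
For the record, RFC 9000 caps the MAX_STREAMS value itself at 2^60,
since each stream ID encodes its type in its two low-order bits; a
sketch of the clamping:

    /* sketch: a stream count above 2^60 would map to stream IDs beyond
     * the 2^62 - 1 varint limit
     */
    if (max_streams > (1ULL << 60))
        max_streams = 1ULL << 60;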

This must be backported up to 2.6.

(cherry picked from commit f3c75a52df29247e5d502344127d42efb2c12b82)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
William Lallemand
3ce79401b3 REGTESTS: mcli: test the pipelined commands on master CLI
A recent fix broke pipelined commands on the master CLI; this reg-test
implements a simple check of their correct behavior.

This could be backported as far as 2.6.

(cherry picked from commit fe5ddcc4901e3b43e58f5cf903c500a69d091b57)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
William Lallemand
a948603e99 BUG/MEDIUM: mworker/cli: fix pipelined modes on master CLI
Since commit 3d93ecc ("BUG/MAJOR: cli: Restore non-interactive mode
behavior with pipelined commands") and commit 598c7f16 ("BUG/MEDIUM:
cli: Warn if pipelined commands are delimited by a \n"), pipelined
commands on the master CLI are either broken or emit warnings, depending
on the version.

The reason is that the modes applied on the master CLI are saved in the
current CLI session, and then reinserted for each pipelined command;
however, these commands were inserted as new lines.

For example:

 "@1; expert-mode on; debug dev log foo; debug dev log bar"

 Would be sent as:

  "expert mode on\ndebug dev log foo"
  "expert mode on\ndebug dev log bar"

This patch fixes the issue by using the new ci_insert() function, which
inserts a string instead of a newline; the commands are now suffixed
with ';' upon insertion, allowing a correct pipelined command chain.

This must be backported, together with the previous commit introducing
ci_insert(), to every stable version.

This has been broken since version 3.0, but it emits a warning in every
version below, because 598c7f164 was backported.

(cherry picked from commit b75edf2f11486bc2f2cc92c2b5273219a5e728c6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
William Lallemand
dcfe6118b2 MINOR: channel: implement ci_insert() function
ci_insert() is a function which inserts a string <str> of size <len> at
position <pos> of the input buffer. It is the equivalent of
ci_insert_line2() but without inserting '\r\n'.
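
Its prototype presumably looks like this (sketch):

    /* sketch: insert <str> of size <len> at position <pos> of the
     * channel's input buffer; no trailing '\r\n' is added
     */
    int ci_insert(struct channel *c, int pos, const char *str, size_t len);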

(cherry picked from commit b2a8e8731da82b8bbd9dfff6d5a0d71f25a5ee49)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:49:07 +02:00
Valentine Krasnobaeva
1038e1517a BUG/MINOR: proto_tcp: keep error msg if listen() fails
If listen() fails, we need to keep the message about it, which is then
copied into the errmsg buffer on the error path. This buffer is properly
provided by the caller (protocol_bind_all()) and reallocated if needed
in memprintf(), but it was freed without being returned.

This can be backported to all stable versions.

(cherry picked from commit 81f48395b325b9875d215ec2743e75f7a56e1e5f)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:52 +02:00
Valentine Krasnobaeva
de397e2e97 BUG/MINOR: proto_tcp: delete fd from fdtab if listen() fails
If listen() fails, the fd should be deleted from fdtab, not just
closed. Otherwise, sock_inet_bind_receiver(), which is called in a loop
for each receiver registered in the receivers list, will obtain the same
fd via socket() for the next receiver. Then it will bind it again and
try to re-insert it into fdtab, and fd_insert() will trigger the
BUG_ON(fdtab[fd].owner != NULL) check.

When the tcp_bind_listener() code was implemented, the use of
fd_delete() was not generalized, and this spot remained overlooked.

This can be backported to all stable versions.

(cherry picked from commit 308c6881c03b6302afd5cc48781d73a11ef994d4)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:48 +02:00
Willy Tarreau
be0d95414c BUG/MINOR: quic/trace: make quic_conn_enc_level_init() emit NEW not CLOSE
The event emitted by this trace was of type CLOSE instead of NEW, which
would sometimes temporarily pause a started trace.

This can be backported to 3.0, probably 2.6.

(cherry picked from commit 6bf50dfccca992d7f05febb5819e57b601ef94c0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:38 +02:00
Willy Tarreau
1c44929ed1 BUG/MINOR: trace/quic: make "qconn" selectable as a lockon criterion
The test was performed but there was no way to set the option! Let's
just add "qconn" to select the quic conn when the source supports it.

This can be backported at least to 3.0, probably 2.6.

(cherry picked from commit 7a22fbd453d7ef732b72c1d45d0e2c4f89b43fcb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:32 +02:00
Willy Tarreau
830fcd3082 BUG/MINOR: trace: automatically start in waiting mode with "start <evt>"
The doc clearly says that "start <evt>" should leave the trace in pause
mode until the indicated event appears. However, that's not what
happens: the state is not changed until one command uses "now", so it is
typically necessary to configure the events with "start <evt>" and then
enable the waiting mode using "pause now". This is counter-intuitive and
does not match the doc, so let's fix it so that "start <evt>" switches
from stopped to waiting as long as at least one event is enabled.

This can be backported to all versions.

(cherry picked from commit 0406efe9ad129e91f8a6c93b780064b3c27ccaa0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:27 +02:00
Willy Tarreau
fe71ad89da BUG/MEDIUM: trace: fix null deref in lockon mechanism since TRACE_ENABLED()
When calling TRACE_ENABLED(), which is called by TRACE_PRINTF(), we
pass a NULL plockptr to __trace_enabled(). This argument is used when
lockon is active, and may update the pointer. This was an oversight
which also broke the lockon mechanism, because now, for calls from
__trace(), it dereferences a pointer pointing to NULL and never updates
it due to the broken condition, so that trace() never sets up
src->lockon_ptr.

The bug was introduced in 2.8 by commit 8f9a9704bb ("MINOR: trace: add a
TRACE_ENABLED() macro to determine if a trace is active"), so the fix must
be backported there.

(cherry picked from commit b5df6b5a31b86b4403f00b7e0230c97883eca0f3)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:22 +02:00
Willy Tarreau
ab1a247177 BUG/MINOR: trace/quic: permit to lock on frontend/connect/session etc
These ones were not proposed in the list of trackable elements. Note
that this depends on previous commit:

    BUG/MINOR: trace/quic: enable conn/session pointer recovery from quic_conn

This should be backported to at least 3.0, maybe even 2.6.

(cherry picked from commit 88a752ca789e6a2f863e16a35ddaa4fade5ccecd)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:17 +02:00
Willy Tarreau
ab921e1935 BUG/MINOR: trace/quic: enable conn/session pointer recovery from quic_conn
In __trace_enabled(), a quic_conn was detected, but it was not possible
to derive the connection or the session from it, which was quite
limiting in terms of the ability to track a given instance.

This should be backported to at least 3.0, maybe even 2.6.

(cherry picked from commit aa1915a9f559724cb3fc2be39a2d928d99556e5a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:12 +02:00
Willy Tarreau
762f3df4e4 DOC: configuration: fix alphabetical ordering of {bs,fs}.aborted
These must be before {bs,fs}.id, not after. Should be backported wherever
068ce2d5d2 ("MINOR: stconn: Add samples to retrieve about stream aborts")
is (normally 3.0).

(cherry picked from commit b681a9e48813742850299fb5207766ac6f15007d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:38:07 +02:00
Ilia Shipitsin
111e8590c1 BUG/MINOR: fcgi-app: handle a possible strdup() failure
This defect was found by the coccinelle script "unchecked-strdup.cocci".
It can be backported to 2.2.

(cherry picked from commit aaaacaaf4b558f4e0206b98c787cdcef773b1d76)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:37:59 +02:00
Christopher Faulet
136fe7e3c1 BUG/MEDIUM: peer: Notify the applet won't consume data when it waits for sync
When the peer applet is waiting for a synchronization with the global
sync task, we must notify that it won't consume data. Otherwise, if some
data are already waiting in the input buffer, the applet will be woken
up in a loop and this will trigger the watchdog. Once synchronized, the
applet is woken up; in that case, the peer applet must indicate that it
is going to consume data again.
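
HAProxy's applet API provides hints for this; the fix is conceptually
(sketch, assuming the applet consumption helpers):

    /* sketch: while waiting for the global sync task */
    applet_wont_consume(appctx);

    /* sketch: once synchronized and woken up again */
    applet_will_consume(appctx);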

This patch should fix the issue #2656. It must be backported to 3.0.

(cherry picked from commit 78b8b6003082b54a24fa7b13be954f73248cc9d4)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:33:45 +02:00
Christopher Faulet
fef9d21bae BUG/MEDIUM: mux-h2: Propagate term flags to SE on error in h2s_wake_one_stream
When a stream is explicitly woken up by the H2 connection, if an error
condition is detected, the corresponding error flag is set on the SE:
SE_FL_ERROR or SE_FL_ERR_PENDING, depending on whether the end of stream
was reported or not.

However, there is no attempt to propagate other termination flags. We
must be sure to properly set SE_FL_EOI and SE_FL_EOS when appropriate to
be able to switch a pending error into a fatal error.
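
The propagation amounts to something like this (sketch; the conditions
are hypothetical simplifications):

    /* sketch: also forward termination flags so a pending error can
     * become a fatal one
     */
    if (end_of_input_reported)
        se_fl_set(h2s->sd, SE_FL_EOI);
    if (end_of_stream_reported)
        se_fl_set(h2s->sd, SE_FL_EOS);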

Because of this bug, the SE remains with a pending error and no end of
stream, preventing the applicative stream from truly aborting it. This
means that in some abort scenarios, it is possible to block a stream
infinitely.

This patch must be backported at least as far as 2.8. No bug was
observed on older versions, though the same code is in use.

(cherry picked from commit 184f16ded7a0274bffe99a4795d0a27f8be7c006)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
2024-09-03 18:33:40 +02:00