Before 2.3, after an async crypto processing or on session close, the engine
async file descriptors were removed from the fdtab but not closed, because
the engine which created the file descriptor is responsible for closing it.
In 2.3 the fd_remove() call was replaced by fd_stop_both(), which stops the
polling but does not remove the fd from the fdtab, so the fd remained in the
fdtab indefinitely.
A simple replacement by fd_delete() is not a valid fix because fd_delete()
removes the fd from the fdtab but also closes it, so the fd would be closed
twice: by haproxy's core and by the engine itself.
Instead, let's set FD_DISOWN on the FD before calling fd_delete() which will
take care of not closing it.
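A minimal sketch of the engine-side sequence, assuming the FD_DISOWN flag
from the companion patch below (the real change lives in the ssl async code
of src/ssl_sock.c and uses haproxy's atomic helpers):

    /* the engine owns this fd and will close it itself, so tell
     * fd_delete() not to close it, only to unregister it */
    HA_ATOMIC_OR(&fdtab[fd].state, FD_DISOWN);
    fd_delete(fd);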
This patch must be backported on branches >= 2.3, and it relies on this
previous patch:
MINOR: fd: add a new FD_DISOWN flag to prevent from closing a deleted FD
As mentioned in the patch above, a different flag will be needed in 2.3.
(cherry picked from commit 7d392a592d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Some FDs might be offered to some external code (external libraries)
which will deal with them until they close them. As such we must not
close them upon fd_delete() but we need to delete them anyway so that
they do not appear anymore in the fdtab. This used to be handled by
fd_remove() before 2.3 but we don't have this anymore.
This patch introduces a new flag FD_DISOWN to let fd_delete() know that
the core doesn't own the fd and it must not be closed upon removal from
the fdtab. This way it's totally unregistered from the poller but still
open.
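A simplified sketch of the resulting fd_delete() behaviour (not the verbatim
haproxy code, which also deals with polling state and thread-safety):

    void fd_delete(int fd)
    {
        /* ... unregister from the poller and remove the entry from
         * the fdtab (elided) ... */
        if (!(fdtab[fd].state & FD_DISOWN))
            close(fd);  /* only close fds the core actually owns */
    }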
This patch must be backported on branches >= 2.3 because it will be
needed to fix a bug affecting SSL async. It should be adapted on 2.3
because state flags were stored in a different way there (via bits in the
structure).
(cherry picked from commit f41a3f6762)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Because of the previous fix, if the HTTP parsing is only performed when the
"method" sample fetch is called, we always rely on the string representation
of the request method.
Indeed, if no parsing has been performed by the time the "method" sample
fetch is called, the transaction method is HTTP_METH_OTHER because it was
just initialized. In this case, without this patch, we would always retrieve
the method by reading the request start-line.
Now, when the method is HTTP_METH_OTHER, we systematically try to parse the
request, but the method is tested once again after the parsing so that the
integer representation can be used whenever possible.
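A hedged sketch of the resulting fetch logic (helper names such as
smp_prefetch_htx() are recalled from the 2.x sources and may differ per
branch):

    meth = txn->meth;
    if (meth == HTTP_METH_OTHER) {
        /* method unknown so far: make sure the request is parsed */
        if (!smp_prefetch_htx(smp, chn, NULL, 1))
            return 0;
        /* re-test after parsing to prefer the integer representation */
        meth = txn->meth;
    }
    if (meth != HTTP_METH_OTHER) {
        smp->data.type = SMP_T_METH;
        smp->data.u.meth.meth = meth;
        return 1;
    }
    /* still unknown: fall back to the request start-line string */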
This patch must be backported as far as 2.0.
(cherry picked from commit dbbdb25f1c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This patch is required to fix the "method" sample fetch, but it also makes
sense to initialize the method of an HTTP transaction to HTTP_METH_OTHER.
This way, before the request is parsed, the method is considered unknown
unless we are able to retrieve the request start-line. It is especially
important for TCP streams.
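A one-line sketch of the initialization in question (the transaction setup
function name varies per branch):

    txn->meth = HTTP_METH_OTHER;  /* unknown until a start-line is parsed */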
About the "method" sample fetch, this patch is a way to be sure no random
method is returned when the sample fetch is used on a TCP stream before any
HTTP parsing.
This patch must be backported as far as 2.0.
(cherry picked from commit 5eb67f5d74)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
A bug was introduced by commit 9bf3a1f67e
"BUG/MINOR: ssl: Fix crash when no private key is found in pem".
If a pem file already contains a private key, we would still look for a
.key file and, if it exists, load its private key when we should not.
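A hedged sketch of the corrected lookup order (the helper name below is
hypothetical; the real code is in the ckch loading functions of
src/ssl_ckch.c):

    if (!ckch->key) {
        /* no private key found in the pem itself: only then try to
         * load one from the companion "<path>.key" file */
        ret = load_key_file_into_ckch(path, ckch, err); /* hypothetical */
    }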
This patch should be backported to all branches where the original fix
was backported (all the way to 2.2).
(cherry picked from commit 1bad7db4a1)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Glenn Strauss from Lighttpd reported a corner case affecting curl+lighttpd
that causes some uploads to degenerate to extremely suboptimal conditions
under certain circumstances, and noted that many other implementations
were possibly not safe against this degradation.
Glenn's detailed analysis is available here:
https://github.com/nghttp2/nghttp2/issues/1722
In short, curl uses a 65536 bytes buffer and the default stream window
is 65535, with 16384 bytes per frame. Curl will then send 3 frames of
16384 bytes followed by one of 16383, will wait for a window update to
send the last byte before recycling the buffer to read the next 64kB.
On each round like this, one extra single-byte frame will be sent, and
if ACKs for these single-byte frames are not aggregated, this will only
allow the client to send one extra byte at a time. At some point it is
possible (at least Glenn observed it) to have mostly 1-byte frames in
the transfer, resulting in huge CPU usage and a long transfer.
It was not possible to reproduce this with haproxy, even when playing
with frame sizes, buffer sizes or window sizes. One reason seems to
be that we're using the same buffer size for the connection and the
stream, and that the frame headers prevent the window from being filled
on the same boundaries as on the sender. However it does occasionally
happen that up to two 1-byte data frames are seen in a row, indicating
that there's definitely room for improvement.
The WINDOW_UPDATE frames for the connection are sent at the end of the
demuxing, but the ones for the streams are currently sent immediately
after a DATA frame is processed, mostly for convenience. But we don't
need to proceed like this, we already have the counter of unacked bytes
in rcvd_s, so we can simply use that to decide when to send an ACK. It
must just be done before processing a new frame. The benefit is that
contiguous frames for the same stream will now only produce a single
WU, like for the connection. On complicated tests involving a client
that was limited to 100 Mbps transfers and a dummy Lua-based payload
consumer, it was possible to see the number of stream WU frames being
halved for a 100 MB transfer, which is already a nice saving anyway.
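A hedged sketch of the new acking strategy in the h2 demuxer (rcvd_s is the
field named above; the helpers around it are illustrative, not the verbatim
mux code):

    while (h2_frame_ready(h2c)) {           /* illustrative helper */
        /* before moving to a frame for another stream, flush the
         * pending per-stream ACK so that contiguous DATA frames for
         * the same stream produce a single WINDOW_UPDATE */
        if (h2c->rcvd_s && !frame_is_for_current_stream(h2c))
            h2c_send_strm_wu(h2c);
        h2_demux_one_frame(h2c);            /* only updates rcvd_s */
    }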
Glenn proposed a better workaround consisting in increasing the
default window size to 65536. This will be done in a separate patch
so that both can be studied independently in field and backported as
needed.
This patch is not very complicated and should be backportable. It just
needs to be tested in development first.
(cherry picked from commit 617592c9eb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Released version 2.6.1 with the following main changes :
- BUG/MINOR: ssl_ckch: Free error msg if commit changes on a cert entry fails
- BUG/MINOR: ssl_ckch: Free error msg if commit changes on a CA/CRL entry fails
- BUG/MEDIUM: ssl_ckch: Don't delete a cert entry if it is being modified
- BUG/MEDIUM: ssl_ckch: Don't delete CA/CRL entry if it is being modified
- BUG/MINOR: ssl_ckch: Don't duplicate path when replacing a cert entry
- BUG/MINOR: ssl_ckch: Don't duplicate path when replacing a CA/CRL entry
- BUG/MEDIUM: ssl_ckch: Rework 'commit ssl cert' to handle full buffer cases
- BUG/MEDIUM: ssl_ckch: Rework 'commit ssl ca-file' to handle full buffer cases
- BUG/MEDIUM: ssl/crt-list: Rework 'add ssl crt-list' to handle full buffer cases
- BUG/MEDIUM: httpclient: Don't remove HTX header blocks before duplicating them
- BUG/MEDIUM: httpclient: Rework CLI I/O handler to handle full buffer cases
- MEDIUM: http-ana: Always report rewrite failures as PRXCOND in logs
- MEDIUM: httpclient: Don't close CLI applet at the end of a response
- REGTESTS: abortonclose: Add a barrier to not mix up log messages
- REGTESTS: http_request_buffer: Increase client timeout to wait "slow" clients
- BUG/MINOR: ssl_ckch: Use right type for old entry in show_crlfile_ctx
- BUG/MINOR: ssl_ckch: Dump CRL transaction only once if show command yield
- BUG/MINOR: ssl_ckch: Dump CA transaction only once if show command yield
- BUG/MINOR: ssl_ckch: Dump cert transaction only once if show command yield
- BUG/MINOR: ssl_ckch: Init right field when parsing "commit ssl crl-file" cmd
- BUG/MINOR: ssl_ckch: Fix possible uninitialized value in show_cert I/O handler
- BUG/MINOR: ssl_ckch: Fix possible uninitialized value in show_cafile I/O handler
- BUG/MINOR: ssl_ckch: Fix possible uninitialized value in show_crlfile I/O handler
- REGTESTS: http_abortonclose: Extend supported versions
- REGTESTS: restrict_req_hdr_names: Extend supported versions
- BUILD: compiler: implement unreachable for older compilers too
- BUG/MEDIUM: mailers: Set the object type for check attached to an email alert
- BUG/MINOR: trace: Test server existence for health-checks to get proxy
- BUG/MINOR: checks: Properly handle email alerts in trace messages
- REGTESTS: healthcheckmail: Update the test to be functional again
- REGTESTS: healthcheckmail: Relax health-check failure condition
- BUG/MINOR: h3: fix frame type definition
- BUG/MINOR: cli/stats: add missing trailing LF after JSON outputs
- BUG/MINOR: server: do not enable DNS resolution on disabled proxies
- BUG/MINOR: cli/stats: add missing trailing LF after "show info json"
- BUG/MEDIUM: mux-quic: fix flow control connection Tx level
- BUG/MINOR: mux-quic: fix memleak on frames rejected by transport
- BUG/MINOR: tcp-rules: Make action call final on read error and delay expiration
- BUG/MEDIUM: stconn: Don't wakeup applet for send if it won't consume data
- BUG/MEDIUM: cli: Notify cli applet won't consume data during request processing
- BUG/MEDIUM: mux-quic: fix segfault on flow-control frame cleanup
- BUG/MINOR: qpack: support header literal name decoding
- MINOR: qpack: add comments and remove a useless trace
- BUG/MINOR: h3/qpack: deal with too many headers
- BUG/BUILD: h3: fix wrong label name
- BUG/MINOR: quic: Stop hardcoding Retry packet Version field
- BUG/MINOR: quic: Wrong PTO calculation
- BUG/MINOR: task: fix thread assignment in tasklet_kill()
- BUG/MEDIUM: stream: Properly handle destructive client connection upgrades
- MINOR: stream: Rely on stconn flags to abort stream destructive upgrade
- BUG/MINOR: log: Properly test connection retries to fix dontlog-normal option
- BUG/MINOR: quic: Unexpected half open connection counter wrapping
- BUG/MINOR: quic_stats: Duplicate "quic_streams_data_blocked_bidi" field name
- BUG/MINOR: quic: purge conn Rx packet list on release
- BUG/MINOR: quic: free rejected Rx packets
- BUG/MEDIUM: ssl/cli: crash when crt inserted into a crt-list
- BUG/MINOR: quic: Acknowledgement must be forced during handshake
- BUG/MEDIUM: mworker: use default maxconn in wait mode
- REGTESTS: ssl: add the same cert for client/server
Add the same certificate on a server and a bind line so we can try to catch
problems like the one in issue #1748 when updating over the CLI.
(cherry picked from commit ae6547f65f)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
In bug #1751, it was reported that haproxy consumes too much memory
since the 2.4 version. This is because of a change in the master, which
completely loses its configuration in wait mode, including its maxconn.
Without a maxconn, haproxy tries to compute one itself and allocates RAM
accordingly, too much in our case. This means the master ends up with a
maxconn that is too high and too much RAM allocated.
The patch fixes the issue by setting the maxconn to the default value
when re-executing the master in wait mode.
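A minimal sketch of the fix, assuming the master re-exec path (the mode
flag and constant names are recalled from the haproxy sources and may
differ):

    if (global.mode & MODE_MWORKER_WAIT)
        global.maxconn = DEFAULT_MAXCONN; /* don't auto-size from
                                           * memory in wait mode */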
Must be backported as far as 2.5.
(cherry picked from commit 0a012aa16b)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
All packets received during handshakes must be acknowledged asap. This was
not the case for received Handshake packets. Until now this had no impact
because the client often has only one Handshake packet to send, and the
last handshake packet sent on our side always embeds a HANDSHAKE_DONE
frame, which leads the client to consider it has no more handshake packets
to send.
Add <force_ack> to qc_may_build_pkt() to force an ACK frame to be sent.
Set this parameter to 1 when sending packets from the Initial or Handshake
packet number spaces, and to 0 when sending only Application level packets.
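A hedged sketch of a call site (parameter order and surrounding names are
illustrative):

    int force_ack = (qel == &qc->els[QUIC_TLS_ENC_LEVEL_INITIAL] ||
                     qel == &qc->els[QUIC_TLS_ENC_LEVEL_HANDSHAKE]);

    if (!qc_may_build_pkt(qc, frms, qel, cc, probe, force_ack))
        return 0;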
Must be backported to 2.6.
(cherry picked from commit 57bddbcbbb)
[ad: adjusted context. quic_version type does not exists as QUIC
version negotiation has not been backported.]
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
The crash occurs when the same certificate, used on both a server line
and a bind line, is inserted in a crt-list over the CLI. This is quite
uncommon, as using the same file for a client and a server certificate
does not make sense in a lot of environments.
This patch fixes the issue by skipping the insertion of the SNI when no
bind_conf is available in the ckch_inst.
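A hedged sketch of the guard, assuming it sits in the loop that inserts the
SNIs of a ckch instance (exact file and control flow differ per branch):

    if (!ckch_inst->bind_conf) {
        /* server-side instance: there is no SNI tree to insert into */
        continue;
    }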
Change the reg-test to reproduce this corner case.
Should fix issue #1748.
Must be backported as far as 2.2. (it was previously in ssl_sock.c)
(cherry picked from commit cb6c5f4683)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Free the Rx packet in the datagram handler if the packet was not taken in
charge by a quic_conn instance. This is reflected by a null packet
refcount. A packet can be rejected for a variety of reasons: for example a
failed decryption, a missing Initial token followed by a Retry emission,
or datagram null padding.
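A hedged sketch of the datagram-handler change (the free helper is
illustrative; the refcount test is the one described above):

    if (!HA_ATOMIC_LOAD(&pkt->refcnt)) {
        /* no quic_conn took ownership of this packet: free it now */
        quic_rx_packet_free(pkt);  /* illustrative helper */
    }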
This patch should resolve the Rx packets memory leak observed via "show
pools" with the previous commit
2c31e12936
BUG/MINOR: quic: purge conn Rx packet list on release
This specific memory leak instance was reproduced using quiche client
which uses null datagram padding.
This should partially resolve github issue #1751.
It must be backported up to 2.6.
(cherry picked from commit 23f908ccd6)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
When releasing a quic_conn instance, free all remaining Rx packets in
quic_conn.rx.pkt_list. This partially fixes a memory leak on Rx packets
which can be observed after several QUIC connections establishment.
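A hedged sketch of the purge (list member and pool names are approximate;
the iteration macros are haproxy's own):

    struct quic_rx_packet *pkt, *pktback;

    list_for_each_entry_safe(pkt, pktback, &qc->rx.pkt_list,
                             qc_rx_pkt_list) {
        LIST_DELETE(&pkt->qc_rx_pkt_list);
        pool_free(pool_head_quic_rx_packet, pkt);
    }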
This should partially resolve github issue #1751.
It must be backported up to 2.6.
(cherry picked from commit 2c31e12936)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
As reported by broxio in GH #1757, the field name
"quic_streams_data_blocked_bidi" was duplicated, most likely due to a
copy-paste without renaming.
Must be backported to 2.6.
(cherry picked from commit 483499dc22)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
This counter must be incremented only once per connection and decremented
as soon as the handshake has failed or succeeded: it is a gauge. Under
certain conditions this counter could be decremented twice, for instance
after having received a TLS alert, then again upon SSL_do_handshake()
failure. To avoid having to deal with all the combinations which can lead
to such a situation (and those to come), add a connection flag to denote
whether this counter has already been decremented for a connection. The
counter is now decremented only if this flag is not set.
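A minimal sketch of the flag-guarded decrement (the flag name is
illustrative; the counter is the half-open connection gauge discussed
above):

    if (!(qc->flags & QUIC_FL_CONN_HALF_OPEN_CNT_DECREMENTED)) {
        qc->flags |= QUIC_FL_CONN_HALF_OPEN_CNT_DECREMENTED;
        HA_ATOMIC_DEC(&prx_counters->half_open_conn);
    }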
Must be backported up to 2.6.
(cherry picked from commit 2aebaa49b1)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
The commit 731c8e6cf ("MINOR: stream: Simplify retries counter calculation")
introduced a regression. It broke the dontlog-normal option because the test
on the connection retries counter was not updated accordingly.
This patch should fix the issue #1754. It must be backported to 2.6.
(cherry picked from commit a892b7f15f)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
On a destructive connection upgrade, instead of using the new mux name to
abort the old stream, we can rely on the stream connector flags. If it is
detached after the upgrade, it means the stream will not be reused by the
new mux and it must be aborted.
This patch may be backported to 2.6.
(cherry picked from commit 9b8d7a11c0)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
When the protocol is changed for a client connection at the stream level
(from TCP to H1/H2), there are two cases: the stream may be reused or not.
The first case, when the stream is reused, works. The second one has been
buggy since the conn-stream refactoring and leads to a crash.
In this case, the new mux doesn't reuse the stream; it must be silently
aborted. However, its front stream connector still references the
connection, so it must be detached. But this must be performed in two
stages, to be sure not to lose the context for the upgrade and to be able
to roll back on error. So now, before the upgrade, we prepare to detach the
stconn, and it is finally detached if the upgrade succeeds. There is a trick
here: we pretend the stconn is detached, but its state is preserved.
This patch must be backported to 2.6.
(cherry picked from commit b68f77d626)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
tasklet_kill() was introduced in 2.5-dev4 with commit 7b368339a
("MEDIUM: task: implement tasklet kill"), but a comparison error
there makes tasklets killed on thread 1 assigned to the killing
thread. Fortunately, the function was finally not used so there's
no harm right now, hence the minor tag, but this must be fixed and
backported in case a later fix relies on it.
This should be backported to 2.5.
(cherry picked from commit 9b3aa63df7)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Due to missing brackets around a ternary C operator, quic_pto() could return
zero on the first run, before the QUIC connection was completely
initialized. This led the idle timeout task to be executed before this
initialization completed. This connection could then be released
immediately.
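To illustrate the precedence pitfall (this is not the verbatim quic_pto()
code): '+' and '>' bind tighter than '?:', so without parentheses the whole
sum becomes the condition and the intended term is lost.

    uint64_t srtt = 10, rttvar = 2, granularity = 1, pto;

    /* buggy: parsed as ((srtt + 4 * rttvar > granularity) ? ...) and
     * may yield a uselessly small value */
    pto = srtt + 4 * rttvar > granularity ? 4 * rttvar : granularity;

    /* intended: srtt plus the larger of 4*rttvar and the granularity */
    pto = srtt + (4 * rttvar > granularity ? 4 * rttvar : granularity);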
This bug was revealed by the multi_packet_client_hello QUIC tracker test.
Must be backported to 2.6.
(cherry picked from commit fa94f77bc5)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Use the same version as the one received. This is safe because the
version is checked before anything else, a Version Negotiation packet
being sent if it is not supported.
Must be backported to 2.6.
(cherry picked from commit 01d515e013)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
A pretty ugly mistake was introduced recently with an invalid goto statement
which prevents haproxy from compiling with QUIC support.
This must be backported on 2.6 as a complement to
60ef19f137
BUG/MINOR: h3/qpack: deal with too many headers
(cherry picked from commit fa7fadca19)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Ensure that we never insert too many entries in a headers input list.
On the decoding side, a new error QPACK_ERR_TOO_LARGE is reported in
this case.
This prevents a crash if the number of headers in an H3 request or response
exceeds the tune.http.maxhdr config value. Previously, a crash would occur
in the QPACK decoding function.
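A hedged sketch of the decoder-side bound check (variable names
illustrative; the error code is the one introduced by this patch):

    if (hdr_count >= list_size) {
        /* more headers than the list sized from tune.http.maxhdr
         * can hold: reject the field section */
        ret = -QPACK_ERR_TOO_LARGE;
        goto out;
    }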
Note that the process still crashes later with ABORT_NOW() because error
reporting on frame parsing is not implemented for now. It should be
treated with a RESET_STREAM frame in most cases.
This can be backported up to 2.6.
(cherry picked from commit 60ef19f137)
[ad: adjusted context]
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Add comments on the decoding function to facilitate code analysis.
Also remove the qpack_debug_hexdump() call which printed the whole remaining
buffer on each header parsing; with large HEADERS frame payloads, QPACK
traces were too complicated to debug with this statement.
(cherry picked from commit c5d31ed8be)
[ad: this commit was cherry-picked to align context for future QPACK/H3
fixes]
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
Complete QPACK decoding with a full implementation of the literal field
name with literal value representation. This change is mandatory to support
decoding of header names not present in the QPACK static table.
Previously, these headers were silently ignored and not transferred to
the backend request.
QPACK decoding should now be sufficient to deal with all real
situations. Only post-base index representations are not handled, but
this should not cause a problem as they are only used for the dynamic
table, whose size is announced as null by the haproxy implementation.
The direct impact of this change is that it should now be possible to
use complex webapp through a QUIC frontend.
This must be backported up to 2.6.
(cherry picked from commit 4bcaf69dca)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
The LIST_ELEM macro was incorrectly used in the loop purging
flow-control frames from qcc.lfctl.frms on MUX release. This caused a
segfault in qc_release() due to an invalid quic_frame pointer.
The occurrence of this bug seems fairly rare: to happen, some
flow-control frames must have been allocated but not yet sent just as
the MUX release is triggered.
I did not find a reproducer scenario. Instead, I artificially triggered
it by inserting a quic_frame in qcc.lfctl.frms just before purging it in
qc_release(), using the following snippet:
    struct quic_frame *frm;

    /* forge a dummy MAX_DATA frame and queue it on the flow-control
     * frame list so that qc_release() has something to purge */
    frm = pool_zalloc(pool_head_quic_frame);
    LIST_INIT(&frm->reflist);
    frm->type = QUIC_FT_MAX_DATA;
    frm->max_data.max_data = 0;
    LIST_APPEND(&qcc->lfctl.frms, &frm->list);
This should fix github issue #1747.
This must be backported up to 2.6.
(cherry picked from commit 040955fb39)
Signed-off-by: Amaury Denoyelle <adenoyelle@haproxy.com>
The CLI applet processes one request after another. Thus, when several
requests are pipelined, it is important to notify that it won't consume
remaining outgoing data while it is processing a request. Otherwise, the
applet may be woken up in a loop. For instance, it may happen with the HTTP
client while we are waiting for the server response, if a shutr is received.
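A hedged sketch of the CLI-side calls, assuming the 2.6 applet helpers that
set and clear SE_FL_WONT_CONSUME (check the exact names in
include/haproxy/applet.h):

    applet_wont_consume(appctx); /* busy processing one request: do not
                                  * wake us up for pipelined input */
    /* ... process the current request ... */
    applet_will_consume(appctx); /* ready to read the next request */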
This patch must be backported to all supported versions after an observation
period. However, a massive refactoring was performed in 2.6, so for 2.5 and
below the patch will have to be adapted. Note also that, AFAIK, the bug can
only be triggered by the HTTP client for now.
(cherry picked from commit 4167e05002)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
In the .chk_snd applet callback function, we must not wake up an applet if
the SE_FL_WONT_CONSUME flag is set. Indeed, if an applet explicitly
specifies that it will not consume any outgoing data, it is useless to wake
it up when more data are sent. Note the applet may still be woken up for
another reason; in this case the SE_FL_WONT_CONSUME flag will be removed.
It is the applet's responsibility to set it again, if necessary.
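A hedged sketch of the .chk_snd guard (flag-test helper as recalled from
the 2.6 stconn API):

    if (se_fl_test(appctx->sedesc, SE_FL_WONT_CONSUME))
        return;  /* the applet declared it won't read more output data */
    appctx_wakeup(appctx);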
This patch must be backported to 2.6 after an observation period. On earlier
versions, the bug probably exists too. However, because a massive
refactoring was performed in 2.6, the patch will have to be adapted and
carefully reviewed/tested if it is backported.
(cherry picked from commit 04f03e15c3)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
When a TCP content ruleset is evaluated, we stop waiting for more data if
the inspect-delay is reached, if there is a read error, or if we know no
more data will be received. This last point is only valid for ACLs; an
action may decide to yield for another reason. For instance, in the SPOE,
the "send-spoe-group" action yields while the agent response has not been
received. Thus, an action call is now final only when the inspect-delay is
reached or there is a read error, and an action may still yield if the
buffer is full or if the CF_EOI flag is set.
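A hedged sketch of the resulting "final call" condition for actions
(variable names illustrative; CF_READ_ERROR and tick_is_expired() are
haproxy's):

    /* an action call is final only on read error or inspect-delay
     * expiration; a full buffer or CF_EOI no longer forces it */
    if ((chn->flags & CF_READ_ERROR) ||
        tick_is_expired(inspect_exp, now_ms))
        opt |= SMP_OPT_FINAL;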
This patch could be backported to all supported versions.
(cherry picked from commit d1c6cfe45a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
When the MUX transfers a big amount of data to the client, the transport
layer may reject some of them because of the congestion controller
limit. Frames built by the MUX are thus dropped, even though the stream
data they carry is kept in buffers for future new frames.
Thus, the MUX is required to free the rejected frames. This fixes a memory
leak which may grow with large data transfers.
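A hedged sketch of the cleanup after a partial send (list and pool names
follow the snippet style used elsewhere in this changelog; the exact
function is in the QUIC MUX send path):

    struct quic_frame *frm, *frmback;

    /* frames rejected by the transport: release them, the stream data
     * remains buffered and will be re-framed on the next attempt */
    list_for_each_entry_safe(frm, frmback, &frms, list) {
        LIST_DELETE(&frm->list);
        pool_free(pool_head_quic_frame, frm);
    }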
It should be backported to 2.6 after it has been tested and validated.
(cherry picked from commit 43c090c1ed)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The flow control enforced at the connection level is incorrectly
calculated, with a risk of exceeding the limit. In most cases, this results
in a segfault produced by a BUG_ON which is there to catch this kind of
error. If not compiled with DEBUG_STRICT, this should result in the
connection being closed by the client due to the flow-control overflow.
The problem is encountered when the transferred payload is big enough to
fill the transport congestion window. In this case, some data are rejected
by the transport layer and kept by the MUX to be re-emitted later. However,
these preserved data are not counted against the connection flow control
when resubmitted, which gradually amplifies the gap between expected and
real consumed flow control.
To fix this, handle the flow control at the connection level in the same
way as at the stream level. A new field qcc.tx.offsets is incremented as
soon as data are transferred into stream TX buffers. The field
qcc.tx.sent_offsets is preserved to count bytes handled by the transport
layer and to stop the MUX transfer if the limit is reached.
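A minimal sketch of how the two counters are meant to evolve (field names
from the paragraph above; the surrounding code is elided):

    /* when data are moved into a stream's TX buffer: counted once,
     * this is what connection flow control is checked against */
    qcc->tx.offsets += xfer;

    /* when the transport layer actually accepts emitted bytes: used
     * to stop the MUX transfer once the advertised limit is reached */
    qcc->tx.sent_offsets += sent;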
As already stated, this bug can occur during transfers with enough
emitted data, over multiple streams. When using a single stream, the
flow control at the stream level hides it.
The BUG_ON crash is reproduced systematically with the quiche client:
$ quiche-client --no-verify --http-version HTTP/3 -n 10000 https://127.0.0.1:20443/10K
This must be backported up to 2.6 when confirmed to work as expected.
This should fix github issue #1738.
(cherry picked from commit b9e0640405)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This is the continuation of commit 5a0e7ca5d ("BUG/MINOR: cli/stats: add
missing trailing LF after JSON outputs"). The "show info json" command was
also missing the trailing LF. It's constructed exactly like "show stat
json", in that it dumps a series of fields without any LF. The difference
however is that for the stats output, everything was enclosed in an array
which required an LF *after* the closing bracket, while here there's no
such array, so we have to emit the LF after the loop.
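A one-line sketch of the fix, assuming haproxy's chunk printf helpers and
an output chunk named out (illustrative):

    chunk_appendf(out, "\n");  /* terminate "show info json" with a LF */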
That makes the two functions a bit inconsistent; it's quite annoying, but
making them better would require sending one LF per line in the stats
output, which is not particularly interesting.
Given that it took 5+ years to spot that this code wasn't working as
expected it doesn't seem worth investing much time trying to refactor
it to make it look cleaner at the risk of breaking other obscure parts.
(cherry picked from commit c6b7a97e54)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Leonhard Wimmer reported an interesting bug in github issue #1742.
Servers in disabled proxies that are configured for resolution are still
subscribed to DNS resolutions, but the LB algorithms are not initialized
at all since the proxy is disabled, so when the server state changes,
attempts to update its status cause a crash when the server's weight
is recalculated via a division by the proxy's total weight, which is zero.
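A hedged sketch of the guard in the resolution setup path (the flag matches
the note below; placement is illustrative):

    if (px->flags & PR_FL_DISABLED) {
        /* the proxy's LB data was never initialized: do not subscribe
         * its servers to DNS resolution */
        return 0;
    }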
This should be backported to all versions. Beware that before 2.5 or
so, there's no PR_FL_DISABLED flag; instead px->disabled should be
used (2.3-2.4) or PR_STSTOPPED for older versions.
Thanks to Leonhard for his report and quick test!
(cherry picked from commit 9b46fb4cca)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Patrick Hemmer reported that we have a bug in the CLI commands
"show stat json" and "show schema json" in that they forget the trailing
LF that's required to mark the end of the response. This has been the
case since the introduction of the feature in 1.8-dev1 by commit 6f6bb380e
("MEDIUM: stats: Add show json schema"), so this fix may be backported to
all versions.
(cherry picked from commit 5a0e7ca5d0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The frame type values changed during the HTTP/3 specification process.
Adjust them to reflect the final RFC 9114 status.
Concretely, the types of the GOAWAY and MAX_PUSH_ID frames have been
adjusted. The impact of this bug is limited, as these frames are currently
not handled by haproxy and are ignored.
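For reference, the values per RFC 9114 which the patch aligns on (macro
names as used in haproxy's h3 code, from memory):

    #define H3_FT_GOAWAY      0x07
    #define H3_FT_MAX_PUSH_ID 0x0d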
This can be backported up to 2.6.
(cherry picked from commit c715eb7898)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The info field in the log message may change. For instance, on FreeBSD, a
"broken pipe" is reported. Thus, the expected log message must be more
generic.
(cherry picked from commit af936762d0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This reg-test has been broken for a while. It was simplified to be
functional again. Now, it only tests email alerts.
(cherry picked from commit 52912579ee)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
There is no server for email alerts, so the trace messages must be adapted
to handle this case. Information related to the server is now skipped for
email alerts and an "[EMAIL]" prefix is used.
This patch must be backported as far as 2.4.
(cherry picked from commit 4f1825c5db)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Email alerts are based on health-checks but have no server. Thus, in the
__trace() function, responsible for writing a trace message, we must be
prepared to have no server and thus no proxy.
This patch must be backported as far as 2.4.
(cherry picked from commit e3b2574796)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The health-check attached to an email alert has no type. This is
unexpected, and since 2.6 it is important because we rely on it to know the
application type in front of a connection at the stream-connector
level. Because the object type is not set, the SE descriptor is not properly
initialized, leading to a segfault when a connection to the SMTP server is
established.
This patch must be backported to 2.6 and may be backported as far as
2.0. However, it is only an issue for 2.6 and above.
(cherry picked from commit 58e3501910)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Benoit Dolez reported that gcc-4.4 emits several "may be used
uninitialized" warnings around places where there are BUG_ON()
or ABORT_NOW(). The reason is that __builtin_unreachable() was
introduced in gcc-4.5 thus older ones do not know that the code
after such statements is not reachable.
This patch solves the problem by replacing the statement with
an infinite loop on older versions. The compiler knows that the
code following it cannot be reached, and this is quite cheap
(2 to 4 bytes depending on architectures). It even reduces the
code size a little bit as the compiler doesn't have to optimize
for branches that do not exist.
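A simplified sketch of the portability shim (haproxy's real macro is
my_unreachable() in include/haproxy/compiler.h; the exact version test may
differ):

    #if defined(__GNUC__) && \
        (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
    #define my_unreachable() __builtin_unreachable()
    #else
    /* older gcc: an infinite loop also tells the optimizer this
     * point is never reached, for only 2-4 bytes of code */
    #define my_unreachable() do { while (1); } while (0)
    #endif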
This may be backported to older versions.
(cherry picked from commit 7d318ed8cc)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This reg-test was backported as far as 2.0. Thus, extend supported versions
accordingly.
This patch must be backported as far as 2.0.
(cherry picked from commit fdf693477a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This reg-test was backported as far as 2.0. Thus, extend supported versions
accordingly.
This patch must be backported as far as 2.0.
(cherry picked from commit 3d1da9a440)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Commit 9a99e5478 ("BUG/MINOR: ssl_ckch: Dump CRL transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
(cherry picked from commit 88041b35c3)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Commit 5a2154bf7 ("BUG/MINOR: ssl_ckch: Dump CA transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.5.
(cherry picked from commit 677cb4fa91)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Commit 3e94f5d4b ("BUG/MINOR: ssl_ckch: Dump cert transaction only once if
show command yield") introduced a regression leading to a build error
because of a possible uninitialized value. It is now fixed.
This patch must be backported as far as 2.2.
(cherry picked from commit d1d2e4dfe5)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The .next_ckchi_link field must be initialized to NULL instead of
.next_ckchi in the cli_parse_commit_crlfile() function. Only
'.next_ckchi_link' is used in the I/O handler.
This patch must be backported as far as 2.5 with some adaptations for the 2.5.
(cherry picked from commit f814c4aa98)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
When loaded SSL certificates are displayed via the "show ssl cert" command,
the in-progress transaction, if any, is also displayed. However, if the
command yields, the transaction is displayed again and again.
To fix the issue, the old_ckchs field is used to remember that the
transaction was already displayed.
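A hedged sketch of the yield guard in the I/O handler (the ctx field is the
one named above; the dump helper is hypothetical):

    if (!ctx->old_ckchs && ckchs_transaction.old_ckchs) {
        dump_cert_transaction(ctx);            /* hypothetical helper */
        ctx->old_ckchs = ckchs_transaction.old_ckchs; /* dumped once */
    }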
This patch must be backported as far as 2.2.
(cherry picked from commit 3e94f5d4b6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
When loaded CA files are displayed via the "show ssl ca-file" command, the
in-progress transaction, if any, is also displayed. However, if the
command yields, the transaction is displayed again and again.
To fix the issue, the old_cafile_entry field is used to remember that the
transaction was already displayed.
This patch must be backported as far as 2.5.
(cherry picked from commit 5a2154bf7c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>