The 'capture' action must be placed after the 'allow' action.
This patch could be backported as far as 2.5.
(cherry picked from commit d9d36b8b6b)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 47192fe67fb71fd27ef4f9c9d3427aa706462051)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
There is a bug in the b_slow_realign() function when wrapped output data are
copied into the swap buffer: the block1 and block2 sizes are inverted, so
blocks with the wrong size are copied. This leads to data mixing if the first
block is in reality larger than the second one, or to a copy of data outside
the buffer if the first block is smaller than the second one.
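A minimal sketch of the corrected copy, assuming a simplified buffer layout (illustrative names, not HAProxy's actual `struct buffer` API):

```c
#include <stddef.h>
#include <string.h>

/* A buffer whose output data starts at <head> and spans <data> bytes,
 * possibly wrapping past the end of the <size>-byte storage area. */
struct sbuf {
    char  *area;
    size_t size;
    size_t head;
    size_t data;
};

/* Realign the wrapped data to the start of the area through <swap>.
 * block1 is the tail part (from head to the end of the area), block2
 * the wrapped part at the start; inverting the two sizes copies the
 * wrong byte counts and mixes the data. */
void sbuf_realign(struct sbuf *b, char *swap)
{
    size_t block1 = b->data;
    size_t block2 = 0;

    if (b->head + b->data > b->size) {
        block1 = b->size - b->head;   /* from head to end of area */
        block2 = b->data - block1;    /* wrapped to the start */
    }
    memcpy(swap, b->area + b->head, block1);
    memcpy(swap + block1, b->area, block2);
    memcpy(b->area, swap, b->data);
    b->head = 0;
}
```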
The bug was introduced when the buffer API was refactored in 1.9. It was
found by a code review and seems never to have been triggered in almost 5
years. However, we cannot exclude that it is responsible for some unresolved bugs.
This patch should fix issue #1978. It must be backported as far as 2.0.
(cherry picked from commit 61aded057d)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 4a048c13f5ec3bcd060c8af955fe51694400b69d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
When an HTTP sample fetch is evaluated, a prefetch is performed to check that
the channel contains a valid HTTP message. If the HTTP analysis has not
already started, some info is filled in.
This may be an issue when an error is returned before the response analysis
and http-after-response rules are used, because the original HTTP txn status
may be overwritten. For instance, with the following configuration:
listen l1
    log global
    mode http
    bind :8000
    log-format ST=%ST
    http-after-response set-status 400
    #http-after-response set-var(res.foo) status
A "ST=503" should be reported in the log messages, independently of the first
http-after-response rule. The same must happen if the second rule is
uncommented. However, for now, a "ST=400" is logged.
To fix the bug, during the prefetch, the HTTP txn status is only set if it
is undefined (-1). This way, we are sure the original one is never lost.
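The fix can be sketched with a simplified txn structure (hypothetical names): the prefetch only fills the status while it is still undefined (-1), so a status set earlier on an error path (e.g. a 503) is never overwritten.

```c
struct txn_sketch {
    int status;   /* -1 until a status is known */
};

/* only set the status during prefetch when it is still undefined */
void prefetch_status(struct txn_sketch *txn, int parsed_status)
{
    if (txn->status == -1)
        txn->status = parsed_status;
}
```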
This patch should be backported as far as 2.2.
(cherry picked from commit 31850b470a)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit a9f36628395b012cef7f9efddaf90954b68ff167)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
sc-inc-gpc() learned to use arrays in 2.5 with commit 4d7ada8f9 ("MEDIUM:
stick-table: add the new arrays of gpc and gpc_rate"), but the error
message says "sc-set-gpc" instead of "sc-inc-gpc". Let's fix this to
avoid confusion.
This can be backported to 2.5.
(cherry picked from commit 20391519c3)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 59bf319279b1457c6f01b160764e79c27a5808c9)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The features list that appears in -vv appears in a random order, which
always makes it a pain to look for certain features. Let's sort it.
(cherry picked from commit 848362f2d2)
[wt: applied to Makefile]
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit a99f189732d6f2c1c2e99cf39d4b3a17b4e040c8)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The BUILD_FEATURES string was created too early to inherit implicit
additions. This could make the features list report that some features
were disabled while they had later been enabled. Better make it a macro
that is interpreted where needed based on the current state of each
option.
(cherry picked from commit 39d6c34837)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 650959acbecc2a607629fd905a39e0689a02ec92)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Remove ABORT_NOW() on remote unidirectional stream closure. This is
required to ensure our implementation is flexible enough not to fail on
unknown stream types.
Note that for the moment the MAX_STREAMS_UNI flow-control frame is never
emitted. This should be unnecessary for HTTP/3, which has a limited usage
of unidirectional streams, but may be required if other application
protocols are supported in the future.
ABORT_NOW() was triggered by s2n-quic, which opens an unknown
unidirectional stream with greasing. This was detected by the QUIC interop
runner for the http3 testcase.
This must be backported up to 2.6.
(cherry picked from commit 9107731358)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 1ac095486711084895763fe026bd9186f3415bd6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The same change was already performed for the cli. The stats applet and the
prometheus exporter are also concerned. Both use the stats API and rely on
pool functions to get the total pool usage in bytes. pool_total_allocated()
and pool_total_used() must return a 64-bit unsigned integer to avoid any
wrapping around 4G.
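The wrap is easy to see with a hypothetical simplified counter: total pool usage is a product of object size and allocation count, which exceeds 32 bits long before memory itself is a concern.

```c
#include <stdint.h>

/* hypothetical simplified total: widen before multiplying, so the
 * product is computed in 64 bits instead of wrapping at 4G */
uint64_t pool_total_bytes(uint32_t allocated, uint32_t size)
{
    return (uint64_t)allocated * size;
}
```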
This may be backported to all versions.
(cherry picked from commit c960a3b60f)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit b174d82dff11d7fb67e9a7f53c20a658f23dd9e7)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
As stated in RFC 9113 #8.1, a HEADERS frame with the ES flag set that carries
an informational status code is malformed. However, there was no test on this
condition.
On 2.4 and higher, it is hard to predict the consequences of this bug because
the end of the message is only reported with a flag. But on 2.2 and lower, it
leads to a crash, because there is an unexpected extra EOM block at the end
of an interim response.
Now, when an ES flag is detected on a HEADERS frame for an interim message, a
stream error is sent (RST_STREAM/PROTOCOL_ERROR).
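The added condition, in simplified form (the flag value is the END_STREAM bit defined by RFC 9113):

```c
#define H2_F_END_STREAM 0x01   /* ES flag on HEADERS frames */

/* returns 1 if an interim HEADERS frame is valid, 0 if malformed:
 * per RFC 9113 #8.1, a 1xx status must not carry END_STREAM */
int h2_check_interim_headers(int status, unsigned char flags)
{
    if (status >= 100 && status < 200 && (flags & H2_F_END_STREAM))
        return 0;   /* reject with RST_STREAM/PROTOCOL_ERROR */
    return 1;
}
```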
This patch should solve the issue #1972. It should be backported as far as
2.0.
(cherry picked from commit 827a6299e6)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit ebfae006c6b5de1d1fe0cdd51847ec1e39d5cf59)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. A stream is removed from the list once a full HTTP
request has been parsed. This is required to implement the http-request
timeout.
In case a stream is deleted before receiving a full HTTP request, it must
also be removed from <qcc.opening_list>. This was not the case in the first
implementation but has been fixed by the following patch:
  641a65ff3c
  BUG/MINOR: mux-quic: remove qcs from opening-list on free
This means that now a stream can be deleted from the list in two
different functions. Sadly, as LIST_DELETE was used in both cases,
nothing prevented a double-deletion from the list, even though
LIST_INLIST was used. Both calls are replaced with LIST_DEL_INIT which
is idempotent.
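The reason LIST_DEL_INIT is safe to call twice can be shown with a minimal re-implementation of the list primitives, in the spirit of HAProxy's list.h (simplified here): after unlinking, the node is pointed back at itself, so a second deletion only rewrites the node's own pointers instead of corrupting its former neighbors.

```c
struct list { struct list *n, *p; };

#define LIST_INIT(l)     do { (l)->n = (l)->p = (l); } while (0)
#define LIST_DELETE(l)   do { (l)->n->p = (l)->p; (l)->p->n = (l)->n; } while (0)
#define LIST_DEL_INIT(l) do { LIST_DELETE(l); LIST_INIT(l); } while (0)
#define LIST_INLIST(l)   ((l)->n != (l))

/* append <el> at the tail of <head> */
void list_append(struct list *head, struct list *el)
{
    el->p = head->p;
    el->n = head;
    head->p->n = el;
    head->p = el;
}
```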
This bug causes memory corruption which results in most cases in a segfault,
most of the time outside of the mux-quic code itself. It was first found by
gabrieltz, who reported it in github issue #1903. Big thanks to him for his
testing.
This bug also causes failures on several 'M' transfer testcases of the QUIC
interop-runner. The s2n-quic client is particularly useful in this case, as
the segfault triggers were most of the time on the LIST_DELETE operation
itself. This is probably due to its encapsulation of the HEADERS frame with
the FIN bit delayed in a following empty STREAM frame.
This must be backported wherever the above patch is, up to 2.6.
(cherry picked from commit 15337fd808)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 151737fa818ffb37c8eb1706ef16722b6dd68f8b)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Performance profiling on a 48-thread machine showed a lot of time spent
in pool_free(), precisely at the point where pool->limit was retrieved.
And the reason is simple. Some parts of the pool_head are heavily updated
only when facing a cache miss ("allocated", "used", "needed_avg"), while
others are always accessed (limit, flags, size). The fact that both
entries were stored into the same cache line makes it very difficult for
each thread to access this precious info even when working with its own
cache.
By just splitting the fields apart, a test on QUIC (which stresses pools
a lot) more than doubled performance from 42 Gbps to 96 Gbps!
Given that the patch only reorders fields and addresses such a significant
contention, it should be backported to 2.7 and 2.6.
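The idea behind the split can be sketched with hypothetical field names: the read-mostly fields consulted on every pool_free() get their own cache line, away from the counters that cache misses keep updating, so readers are no longer stalled by other threads' writes to the same line.

```c
#include <stdalign.h>
#include <stddef.h>

#define CACHELINE 64

struct pool_head_sketch {
    /* read-mostly: looked up on every free */
    unsigned int limit;
    unsigned int flags;
    unsigned int size;

    /* heavily written on cache misses, pushed to the next line */
    alignas(CACHELINE) unsigned int allocated;
    unsigned int used;
    unsigned int needed_avg;
};
```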
(cherry picked from commit 4dd33d9c32)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 7d1b6977199fb663f39c928f3f159fd078d1b30d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
It is possible to block the stats applet if a line exceeds the free space in
the response buffer while the buffer is empty. It is only an issue in HTTP
because of the HTX overhead and, AFAIK, only with json output.
In this case, the applet is unable to write anything in the response buffer
and waits for some free space to proceed further. On the other hand, because
the response channel is empty, nothing is sent and thus no space can be
freed. At this stage, the stream and the applet are blocked waiting for the
other side.
To avoid this situation, we must take care to not dump a line exceeding the
free space in the HTX message. It means we cannot rely anymore on the global
trash buffer. At least, not directly. The trick is to use a local trash
buffer, mapped on the global one but with a different size. We use b_make()
to do so. The local trash buffer is thread local to avoid any concurrency
issue.
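The trick can be sketched with a simplified buffer type (illustrative names; the real code uses HAProxy's b_make() and a thread-local `struct buffer`): the local buffer aliases the global trash storage but advertises a smaller size, so the dumping code naturally stops before exceeding the free space computed from the HTX message.

```c
#include <stddef.h>

struct sbuffer {
    char  *area;
    size_t size;
    size_t data;
};

struct sbuffer sb_make(char *area, size_t size, size_t data)
{
    struct sbuffer b = { area, size, data };
    return b;
}

static char global_trash[16384];   /* __thread in the actual fix */

/* map a local view of at most <max> bytes onto the global trash area */
struct sbuffer local_trash_view(size_t max)
{
    if (max > sizeof(global_trash))
        max = sizeof(global_trash);
    return sb_make(global_trash, max, 0);
}
```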
It is a valid fix. However, it would be good to review the internal API of
the stats applet so that it does not rely on a global variable.
This patch should solve issue #1873. It must be backported at least as far as
2.6. Older versions must be evaluated first, but it is probably possible to
hit this bug with long proxy/server names.
(cherry picked from commit a8b7684319)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit 86e057ec4228ec7188db46969c8e91a4c51481e3)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
testdir can be a very long path since it depends on the source directory
path. This can lead to failures during tests when the UNIX socket path
exceeds the maximum allowed length of 97 characters, as defined in
str2sa_range().
16:48:14 [ALERT] *** h1 debug| (10082) : config : parsing [/tmp/haregtests-2022-12-17_16-47-39.4RNzIN/vtc.4850.5d0d728a/h1/cfg:19] : 'bind' : socket path 'unix@/local/p4clients/pkgbuild-bB20r/workspace/build/HAProxy/HAProxy-2.7.x.68.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/HAProxy-2.7.x/src/reg-tests/lua/srv3' too long (max 97)
Also, it is not advisable to create a UNIX socket in the actual source
directory; instead, use a dedicated temporary directory created for test
purposes.
This should be backported to 2.6.
(cherry picked from commit 103966930a)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit ec0b6777d6ec17e7b208e29ad5dbf4cd988c2a39)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The test still needs more start conditions, like ulimit checks, and less
strict value checks.
To be backported where it was activated (as far as 2.5)
(cherry picked from commit 7332a123c1)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit a384f2862b3ad162d0b5ba92448c9c2b76836709)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
There is a possible segfault when accessing qc->timer_task in
quic_conn_io_cb() without testing it. It seems however very rare, as it
requires several conditions to be met:
* quic_conn must be in CLOSING state after having sent a
  CONNECTION_CLOSE, which frees qc.timer_task
* the quic_conn handshake must still be in progress: in fact, qc.timer_task
  is accessed on this path because of the anti-amplification limit being
  lifted.
I was thus far unable to trigger it, but benchmarking tests seem to have
fired it, with the following backtrace as a result:
#0 _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
195 state = _HA_ATOMIC_OR_FETCH(&t->state, f);
[Current thread is 1 (Thread 0x7fc714ff1700 (LWP 14305))]
(gdb) bt
#0 _task_wakeup (f=4096, caller=0x5620ed004a40 <_.46868>, t=0x0) at include/haproxy/task.h:195
#1 quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
#2 0x00005620ecedab6e in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:596
#3 0x00005620ecedb63c in process_runnable_tasks () at src/task.c:861
#4 0x00005620ecea971a in run_poll_loop () at src/haproxy.c:2913
#5 0x00005620ecea9cf9 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3102
#6 0x00007fc773c3f609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007fc77372d133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) up
#1 quic_conn_io_cb (t=0x7fc5d0e07060, context=0x7fc5d0df49c0, state=<optimized out>) at src/quic_conn.c:4393
4393 task_wakeup(qc->timer_task, TASK_WOKEN_MSG);
(gdb) p qc
$1 = (struct quic_conn *) 0x7fc5d0df49c0
(gdb) p qc->timer_task
$2 = (struct task *) 0x0
This fix should be backported up to 2.6.
(cherry picked from commit 5ac6b3b125)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit e8d7fdf498e37ced00683159ca2797018e93b37c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The json output type is a lot more verbose than the other output types.
Because of this and the increasing number of metrics implemented within
haproxy, we are starting to reach the max bufsize limit (defaults to 16k)
when dumping stats to json since 2.6-dev1.
This results in stats output being truncated with
"[{"errorStr":"output buffer too short"}]"
This was reported by Gabriel in #1964.
Thanks to "MINOR: stats: introduce stats field ctx", we can now perform
multipart dumping (using multiple buffers) in case a single buffer is not big
enough to hold the complete stat line.
For now, only stats_dump_fields_json() makes use of it as it is by
far the most verbose stats output type.
(csv, typed and html outputs should be good for a while and may use this
capability if the need arises in some distant future)
--
It could be backported to 2.6 and 2.7.
This commit depends on:
- MINOR: stats: provide ctx for dumping functions
- MINOR: stats: introduce stats field ctx
(cherry picked from commit 42b18fb645)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit e3f1715b82319eafada3dd51d4d468546986ec1d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Add a new value in stats ctx: field.
Implement field support in line dumping parent functions
stats_print_proxy_field_json() and stats_dump_proxy_to_buffer().
This will allow child dumping functions to support partial line dumping
when needed, i.e. when the dumping buffer is exhausted: do a partial send
and wait for a new buffer to finish the dump. Thanks to the field ctx, the
function can start dumping where it left off on the previous (unterminated)
invocation.
(cherry picked from commit 5594184190)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit 84f6ea521b4f92779b15d5cd4de6539462dba54a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This is a minor refactor to allow stats_dump_info_* and stats_dump_fields_*
functions to directly access stat ctx pointer instead of explicitly
passing stat ctx struct members to them.
This will allow dumping functions to benefit from upcoming ctx updates.
(cherry picked from commit e76a027b0b)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit 298735804e2412d6876b720066e20f767b5f301d)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The certificate chain that gets passed in the SSL_CTX through
SSL_CTX_set1_chain has its reference counter increased by OpenSSL
itself. But since the ssl_sock_load_cert_chain function might create a
brand new certificate chain if none exists in the ckch_data
(sk_X509_new_null), then we ended up returning a new certificate chain
to the caller that was never destroyed.
This patch can be backported to all stable branches but it might need to
be reworked for branches older than 2.4 because of commit ec805a32b9
that refactored the modified code.
(cherry picked from commit 4cf0d3f1e8)
[wla: struct member data was called ckch before]
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit 3fc061bf30b45bbcab66b8bd8b38ce7578bc9ae6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
If an error is triggered during H3 HEADERS parsing, the buffer allocated for
HTX data is not freed.
To prevent this memleak, all return paths have been centralized using
goto statements.
Also, as a small bonus, offer_buffers() is no longer called if the buffer is
not freed because the sedesc has taken it. However this change probably has
no noticeable effect, as dynamic buffer management is currently not
functional.
This should be backported up to 2.6.
(cherry picked from commit 788fc05401)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit e1c23e5a1662ea3c871215dc579d2f3e20d90c12)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Cookie headers are treated specially to merge multiple occurrences into a
single HTX header. This is handled in an if-block condition inside the
'while (1)' loop for header parsing. The length value of the ist
representing the cookie header is set to -1 by http_cookie_register(). The
problem is that a continue statement is then used without incrementing
'hdr_idx' to pass on to the next header.
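A skeleton of the loop and the fix, in simplified form (illustrative names, not the real H3 parser): the cookie branch must advance hdr_idx before its continue statement, or the loop keeps re-examining the same slot.

```c
#include <string.h>

struct hdr { const char *name; int len; };

/* returns the number of loop iterations used to consume <count> headers;
 * the iteration cap stands in for real-world loop protection */
int parse_hdrs(struct hdr *list, int count)
{
    int hdr_idx = 0, iters = 0;

    while (hdr_idx < count && iters < 1000) {
        iters++;
        if (list[hdr_idx].len == 6 &&
            strncmp(list[hdr_idx].name, "cookie", 6) == 0) {
            list[hdr_idx].len = -1;  /* merged into the cookie list */
            hdr_idx++;               /* the fix: advance before continue */
            continue;
        }
        hdr_idx++;                   /* regular header */
    }
    return iters;
}
```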
This issue was revealed by the introduction of commit:
  commit d6fb7a0e0f
  BUG/MEDIUM: h3: reject request with invalid header name
Before the aforementioned patch, the bug was hidden: on the next while
iteration, all isteq() invocations won't match with the cookie header length
now set to -1. htx_add_header() fails silently because the length is
invalid. hdr_idx is finally incremented, which allows parsing to proceed
normally with the next header.
Now, a cookie header with length -1 does not pass the test on header-name
conformance introduced by the above patch. Thus, a spurious RESET_STREAM is
emitted. This behavior was reported on the mailing list by Shawn Heisey, who
found out that browsers disabled H3 usage due to the RESET_STREAM received.
Big thanks to him for his testing on the master branch.
This issue is simply resolved by incrementing hdr_idx before the continue
statement. It could have been detected earlier if the htx_add_header()
return value had been checked. This will be the subject of a dedicated
commit outside of the backport scope.
This must be backported up to 2.6.
(cherry picked from commit 19942e3859)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit fda9a5e4351d9b11bc2c1562d86a2da292443298)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This clarifies that LGPL is also permitted for the wurfl.h dummy file.
Should be backported where relevant.
Signed-off-by: Luca Passani <luca.passani@scientiamobile.com>
(cherry picked from commit 6d6787ba7c)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit d3120774e122280589a1453b640ea7bcc7d60108)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Use qcs_free() on allocation failure in qcs_new(). This ensures that all
qcs content is properly deallocated and prevents memleaks. Most notably,
the qcs instance is now removed from the qcc tree.
This bug is labelled as MINOR because it occurs only on qcs allocation
failure due to memory exhaustion.
This must be backported up to 2.6.
(cherry picked from commit 4b167006fd)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 164acf2d8a03ad068e7ca1de0964f5f0f07375df)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
qcs instances for bidirectional streams are inserted in
<qcc.opening_list>. A stream is removed from the list once a full HTTP
request has been parsed. This is required to implement the http-request
timeout.
If a qcs instance is freed before receiving a full HTTP request, it must
be removed from <qcc.opening_list>. Otherwise a segfault will occur in
qcc_refresh_timeout() when accessing a dangling pointer.
For the moment this bug was not reproduced in production. This is because
there exist only a few rare cases where a qcs is freed before HTTP request
parsing. However, as error detection will be improved on H3, this will
occur more frequently in the near future.
This must be backported up to 2.6.
(cherry picked from commit 641a65ff3c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 252b67c4722ff2d4131e7875879364087f27a2fa)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
qc_new_conn() is used to allocate a quic_conn instance and its various
internal members. If one allocation fails, quic_conn_release() is used to
clean things up.
For the moment, pool_zalloc() is used, which ensures that all content is
null. However, some members must be initialized to special values to be
able to use quic_conn_release() safely. This is the case for the quic_conn
lists and its tasklet.
Also, some quic_conn internal allocation functions were doing their own
cleanup on failure without resetting to NULL. This caused an issue with
quic_conn_release(), which also frees these members. To fix this, these
functions now only return an error without cleanup. It is the caller's
responsibility to free the allocated content, which is done via
quic_conn_release().
Without this patch, an allocation failure in qc_new_conn() would often
result in a segfault. This was reproduced easily using fail-alloc at 10%.
This should be backported up to 2.6.
(cherry picked from commit dbf6ad470b)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit d35d46916d8ff53b13c08862297f49b5d881d738)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
ML ref: https://www.mail-archive.com/haproxy@formilux.org/msg42934.html
We agreed to use "latest" images for development branches and fixed images
for stable branches.
Can be backported to 2.6.
(cherry picked from commit f5994fc692)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit e557ae9bac049e1a239510cc77c1812404c4d2ea)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
It was there because we only ran ASAN for clang; now there is no need for a
separate loop.
Can be backported to 2.6.
(cherry picked from commit 6dedeb70da)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit a468a38c3cc49b8d8876b05da534654134c38fda)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Ensure that if a request contains a content-length header, it matches the
total size of the following DATA frames. This is in conformance with the
HTTP/3 RFC 9114.
For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
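The check can be sketched with hypothetical names: the payload received over DATA frames is accumulated and compared against the announced content-length, as RFC 9114 requires.

```c
#include <stdint.h>

struct h3_msg_sketch {
    int64_t  content_len;     /* -1 when no content-length header */
    uint64_t data_received;   /* sum of DATA frame payload sizes */
};

/* returns 0 if consistent so far, -1 on a malformed message;
 * <fin> is non-zero once the request stream is finished */
int h3_check_body_len(const struct h3_msg_sketch *m, int fin)
{
    if (m->content_len < 0)
        return 0;
    if (m->data_received > (uint64_t)m->content_len)
        return -1;            /* more DATA than announced */
    if (fin && m->data_received != (uint64_t)m->content_len)
        return -1;            /* stream ended short of announced length */
    return 0;
}
```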
This must be backported up to 2.6. It relies on the previous commit :
MINOR: http: extract content-length parsing from H2
(cherry picked from commit d2c5ee665e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 43bb85f88d4a0273f90fa9d41ed52dbcb8c52abb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Extract the function h2_parse_cont_len_header() into the generic HTTP
module. This allows reusing it for all HTTP/x parsers. The function is now
available as http_parse_cont_len_header().
Most notably, this will be reused in the next bugfix for the H3 parser, as
it is necessary to check that the content-length header matches the length
of the DATA frames.
Thus, it must be backported to 2.6.
(cherry picked from commit 15f3cc4b38)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 76d3becee5c10aacabb5cb26b6776c00ca5b9ae6)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
RFC 9114 dictates several requirements for pseudo-header usage in H3
requests. Previously, only minimal checks were implemented. This patch
enforces all of the following requirements:
* reject request with undefined or invalid pseudo header
* reject request with duplicated pseudo header
* reject non-CONNECT request with missing mandatory pseudo header
* reject request with pseudo header after standard ones
For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
This must be backported up to 2.6.
(cherry picked from commit 7b5a671fb8)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit d2938a95c987534b40ebf3a7b51cadc4f3f60867)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Reject requests containing an invalid header name. This concerns every
header containing an uppercase letter or a non-HTTP-token character such as
a space.
For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.
Thanks to Yuki Mogi from FFRI Security, Inc. for having reported this.
This must be backported up to 2.6.
(cherry picked from commit d6fb7a0e0f)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The calculated maxconn could produce other values when compiled with
debug options.
Must be backported where 6b6f082 was backported (as far as 2.5).
(cherry picked from commit f98b3b1107)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 20bd4a8d1507e3ee6d52cc5af6c23a006b0e3a75)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Change the expected maxconn from 10000 to 11000 in automatic_maxconn.vtc.
To be backported only if the test fails; the value might be the right one
in previous versions.
(cherry picked from commit 2a225390eb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit e191844b64bdc894f424a6e30858c7c55d4fd7dc)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
In resolv_update_resolvers_timeout(), the resolvers task timeout is updated
by checking running and waiting resolutions. However, to find the next
wakeup date, the MIN() operator is used to compare ticks. Ticks must never
be compared with such operators; tick helper functions must be used to
properly handle the TICK_ETERNITY value. In this case, tick_first() must be
used instead of MIN().
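The difference is easy to demonstrate with a minimal re-implementation in the spirit of HAProxy's ticks.h: TICK_ETERNITY is encoded as 0, so MIN() wrongly picks "never" as the earliest date, while tick_first() treats an unset tick as "no constraint".

```c
#define TICK_ETERNITY 0
#define MIN(a, b) ((a) < (b) ? (a) : (b))

static inline int tick_isset(int expire)
{
    return expire != 0;
}

/* return the first expiration date of t1 and t2, ignoring unset ones */
static inline int tick_first(int t1, int t2)
{
    if (!tick_isset(t1))
        return t2;
    if (!tick_isset(t2))
        return t1;
    return (t1 - t2) <= 0 ? t1 : t2;
}
```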
It is an old bug but it is pretty visible since the commit fdecaf6ae4
("BUG/MINOR: resolvers: do not run the timeout task when there's no
resolution"). Because of this bug, the resolvers task timeout may be set to
TICK_ETERNITY, stopping periodic resolutions.
This patch should solve the issue #1962. It must be backported to all stable
versions.
(cherry picked from commit 819d48b14e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit d94ca04f965fd5a2ad7ee500b8bbf46acd722206)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Check if USE_OBSOLETE_LINK=1 was used so this test can run when ASAN is not
built, since ASAN requires this option.
For this test to work, the ulimit -n value must be big enough.
Could be backported at least to 2.5.
(cherry picked from commit 6b6f082969)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit b6bfe7b905a4fb8197c30db7fe937840506812af)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Set ulimit -n to 65536 to constrain the maxconn computation less.
Could be backported at least to 2.5.
(cherry picked from commit 2cb1493748)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit c7ff3f0419d8ddb09b633f8aa50c167e45cc081e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
With internal proxies using SSL activated (the httpclient for example),
the automatic computation of maxconn is wrong because these proxies are
always activated by default.
This patch fixes the issue by not counting these internal proxies in the
computation.
Must be backported as far as 2.5.
(cherry picked from commit 0adafb307e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit b1005c0ba1db639c15d4fee17820af40039c1894)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Check the maxconn computation with multiple -m parameters.
Broken with ASAN for now.
Could be backported as far as 2.2.
(cherry picked from commit 38c5b6ea97)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 8ffe3f24e889c8406cfd29eb6807cb4f45cfad25)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
A "Connection: close" header is added to responses to avoid any connection
reuse. This should avoid any "HTTP header incomplete" errors.
(cherry picked from commit e1b866a28a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 50339568f9aed04dda6955129e11f92164da30b7)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The output buffer is not zero-initialized. If we don't clear the reserved
bytes, fcgi requests sent to the backend will leak sensitive data.
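The fix in spirit, with a hypothetical simplified record header: every byte of the serialized header must be written explicitly, because the output buffer may still hold stale data from previous requests.

```c
#include <stdint.h>

/* serialize an 8-byte FastCGI-style record header into <out> */
void fcgi_write_header_sketch(uint8_t *out, uint8_t type, uint16_t id,
                              uint16_t content_len, uint8_t padding_len)
{
    out[0] = 1;                    /* FCGI_VERSION_1 */
    out[1] = type;
    out[2] = id >> 8;
    out[3] = id & 0xff;
    out[4] = content_len >> 8;
    out[5] = content_len & 0xff;
    out[6] = padding_len;
    out[7] = 0;                    /* reserved byte: clear, don't skip */
}
```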
This patch must be backported as far as 2.2.
(cherry picked from commit 2e6bf0a272)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit db03179fee55c60a92ce6b86a0f04dbb9ba0328b)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
"haproxy_backend_agg_server_status" and "haproxy_backend_agg_check_status"
were not referenced in promex README.
"haproxy_backend_agg_server_check_status" is also missing but it is a
deprecated metric. Thus, it is better to not reference it.
(cherry picked from commit 7edec90c00)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit e41897cad6400ca2a9de6d63af4ee7363563ac16)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
This patch introduces haproxy_backend_agg_check_status metric
as we wanted in 42d7c402d but with the right data source.
This patch could be backported as far as 2.4.
(cherry picked from commit e06e31ea3b)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit f0319e0f56581873f906f79dc218bf6f10b8f6c2)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
haproxy_backend_agg_server_check_status currently aggregates
haproxy_server_status instead of haproxy_server_check_status.
We deprecate this and create a new one,
haproxy_backend_agg_server_status to clarify what it really
does.
This patch could be backported as far as 2.4.
(cherry picked from commit 7d6644e689)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 2c0d7982e7612b2e7157170aa7109f20b780bb64)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
The lua httpclient cleanup can be called in 2 places, the
hlua_httpclient_gc() and the hlua_httpclient_destroy_all().
A LIST_DELETE() is performed to remove the hlua_hc struct of the list.
However, when the lua task ends and calls hlua_ctx_destroy(), it does a
LIST_DELETE() first, and then the gc tries to do a LIST_DELETE() again
in hlua_httpclient_gc(), provoking a crash.
This patch fixes the issue by doing a LIST_DEL_INIT() instead of
LIST_DELETE() in both cases.
Should fix issue #1958.
Must be backported where bb58142 is backported.
(cherry picked from commit 94dbfedec1)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit c177de37d8f68a9434530e4f5706efdaa2b934b5)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Commit b81483cf2 ("MEDIUM: da: update doc and build for new scheduler
mode service.") added a new directory to the Device Atlas dummy lib,
but this one is not cleaned during "make clean", causing build failures
sometimes when switching between compiler versions during development.
This should be backported to 2.6.
(cherry picked from commit 46676d44e0)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 3e815946e14487cc1318f3c78c7dd15c0c28de5c)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
During an early failure of the mworker mode, the mworker_cleanlisteners()
function is called and tries to clean up the peers. However, the peers are
in a semi-initialized state and will use NULL pointers.
The fix checks the variables before trying to use them.
Bug revealed in issue #1956.
Could be backported as far as 2.0.
(cherry picked from commit 035058e8bf)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 661557989e3a8d84d18c997ebdabb26146ebe8ad)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
When the mworker wait mode fails, it exits, but there is no error message
saying that it exits.
Add a message specifying that the error is non-recoverable.
Could be backported to 2.7 and possibly earlier branches.
(cherry picked from commit 40db4ae8bb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit bb0ab9833adf1c871143d8555fedbb9ec1823f8a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
Coverity raised a potential overflow issue in these new functions that
work on unsigned long long objects. They were added in commit 9b25982
"BUG/MEDIUM: ssl: Verify error codes can exceed 63".
This patch needs to be backported alongside 9b25982.
(cherry picked from commit e239e4938d)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
The CRT and CA verify error codes were stored in 6 bits each in the
xprt_st field of the ssl_sock_ctx, meaning that only error codes up to 63
could be stored. Likewise, the ca-ignore-err and crt-ignore-err options
relied on two unsigned long longs used as bitfields for all the ignored
error codes. On the latest OpenSSL 1.1.1 and with OpenSSL v3 and newer,
verify errors have exceeded this value, so these two storages must be
increased. The error codes are now stored on 7 bits each and the
ignore-err bitfields are replaced by a big enough array and dedicated bit
get and set functions.
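The replacement storage can be sketched with hypothetical names: a single 64-bit mask caps error codes at 63, so the ignore lists become an array of unsigned longs with dedicated get/set helpers.

```c
#include <limits.h>

#define MAX_SSL_ERR 256
#define ULONG_BITS  (CHAR_BIT * sizeof(unsigned long))

struct err_bits {
    unsigned long field[MAX_SSL_ERR / ULONG_BITS];
};

static inline void err_bits_set(struct err_bits *f, unsigned int code)
{
    f->field[code / ULONG_BITS] |= 1UL << (code % ULONG_BITS);
}

static inline int err_bits_get(const struct err_bits *f, unsigned int code)
{
    return !!(f->field[code / ULONG_BITS] & (1UL << (code % ULONG_BITS)));
}
```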
It can be backported on all stable branches.
[wla: let it be tested a little while before backport]
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit 9b25982716)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
peers-t.h uses "struct stktable" as well as STKTABLE_DATA_TYPES, which are
defined in stick-table-t.h. It worked by accident because stick-table-t.h
was always included first, but it could provoke build issues with EXTRA
code.
To be backported as far as 2.2.
(cherry picked from commit 46bea1c616)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
(cherry picked from commit 5c89a0c0484b706cfa10398be8539f39c7b311e9)
Signed-off-by: William Lallemand <wlallemand@haproxy.org>