mux-to-mux fast-forwarding will be added. To avoid mixing it with the
splicing and to simplify the commits, the kernel splicing support is
removed from the stconn. The CF_KERN_SPLICING flag is removed and the
support is no longer tested in process_stream().
In the stconn part, the rcv_pipe() callback function is no longer called.
Reg-tests scripts testing the kernel splicing are temporarily marked as
broken.
This regtest declares and uses 3 log backends, one of which has TCP syslog
servers declared in it while the other ones use UDP syslog servers.
Some tests aim at testing log distribution reliability by leveraging the
log-balance hash algorithm with a key extracted from the request URL, and
the dummy vtest syslog servers ensure that messages are sent to the
correct endpoint. Overall this regtest covers essential parts of the log
message distribution and log-balancing logic involved with log backends.
It also leverages the log-forward section to perform the TCP->UDP
translation required to test UDP endpoints since vtest syslog servers
only work in UDP mode.
Finally, we have some tests to ensure that the server queuing/dequeuing
and failover (backup) logics work properly, as sketched below.
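A minimal sketch of the pieces involved, assuming the development-time
"mode log"/"log-balance" syntax this commit describes (directive names
may have evolved before release):

    backend logs-udp
        mode log
        # hash a key extracted from the log message to pick a server
        log-balance hash
        server s1 udp@127.0.0.1:1514
        server s2 udp@127.0.0.1:1515

    log-forward tcp2udp
        # TCP syslog in, UDP syslog out, as used for the vtest servers
        bind 127.0.0.1:2514
        log 127.0.0.1:1514 local0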
This patch adds a hash of the Origin header to the cache's secondary key.
This makes it possible to store and manage responses that have a "Vary:
Origin" header in the cache when vary support is enabled.
This cannot be considered as a means to manage CORS requests though: it
only processes the Origin header and hashes the presented value without
any form of URI normalization.
This need was expressed by Philipp Hossner in GitHub issue #251.
Co-Authored-by: Philipp Hossner <philipp.hossner@posteo.de>
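A minimal sketch of a cache setup with vary processing enabled, which
this patch extends to "Vary: Origin" (names are illustrative):

    cache mycache
        total-max-size 64
        process-vary on

    frontend fe
        mode http
        bind :8080
        http-request cache-use mycache
        http-response cache-store mycache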
In the random-forwarding.vtc script, adding a "Content-Length: 0" header
in the successful response to the CONNECT request is invalid, but it may
also lead to a wrong check on the response. The "rxresp" directive doesn't
handle CONNECT responses. Thus "-no_obj" must be added instead, to be sure
the payload won't be retrieved or expected.
Added set-timeout for the frontend side of the session, so it can be used
to set custom per-client timeouts if needed. Also added the
cur_client_timeout sample fetch to retrieve the current client timeout.
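A hedged usage sketch (the matching condition and header are
illustrative):

    frontend fe
        mode http
        bind :8080
        # give slow endpoints a larger client timeout
        http-request set-timeout client 30s if { path_beg /slow }
        http-response set-header X-Client-Timeout %[cur_client_timeout]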
Prior to this commit, the "bytes" converter took only integer values as
arguments. After this commit, it can also take variable names as inputs.
This allows us to dynamically determine the offset/length and capture
them in variables. These variables can then be used with the converter.
Example use case: parsing a token present in a request header.
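A minimal sketch of that use case, assuming the variable-name argument
form introduced here (header and variable names are illustrative):

    http-request set-var(txn.off) req.hdr(x-token-offset)
    http-request set-var(txn.len) req.hdr(x-token-length)
    http-request set-var(txn.token) req.hdr(x-data),bytes(txn.off,txn.len)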
This patch implements the 'curves' keyword on server lines as well as
the 'ssl-default-server-curves' keyword in the global section.
It also adds the keyword on the server line in the ssl_curves reg-test.
These keywords allow the configuration of the curves list for a server.
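A short sketch of both forms (the curve list is illustrative):

    global
        ssl-default-server-curves X25519:P-256

    backend be
        server s1 192.0.2.1:443 ssl verify none curves X25519:P-256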
Based on the new, generic allocation infrastructure, a new sample
fetch fc_pp_tlv is introduced. It is an abstraction for existing
PPv2 TLV sample fetches. It takes any valid TLV ID as argument and
returns the value as a string, similar to fc_pp_authority and
fc_pp_unique_id.
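A hedged usage sketch (the TLV ID is illustrative; 224 is 0xE0, the
start of the PPv2 custom TLV range):

    frontend fe
        mode http
        bind :8080 accept-proxy
        http-request set-header X-PP-TLV %[fc_pp_tlv(224)]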
Bug was introduced by commit 26654 ("MINOR: ssl: add "crt" in the
cert_exts array").
When looking for a .crt directly in the cert_exts array, the
ssl_sock_load_pem_into_ckch() function will be called with an argument
which does not have its ".crt" extension anymore.
If "ssl-load-extra-del-ext" is used this is not a problem since we try
to add the ".crt" back when doing the lookup in the tree.
However, when directly using a ".crt" without this option, the lookup
for the file in the tree will fail.
The fix removes the "crt" entry from the array since it does not seem to
be really useful without a rework of all the lookups.
Should fix issue #2265.
Must be backported as far as 2.6.
This test instantiates two haproxy instances:
* the first one uses a reverse server with two binds, pub and priv
* the second one uses a reverse bind to initiate a connection to the priv
  endpoint
On startup, only the first haproxy instance is up. A client sends a
request to the pub endpoint and should receive an HTTP 503 as no
connection is available on the reverse server.
The second haproxy instance is then started. A delay of 3 seconds is
inserted to wait for the connection between the two LBs. Then a client
retries the request and this time should receive an HTTP 200 reusing the
bootstrapped connection.
This regtest is similar to the previous one, except that the optional
name argument is specified.
An extra haproxy instance is used as a gateway for clear/TLS as vtest
does not support TLS natively.
A first request is made with a name which does not match the idle
connection SNI. This must result in an HTTP 503. Then the correct
name is used, which must result in a 200.
Test support for reverse servers. This can be tested without the opposite
haproxy reversal support through a combination of VTC clients used to
emit HTTP/2 responses after connection.
This test ensures that we first get a 503 when connecting to a reverse
server with no idle connection. Then a dummy VTC client is connected to
act as a server. It is then expected that the same request succeeds
with a 200 this time.
Introduced in:
424981cde REGTEST: add ifnone-forwardfor test
b015b3eb1 REGTEST: add RFC7239 forwarded header tests
see also:
fbbbc33df REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+
Ben Kallus also noticed that we preserve leading zeroes on content-length
values. While this is totally valid, it would be safer to at least trim
them before passing the value, because a bogus server written to parse
using "strtol(value, NULL, 0)" could inadvertently take a leading zero
as a prefix for an octal value (e.g. "010" would be read as 8). While
there is not much that can be done to protect such servers in general
(e.g. lack of checks for overflows etc), at least it's quite cheap to
make sure the transmitted value is normalized and not taken for an octal
one.
This is not really a bug, rather a missed opportunity to sanitize the
input, but it is marked as a bug so that we don't forget to backport it
to stable branches.
A combined regtest was added to h1or2_to_h1c which already validates
end-to-end syntax consistency on aggregate headers.
The content-length header parser has its own dedicated function, in order
to take extreme care about invalid, unparsable, or conflicting values.
But there's a corner case in it, by which it stops comparing values
when reaching the end of the header. This has for a side effect that
an empty value or a value that ends with a comma does not deserve
further analysis, and it acts as if the header was absent.
While this is not necessarily a problem for the value ending with a
comma, as it will cause header folding and will disappear, it is a
problem for the first isolated empty header because this one will not
be reconstructed when the next ones are seen, and will be passed as-is
to the backend server. A vulnerable HTTP/1 server hosted behind haproxy
that would just use this first value as "0" and ignore the valid one
would then not be protected by haproxy and could be attacked this way,
taking the payload for an extra request.
In the field, the risk depends on the server. Most commonly used servers
already have safe content-length parsers, but users relying on haproxy
to protect a known-vulnerable server might be at risk (and the risk of
a bug even in a reputable server should never be dismissed).
A configuration-based workaround consists in adding the following rule
in the frontend, to explicitly reject requests featuring an empty
content-length header that would not have been folded into an existing
one:
http-request deny if { hdr_len(content-length) 0 }
The real fix consists in adjusting the parser so that it always expects a
value at the beginning of the header or after a comma. It will now reject
requests and responses having empty values anywhere in the C-L header.
This needs to be backported to all supported versions. Note that the
modification was made to functions h1_parse_cont_len_header() and
http_parse_cont_len_header(). Prior to 2.8 the latter was in
h2_parse_cont_len_header(). One day the two should be fused, but the
former is also used by Lua.
The HTTP messaging reg-tests were completed to test these cases.
Thanks to Ben Kallus of Dartmouth College and Narf Industries for
reporting this! (this is in GH #2237)
Splicing is not available on all platforms. Thus a dedicated script is
used to check that we properly skip the payload for bodyless responses
when splicing is used. This way, we are still able to test the feature
with the original script on all platforms.
This patch fixes an issue on the CI introduced by commit ef2b15998
("BUG/MINOR: htx/mux-h1: Properly handle bodyless responses when splicing is
used"). It must be backported with the above commit.
There is a mechanism in the H1 and H2 multiplexers to skip the payload
when a response is returned to the client that must not contain any
payload (response to a HEAD request or a 204/304 response). However, this
does not work when splicing is used. The H2 multiplexer does not support
splicing, so there is no issue. But with the mux-h1, when data are sent
using kernel splicing, the mux on the server side is not aware that the
client side should skip the payload. And once the data are put in a pipe,
there is no way to stop the sending.
It is a defect of the current design. It will be easier to deal with this
case when mux-to-mux forwarding is implemented. But for now, to fix the
issue, we should add an HTX flag on the start-line to pass the info from
the client side to the server side and be able to disable splicing when
necessary.
The associated reg-test was improved to be sure it does not fail when
splicing is configured.
This patch should be backported as far as 2.4.
Adds a new sample fetch method to get the curve name used in the key
agreement, to enable better observability. In OpenSSL v3, the function
`SSL_get_negotiated_group` returns the NID of the curve, and from the
NID we get the curve name by passing the NID to OBJ_nid2sn(). This was
not available in v1.1.1. SSL_get_curve_name(), which returns the curve
name directly, was merged into the OpenSSL master branch last week but
will be available only in its next release.
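A hedged usage sketch, assuming the fetch is exposed as ssl_fc_curve
(the name is not stated above; paths are illustrative):

    frontend fe
        mode http
        bind :443 ssl crt /etc/haproxy/site.pem
        http-response set-header X-TLS-Group %[ssl_fc_curve]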
When an s-maxage cache-control directive is present, it overrides any
other max-age or expires value (see section 5.2.2.9 of RFC7234). So if
we have a max-age=0 alongside a strictly positive s-maxage, the response
should be cached (e.g. "Cache-Control: max-age=0, s-maxage=60" must be
cached for 60 seconds).
This bug was raised in GitHub issue #2203.
The fix can be backported to all stable branches.
In GH #2187 it was mentioned that the ifnone-forwardfor regtest
did not cover the case where "forwardfor if-none" is explicitly set in
the frontend but the forwardfor option is not used in the backend.
The expected behavior in this case is that the frontend takes precedence
because the backend did not specify the option.
Adding this missing case to prevent regressions in the future, as
sketched below.
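A minimal sketch of the newly covered case:

    frontend fe
        mode http
        bind :8080
        option forwardfor if-none
        default_backend be

    backend be
        mode http
        # no "option forwardfor" here: the frontend setting must win
        server s1 192.0.2.10:8080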
Depending on the timing, from time to time, the log messages can be
mixed. A client can start and be fully handled by HAProxy (including its
log message) before the log message of the previous client was emitted
or received. To fix the issue, a barrier was added to be sure to evaluate
the "expect" rule on logs before starting the next client.
Because of the commit 5cb8d7b8f ("BUG/MINOR: peers: Improve detection of
config errors in peers sections"), 2 scripts now report errors during
startup because some variables are not set and the remote peer server is
thus malformed. To perform a peer synchronization between 2 haproxy
instances in these scripts, the startup must be delayed to properly
resolve addresses.
In addition, we must wait (2s) to be sure the connection between peers
is properly established. These scripts are now flagged as slow.
As seen in commits 33a4461fa ("BUG/MINOR: stats: Fix Lua's `get_stats`
function") and a46b142e8 ("BUG/MINOR: Missing stat_field_names (since
f21d17bb)") it seems frequent to forget to update stats_fields[] when
adding a new ST_F_xxx entry. This breaks Lua's get_stats() and shows
a "(null)" in the header of "show stat", but that one is not detectable
to the naked eye anymore.
Let's add a reminder above the enum declaration about this, and a small
reg test checking for the absence of "(null)". It was verified to fail
before the last patch above.
When a message is compressed, a "Vary" header is added with the
"accept-encoding" value. However, a new header was always added,
regardless of whether a Vary header was already present. In addition, if
there was already a Vary header, there was no check on its values to be
sure the "accept-encoding" value was not already there. So it was
possible to have it twice.
To improve this part, we now test the Vary header values and
"accept-encoding" is only added if it was not found. In addition, the
"accept-encoding" value is appended to the last Vary header found, if
any. Otherwise, a new header is added.
It was previously reduced from 10s to 1s but it remains too high,
especially for the CI. It may be drastically reduced to 100ms. The idea
is to just be sure we will wait for the response before evaluating the
TCP rules.
Because of the previous fix, the log/last_rule.vtc script is failing. The
inspect-delay is no longer shortened when the end of the message is
reached. Thus the WAIT_END acl is truly respected. 10s is too high and
hits the vtest timeout, making the script fail.
Fix the openssl build with older openssl versions by disabling the new
ssl_c_r_dn fetch.
This also disables the ssl_client_samples.vtc file for OpenSSL versions
older than 1.1.1.
This patch addresses #1514, and adds the ability to fetch the DN of the
root CA that was in the chain when the client certificate was verified
during the SSL handshake.
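A hedged example of the new fetch (cert/CA paths are illustrative):

    frontend fe
        mode http
        bind :443 ssl crt site.pem ca-file ca.pem verify required
        http-request set-header X-Client-Root-DN %[ssl_c_r_dn]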
Since mailers/healthcheckmail.vtc already requires Lua to emulate the
SMTP server for the test, force it to use the Lua mailers example script
to send email-alerts so we no longer rely on the legacy tcpcheck
mailers implementation.
This is done by simply loading examples/mailers.lua (as a symlink) from
the haproxy config file, as shown below.
As this feature has a dependency on resolvers being configured,
this test acts as good documentation as well.
This change also includes a spelling fix for the filename.
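The relevant bit of configuration (the mailers section is illustrative):

    global
        lua-load examples/mailers.lua

    mailers mymailers
        mailer smtp1 192.0.2.1:587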
This commit makes sure that if there is no "alpn", "npn" nor "no-alpn"
setting on a "bind" line which corresponds to an HTTPS or QUIC frontend,
we automatically turn on "h2,http/1.1" as an ALPN default for an HTTP
listener, and "h3" for a QUIC listener. This simplifies the configuration
for end users since they won't have to explicitly configure the ALPN
string to enable H2, considering that at the time of writing, HTTP/1.1
represents less than 7% of the traffic on large infrastructures. The
doc and regtests were updated. For more info, refer to the following
thread:
https://www.mail-archive.com/haproxy@formilux.org/msg43410.html
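For reference, a sketch of the explicit equivalents these defaults
replace (paths are illustrative):

    frontend fe
        bind :443 ssl crt site.pem alpn h2,http/1.1
        bind quic4@:443 ssl crt site.pem alpn h3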
Since the commit f2b02cfd9 ("MAJOR: http-ana: Review error handling
during HTTP payload forwarding"), during the payload forwarding, when we
are analyzing one side, we no longer test the opposite side. It means
when the HTTP request forwarding analyzer is called, we no longer check
the response side and vice versa.
Unfortunately, since then, the HTTP tunneling is broken after a protocol
upgrade. Only the response is switched to TUNNEL mode; the request
remains in the DONE state. As a consequence, data received from the
server are forwarded to the client but not data received from the
client.
To fix the bug, when both sides are in the DONE state, both are switched
at the same time to TUNNEL mode if it was requested. It is performed the
same way in http_end_request() and http_end_response().
This patch should fix the issue #2125. It is 2.8-specific. No backport
needed.
This patch proposes to enumerate servers using HAProxy's internal list.
Also, remove the flag SRV_F_NON_PURGEABLE which made a server
non-purgeable each time Lua used it.
Removing reg-tests/cli_delete_server_lua.vtc since this test is no
longer relevant (we don't set the SRV_F_NON_PURGEABLE flag anymore)
and we already have a more generic test:
reg-tests/server/cli_delete_server.vtc
Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
In order to increase usability, the "show ssl ocsp-response" command
also takes a frontend certificate path as parameter. In such a case, it
behaves the same way as "show ssl cert foo.pem.ocsp".
Instead of having a dedicated httpclient instance and its own code
decorrelated from the actual auto-update one, the "update ssl
ocsp-response" command will now use the update task in order to perform
updates.
Since the cli command allows updating responses that were never
included in the auto-update tree, a new flag was added to the
certificate_ocsp structure so that the said entry can be inserted into
the tree "by hand" and it won't be reinserted back into the tree after
the update process is performed. The 'update_once' flag "stole" a bit
from the 'fail_count' counter since it is the one least likely to reach
UINT_MAX among the ocsp counters of the certificate_ocsp structure.
This new logic required that every certificate_ocsp entry contain all
the ocsp-related information at all times, since entries that are not
supposed to be configured automatically can still be updated through the
cli. The logic of ssl_sock_load_ocsp() was changed accordingly.
This patch adds support for the PS algorithms when verifying JWT
signatures (rsa-pss). They were not managed during the first
implementation and previously raised an "Unmanaged algorithm" error.
The tests use the same RSA signature as the plain RSA tests (RS256 ...)
and the implementation simply adds a call to
EVP_PKEY_CTX_set_rsa_padding() in the function that manages RSA and
ECDSA signatures.
The signatures in the reg-test were built thanks to the PyJWT python
library once again.
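A hedged configuration sketch exercising a PS256 token (the key path is
illustrative):

    frontend fe
        mode http
        bind :8080
        http-request set-var(txn.bearer) http_auth_bearer
        http-request deny unless { var(txn.bearer),jwt_verify("PS256","/etc/haproxy/jwt_pub.pem") }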
When adding a new certificate through the CLI and appending it to a
crt-list with the 'ocsp-update' option set, the new certificate would
not be added to the OCSP response update list.
The only thing that was missing was the copy of the ocsp_update mode
from the ssl_bind_conf into the ckch_store's object.
An extra wakeup of the update task also needed to happen in case the
newly inserted entry needs to be updated before the next wakeup of the
task.
This patch does not need to be backported.
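For reference, a crt-list line with the option involved (the filter is
illustrative):

    site.pem [ocsp-update on] example.com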
Add tests for the "show ssl ocsp-updates" cli command as well as the new
'base64' parameter that can be passed to the "show ssl ocsp-response"
command.
The options were placed after the filters, which does not work well and
now raises a warning. It did not break the regtest because the crt-lists
were not actually used by clients.
Added new testcases for all 4 branches of smp_fetch_hdr_ip() (see the
usage sketch after this list):
- a plain IPv4 address
- an IPv4 address with a port number
- a plain IPv6 address
- an IPv6 address wrapped in [] brackets
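A hedged usage sketch of the fetch behind these branches (the header
name is illustrative):

    frontend fe
        mode http
        bind :8080
        # hdr_ip() must parse all four forms listed above
        http-request set-src req.hdr_ip(x-client-addr)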
304 responses contain "Content-Length" or "Transfer-Encoding"
headers. The rxresp action expects to get a payload in this case, even
though 304 responses must not have any payload. A workaround was added
to remove these headers from the 304 responses. However, a better
solution is to only get the response headers from clients, using the
rxresphdrs action.
If a payload is erroneously added to these responses, the scripts will
fail the same way. So it is safe.
Since commit cc9bf2e5f "MEDIUM: cache: Change caching conditions"
responses that do not have an explicit expiration time are not cached
anymore. But this mechanism wrongly used the TX_CACHE_IGNORE flag
instead of the TX_CACHEABLE one. The effect this had is that a cacheable
response that corresponded to a request having a "Cache-Control:
no-cache" for instance would not be cached.
Contrary to what was said in the other commit message, the "checkcache"
option should not be impacted by the use of the TX_CACHEABLE flag
instead of the TX_CACHE_IGNORE one. The response is indeed considered as
not cacheable if it has no expiration time, regardless of the presence
of a cookie in the response.
This should fix GitHub issue #2048.
This patch can be backported up to branch 2.4.
In these scripts, several clients perform a request and exit because an
SSL error is expected and thus no response is sent. However, we must
explicitly wait for the connection close, via an "expect_close"
statement. Otherwise, depending on the timing, HAProxy may detect the
client abort before any connection attempt on the server side and no SSL
error is reported, making the script fail.
If "-dF" command line argument is passed to haproxy to execute the script,
by sepcifying HAPROXY_ARGS variable, http_splicing.vtc is now skipped.
Without this patch, the script fails when the fast-forward is disabled.
A feature command was added to detect if infinite forward is disabled to be
able to skip the script. Unfortunately, it is no supported to evaluate such
expression. Thus remove it. For now, reg-tests must not be executed with
"-dF" option.
The -dF option can now be used to disable data fast-forward. It does the
same as the global option "tune.fast-forward off". Some reg-tests may
rely on this optimization. To detect the feature and skip such a script,
the following vtest command must be used:
feature cmd "$HAPROXY_PROGRAM -cc '!(globa.tune & GTUNE_NO_FAST_FWD)'"
Add a new test to prevent any regression for the if-none parameter in
the "forwardfor" proxy option.
This will ensure upcoming refactors don't break reference behavior.
The wrong return value was checked, resulting in dead code and
potential bugs.
It should fix GitHub issue #2005.
This patch should be backported up to 2.5.
When the JWT token signature uses an ECDSA algorithm (ES256 for
instance), the signature is a direct concatenation of the R and S
parameters instead of OpenSSL's DER format (see section 3.4 of RFC7518).
The code that verified the signatures wrongly assumed that they came in
OpenSSL's format, so verification did not actually work.
We now have the extra step of converting the signature into a complete
ECDSA_SIG that can be fed into OpenSSL's digest verification functions.
The ECDSA signatures in the regtest had to be recalculated, and it was
done via the PyJWT python library so that we don't end up checking
signatures that we built ourselves anymore.
This patch should fix GitHub issue #2001.
It should be backported up to branch 2.5.
The error handling in the HTTP payload forwarding is far from ideal
because both sides (request and response) are tested each time. It is
especially ugly on the request side. To report a server error instead of
a client error, there are some workarounds to delay the error handling.
The reason is that the request analyzer is evaluated before the response
one. In addition, errors are tested before the data analysis. It means
it is possible to truncate data because errors may be handled too early.
So the error handling at this stage was totally reviewed. Aborts are now
handled after the data analysis. We also no longer finish the response
on a request error or the opposite. As a side effect, the HTTP_MSG_ERROR
state is now useless. As another side effect, the termination flags are
now set by the HTTP analysers and not by process_stream().
When we wait for the request body, we are still in the request analysis.
So a SF_FINST_R flag must be reported in logs. Even if some data are
already received, at this stage, nothing is sent to the server.
This patch could be backported to all stable versions.
Tests a subpart of the ocsp auto update feature. It will mainly focus on
the 'auto' mode since the 'on' one relies strongly on timers way too
long to be used in a regtest context.
testdir can be a very long directory since it depends on the source
directory path; this can lead to failures during tests when the UNIX
socket path exceeds the maximum allowed length of 97 characters as
defined in str2sa_range().
16:48:14 [ALERT] *** h1 debug| (10082) : config : parsing [/tmp/haregtests-2022-12-17_16-47-39.4RNzIN/vtc.4850.5d0d728a/h1/cfg:19] : 'bind' : socket path 'unix@/local/p4clients/pkgbuild-bB20r/workspace/build/HAProxy/HAProxy-2.7.x.68.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/HAProxy-2.7.x/src/reg-tests/lua/srv3' too long (max 97)
Also, it is not advisable to create UNIX sockets in the actual source
directory; instead, use a dedicated temporary directory created for test
purposes.
This should be backported to 2.6.
The test still needs more start conditions, like ulimit checks,
and less strict value checks.
To be backported where it was activated (as far as 2.5).
When an error occurs during the request parsing, the H1 multiplexer is
responsible for sending a response to the client and releasing the H1
stream and the H1 connection. In HTTP mode, it is not an issue because
at this stage the H1 connection is in the embryonic state. Thus it can
be released immediately.
However, it is a problem if the connection was first upgraded from a TCP
connection. In this case, a stream-connector is attached. The H1 stream
is not orphan. Thus it must not be released at this stage. It must be
detached first. Otherwise a BUG_ON() is triggered in h1s_destroy().
So now, the H1S is destroyed on early errors but only if the H1C is in
the embryonic state.
This patch may be related to #1966. It must be backported to 2.7.
Check if USE_OBSOLETE_LINKER=1 was used, so this test can run when
ASAN is not built, since ASAN requires this option.
For this test to work, the ulimit -n value must be big enough.
Could be backported at least to 2.5.
Change the expected maxconn from 10000 to 11000 in
automatic_maxconn.vtc.
To be backported only if the test fails there; the value might be the
right one in previous versions.
When I added commit 16b282f4b ("MINOR: stick-table: show the shard
number in each entry's "show table" output"), I don't know how but
I managed to mess up my reg tests since everything worked fine,
most likely by running it on a binary built in the wrong branch.
Several reg tests include some table outputs that were upset by the
new "shard=" field. Updating them revealed at the same time that
entries learned over peers are not properly initialized,
which will be fixed in a future series of fixes.
This commit requires the previous fix "BUG/MINOR: peers: always
initialize the stksess shard value" so as not to trip on entries
learned from peers.
VTEST does not properly handle 304-Not-Modified responses. If a
Transfer-Encoding header is present (and probably a Content-Length
header too), it waits for a body. Waiting for a fix, the
Transfer-Encoding header of cached responses in 2 VTEST scripts is
removed.
Note it is only an issue now because of a fix in the H1 multiplexer:
* 226082d13a "BUG/MINOR: mux-h1: Do not send a last null chunk on body-less answers"
This patch must be backported with the above commit.
The ca-ignore-err and crt-ignore-err directives are now able to use the
openssl X509_V_ERR constant names instead of the numerical values.
This allows a configuration to survive an OpenSSL upgrade, because the
numerical IDs can change between versions. For example,
X509_V_ERR_INVALID_CA was 24 in OpenSSL 1 and is 79 in OpenSSL 3.
The list of errors must be updated when a new major OpenSSL version is
released.
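A hedged example of the symbolic form (cert/CA paths are illustrative):

    frontend fe
        # the name survives OpenSSL upgrades, unlike "24" / "79"
        bind :443 ssl crt site.pem ca-file ca.pem verify optional ca-ignore-err X509_V_ERR_INVALID_CA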
Test the httpclient when the lua action times out. The lua timeout is
reached before the httpclient has finished. This tests that the
httpclient is correctly cleaned up when destroying the hlua context.
Must be backported as far as 2.5.
This patch adds support for the following authentication methods:
- AUTH_REQ_GSS (7)
- AUTH_REQ_SSPI (9)
- AUTH_REQ_SASL (10)
Note that since AUTH_REQ_SASL allows multiple authentication mechanisms
such as SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, the auth payload length may
vary since the method is sent in plaintext. In order to allow this, the
regex now matches any payload length.
This partially fixes Github issue #1508, since user authentication is
still broken, but should restore pre-2.2 behavior.
This should be backported up to 2.2.
Signed-off-by: Fatih Acar <facar@scaleway.com>
At present, option smtpchk closes the TCP connection abruptly on
completion of service checking, even if successful. This can result in a
very high volume of errors in backend SMTP server logs.
This patch ensures an SMTP QUIT is sent and a positive 2xx response is
received from the SMTP server prior to disconnection.
This patch depends on the following one:
* MINOR: smtpchk: Update expect rule to fully match replies to EHLO commands
This patch should fix the issue #1812. It may be backported as far as
2.2 with the commit above. On the 2.2, the proxy_parse_smtpchk_opt()
function is located in src/check.c.
[cf: I updated reg-tests script accordingly]
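For context, a minimal smtpchk setup this change affects (addresses are
illustrative):

    backend smtp
        mode tcp
        option smtpchk EHLO mydomain.example
        server s1 192.0.2.25:25 check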
The s1 server is acting like an SMTP server. But it sends two CRLF at
the end of each line, while only one CRLF must be returned. It only
works because both CRLF are received at the same time.