This was a missing piece for a while now, even though it was planned. This
patch adds listen stats.
Nothing in particular, but we make use of the previously added status
helper. The `promex_st_metrics` diff also looks scary, but I had to
realign all lines.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Logical followup of the CLI commands addition, so that the server state
file stays compatible with the changes made at runtime; use the
previously added helper to load server attributes.
Also allocate a specific chunk to avoid mixing with other called
functions using it.
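For illustration, a minimal sketch of the dump/reload cycle this needs to
stay compatible with (paths and socket address are examples, not from the
patch):

    global
        server-state-file /var/lib/haproxy/server-state

    defaults
        load-server-state-from-file global

    $ echo "show servers state" | socat stdio /var/run/haproxy.sock \
          > /var/lib/haproxy/server-state

Attributes changed over the CLI are expected to survive a reload once they
are dumped this way.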
Signed-off-by: William Dauchy <wdauchy@gmail.com>
LibreSSL 3.3.0 is stricter on the sni field and fails if it contains
illegal characters such as the underscore. Replace the sni value with a
proper name so the test passes in the CI environment.
Use the proxy protocol frame if proxy protocol is activated on the
server line. Do not add these connections to the private list anymore.
If some requests are made with the same proxy fields, they can reuse
the idle connection.
The reg-test proxy_protocol_send_unique_id must be adapted as it
relied on the side-effect behavior that every request from a same
connection reused a private server connection. Now, a new connection is
created as expected if the proxy protocol fields differ.
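As a minimal sketch, proxy protocol is enabled on the server line like this
(names and addresses are examples):

    backend be_pp
        server srv1 192.0.2.10:80 send-proxy-v2

With this change, an idle connection opened this way can be reused by any
request carrying the same proxy protocol fields, instead of staying in the
private list.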
Historically we've been counting lots of client-triggered events in stick
tables to help detect misbehaving ones, but we've been missing the same on
the server side, and there have been repeated requests for being able to count
the server errors per URL in order to precisely monitor the quality of
service or even to avoid routing requests to certain dead services, which
is also called "circuit breaking" nowadays.
This commit introduces http_fail_cnt and http_fail_rate, which work like
http_err_cnt and http_err_rate in that they respectively count events and
their frequency, but they only consider server-side issues such as network
errors, unparsable and truncated responses, and 5xx status codes other
than 501 and 505 (since these ones are usually triggered by the client).
Note that retryable errors are purposely not accounted for, so that only
what the client really sees is considered.
With this it becomes very simple to put some protective measures in place
to perform a redirect or return an excuse page when the error rate goes
beyond a certain threshold for a given URL, and give more chances to the
server to recover from this condition. Typically it could look like this
to bypass a URL causing more than 10 failures per minute:
stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m)
http-request track-sc0 base # track host+path, ignore query string
http-request return status 503 content-type text/html \
lf-file excuse.html if { sc0_http_fail_rate gt 10 }
A more advanced mechanism using gpt0 could even implement high/low rates
to disable/enable the service.
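A hedged sketch of such a high/low mechanism (thresholds, the excuse page
and the sc-set-gpt0 usage are illustrative, not part of this patch):

    stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m),gpt0
    http-request track-sc0 base
    # latch the entry once the failure rate crosses the high threshold
    http-request sc-set-gpt0(0) 1 if { sc0_http_fail_rate gt 20 }
    # release it once the rate has fallen back below the low threshold
    http-request sc-set-gpt0(0) 0 if { sc0_get_gpt0 eq 1 } { sc0_http_fail_rate lt 2 }
    http-request return status 503 content-type text/html lf-file excuse.html if { sc0_get_gpt0 eq 1 }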
Reg-test converteers_ref_cnt_never_dec.vtc was updated to test it.
I saw some people falling back to the unix socket to collect some data they
could not find in the prometheus exporter. One of them is base info from
stick tables (used/size).
I do not plan to extend it more for now; keys are quite a mess to
handle.
This should resolve github issue #1008.
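For reference, the exporter is typically exposed like this (port and path
are examples), after which the new stick table series should show up,
presumably named along the lines of haproxy_sticktable_used and
haproxy_sticktable_size:

    frontend prometheus
        bind *:8405
        http-request use-service prometheus-exporter if { path /metrics }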
Signed-off-by: William Dauchy <wdauchy@gmail.com>
As noticed by Christopher, I messed up the version fix in commit
cb4ed02ef ("REGTESTS: mark http-check-send.vtc as 2.4-only"), as while
looking up the commit introducing the change I accidentally reverted it.
Let's reinsert the contents of the file prior to that fix, except the
version, of course.
Commit 9eea56009 ("REGTESTS: add tests for the xxh3 converter") introduced
the xxh3 converter to the tests, thus making them incompatible with 2.3 and
older; let's upgrade the version requirement.
We can currently change the check-port using the CLI command `set server
check-port`, but there is a consistency issue when using server state.
This patch aims to fix this problem, but is also good preparation
work to get rid of the checkport flag, so we are able to know when
check-port was set by the config.
I am fully aware this is not moving github #953 forward; I
however think this might be acceptable while waiting for a proper
solution, and it resolves the consistency problem faced with port settings.
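For illustration, the runtime change in question looks like this
(backend/server names and socket path are examples):

    $ echo "set server be_foo/srv1 check-port 8080" | socat stdio /var/run/haproxy.sock

and the point of this patch is that the new port is what gets written to,
and read back from, the server state file.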
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Flush the SSL session cache when updating a certificate which is used on a
server line. This prevents connections from being established with a cached
session which was using the previous SSL_CTX.
This patch also replaces the ha_barrier with a thread_isolate() since there
are more operations to do. The reg-test was also updated to remove the
'no-ssl-reuse' keyword which is now unneeded.
The "abort ssl cert" command is buggy and removes the current ckch store,
and instances, leading to SNI removal. It must only removes the new one.
This patch also adds a check in set_ssl_cert.vtc and
set_ssl_server_cert.vtc.
Must be backported as far as 2.2.
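As a reminder of the transaction this command is part of, a certificate
update over the CLI looks roughly like this (paths are examples):

    $ echo -e "set ssl cert /etc/haproxy/site.pem <<\n$(cat new_site.pem)\n" | \
          socat stdio /var/run/haproxy.sock
    $ echo "commit ssl cert /etc/haproxy/site.pem" | socat stdio /var/run/haproxy.sock
    # or, to drop the pending update instead:
    $ echo "abort ssl cert /etc/haproxy/site.pem" | socat stdio /var/run/haproxy.sock

With the fix, the abort only discards the new, uncommitted ckch store
instead of the current one.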
In a previous commit this test was disabled because I thought the
feature was broken, but in fact it is the test which is broken.
Indeed the connection between the server and the client was not
renegotiated and was using the SSL cache or a ticket. To work
correctly, these 2 features must be disabled or a new connection must be
established after the ticket timeout, which is too long for a reg-test.
Also a "nbthread 1" was added as it was easier to reproduce the problem
with it.
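A hedged sketch of what disabling these two features on a server line can
look like (the keywords exist in the SSL docs; the exact test setup may
differ):

    server s1 ${s1_addr}:${s1_port} ssl verify none no-ssl-reuse no-tls-tickets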
Now, some conformance tests are performed when an HTTP connection is
upgraded to websocket. This makes the http-check-send.vtc script fail for
the backend <be6_ws>. Because the purpose of this health-check is to pass a
"Connection: Upgrade" header on an http-check send rule, we may use a dummy
protocol instead.
Test the conformance with RFC 6455 (websocket) in haproxy. In particular, if
a missing key is detected on an h1 message, haproxy must close the
connection.
Note that the case h2 client/h1 srv is not tested as I did not find a
way to calculate the key on the server side.
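For reference, the computation the server side would need is the RFC 6455
accept key: base64(SHA1(key + GUID)). A minimal standalone C sketch using
OpenSSL (an illustration, not HAProxy or VTC code):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/sha.h>

    static const char ws_guid[] = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    /* <out> must hold at least 29 bytes (28 base64 chars plus the NUL) */
    static void ws_accept_key(const char *key, char *out)
    {
        unsigned char buf[256];
        unsigned char digest[SHA_DIGEST_LENGTH];

        snprintf((char *)buf, sizeof(buf), "%s%s", key, ws_guid);
        SHA1(buf, strlen((char *)buf), digest);
        EVP_EncodeBlock((unsigned char *)out, digest, SHA_DIGEST_LENGTH);
    }

    int main(void)
    {
        char accept[29];

        /* key/accept pair taken from RFC 6455, section 1.3 */
        ws_accept_key("dGhlIHNhbXBsZSBub25jZQ==", accept);
        printf("%s\n", accept); /* s3pPLMBiTxaQ9kYGzzhZRbK+xOo= */
        return 0;
    }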
The EOM block will be removed in 2.4, thus this script would be broken on
this version. Now it is skipped for this version. It remains valid for 2.3
and 2.2.
When trying to update a backend certificate, we should find a
server-side ckch instance thanks to which we can rebuild a new ssl
context and a new ckch instance that replace the previous ones in the
server structure. This way any new ssl session will be built out of the
new ssl context and the newly updated certificate.
This resolves a subpart of GitHub issue #427 (the certificate part).
It is only a problem on the response path because the request payload length
is always known. But when a filter is registered to analyze the response
payload, the filtering may hang if the server closes just after the headers.
The root cause of the bug comes from an attempt to allow the filters to not
immediately forward the headers if necessary. A filter may choose to hold
the headers by not forwarding any bytes of the payload. For a message with
no payload but a known payload length, there is always an EOM block to
forward. Thus holding the EOM block for bodyless messages is a good way to
also hold the headers. However, for messages with an unknown payload length,
there is no EOM block finishing the message, but only a SHUTR flag on the
channel to mark the end of the stream. If there is no payload when it
happens, there is no payload at all to forward. In the filters API, this is
wrongly detected as a condition to not forward the headers.
Because it is not the most used feature and not the obvious one, this patch
introduces another way to hold the message headers at the beginning of the
forwarding. A filter flag is added to explicitly say the headers should be
held. A filter may choose to set the STRM_FLT_FL_HOLD_HTTP_HDRS flag and not
forward anything to hold the headers. This flag is removed at each call, thus
it must always be explicitly set by filters. This flag is only evaluated if
no byte has ever been forwarded because the headers are forwarded with the
first byte of the payload.
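A rough sketch of how a filter might use it (the callback shape follows the
filters API, but the exact flag location and the helper below are
assumptions for illustration only):

    /* hypothetical http_headers callback of a filter holding the headers;
     * the flag is reset at each call, so it must be set again as long as
     * the filter has not decided to release them */
    static int my_flt_http_headers(struct stream *s, struct filter *filter,
                                   struct http_msg *msg)
    {
        if (!my_filter_is_ready(filter)) /* hypothetical readiness test */
            strm_flt(s)->flags |= STRM_FLT_FL_HOLD_HTTP_HDRS;
        return 1;
    }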
reg-tests/filters/random-forwarding.vtc reg-test is updated to also test
responses with unknown payload length (with and without payload).
This patch must be backported as far as 2.0.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know (see parse_encoding_value function), we
will not store it. This will prevent unexpected errors caused by
cache collisions that could happen in accept_encoding_hash_cmp.
There does not seem to be a reason to close connections after each
request; reflect that in the tests by doing all requests within the same
client.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Add a base test to start with something, even though this is not
necessarily complete.
Also make use of the recent REQUIRE_SERVICE option to exclude it from the
test list if it was not built with prometheus included.
note: I thought it was possible to send multiple requests within the
same client, but I'm getting "HTTP header is incomplete" from the second
request
Signed-off-by: William Dauchy <wdauchy@gmail.com>
- check functions are never called with a NULL args list, it is always
an array, so the first check can be removed
- the expression parser guarantees that we can't have anything else,
because we mentioned the json converter takes a mandatory string argument.
Thus the test on `ARGT_STR` can be removed as well
- also add a line break between the enum and the function declaration
In order to validate it, add a simple json test covering very simple
cases, which can be improved in the future:
- default json converter without args
- json converter failing on error (utf8)
- json converter with error being removed (utf8s)
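For illustration, typical usage of the converter in a configuration could
look like this (header and variable names are examples):

    http-response set-header x-ua-json %[req.hdr(user-agent),json(utf8s)]
    # without argument, the default input code applies:
    http-request set-var(txn.path_json) path,json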
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Add base support for url encoding following RFC 3986, supporting the
`query` type only.
- add test checking url_enc/url_dec/url_enc
- update documentation
- leave the door open for future changes
this should resolve github issue #941
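For illustration, a round trip with the new converter could look like this
in a configuration (variable names are examples):

    http-request set-var(txn.enc) str("a b&c"),url_enc
    http-request set-var(txn.dec) var(txn.enc),url_dec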
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This patch fixes GitHub Issue #988. Commit ce9e7b25217c46db1ac636b2c885a05bf91ae57e
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that the
cached response is not compatible.
This patch also extends the reg-test to test the hash collision that was
mentioned in #988.
Vary handling is 2.4, no backport needed.
The accept-encoding part of the secondary key (vary) was only built out
of the first occurrence of the header. So if a client had two
accept-encoding headers, gzip and br for instance, the key would have
been built out of the gzip string. So another client that only managed
gzip would have been sent the cached resource, even if it was a br resource.
The http_find_header function is now called directly by the normalizers
so that they can manage multiple headers if needed.
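For example, both of these headers must now contribute to the key:

    GET /resource HTTP/1.1
    Host: example.com
    Accept-Encoding: gzip
    Accept-Encoding: br

so a gzip+br client and a gzip-only client no longer collide on the same
cache entry.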
A request that has more than 16 encodings will be considered an
illegitimate request and its response will not be stored.
This fixes GitHub issue #987.
It does not need any backport.
Add a new check for a pseudo-websocket handshake, specifying the
Connection header, to verify that it is properly handled by the http-check
send directive. Also check that default http/1.1 checks have the header
Connection: close.
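A hedged sketch of what such a check can look like in a configuration
(backend name and URI are examples):

    backend be_ws
        option httpchk
        http-check send meth GET uri /ws ver HTTP/1.1 hdr host www hdr connection upgrade hdr upgrade websocket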