A memory allocation failure happening in chash_init_server_tree while
trying to allocate a server's lb_nodes item used in consistent hashing
would have resulted in a crash. This function is only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
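For illustration only, a minimal sketch of the kind of check these fixes
add (hypothetical names, not the actual HAProxy code): the allocation
result is tested and a parsing error is reported instead of letting the
caller dereference a NULL pointer later. The same pattern applies to the
similar fixes below.

    #include <stdlib.h>
    #include <stdio.h>

    /* Hypothetical config-parsing helper: on allocation failure, report
     * an error so that the caller aborts the configuration load instead
     * of crashing later on a NULL pointer.
     */
    static int parse_example_keyword(char **err)
    {
        int *nodes = calloc(16, sizeof(*nodes));

        if (!nodes) {
            *err = "out of memory while parsing 'example' keyword";
            return -1;               /* fatal parsing error */
        }
        /* ... fill and register 'nodes' ... */
        free(nodes);
        return 0;
    }

    int main(void)
    {
        char *err = NULL;

        if (parse_example_keyword(&err) < 0)
            fprintf(stderr, "config error: %s\n", err);
        return 0;
    }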
A memory allocation failure happening in make_arg_list when trying to
allocate the argument list would have resulted in a crash. This function
is only called during configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in http_parse_redirect_rule when
trying to allocate a redirect_rule structure would have resulted in a
crash. This function is only called during configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in mworker_env_to_proc_list when
trying to allocate a mworker_proc would have resulted in a crash. This
function is only called during init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in comp_append_type or
comp_append_algo called while parsing compression options would have
resulted in a crash. These functions are only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in tcp_parse_request_rule while
processing the "capture" keyword and trying to allocate a cap_hdr
structure would have resulted in a crash. This function is only called
during configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in tcp_parse_tcp_req or
tcp_parse_tcp_rep when trying to allocate an act_rule structure would
have resulted in a crash. These functions are only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in proxy_defproxy_cpy while
copying the default compression options would have resulted in a crash.
This function is called for every new proxy found while parsing the
configuration.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening during proxy_parse_declare while
processing the "capture" keyword and allocating a cap_hdr structure
would have resulted in a crash. This function is only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening in parse_http_req_capture while
processing a "len" keyword and allocating a cap_hdr structure would
have resulted in a crash. This function is only called during
configuration parsing.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening during ssl_init_single_engine
would have resulted in a crash. This function is only called during
init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
A memory allocation failure happening during peers_register_table would
have resulted in a crash. This function is only called during init.
It was raised in GitHub issue #1233.
It could be backported to all stable branches.
Two calloc calls were not checked in the srv_parse_source function.
Considering that this function can be called at runtime through dynamic
server creation via the CLI, this could lead to an unfortunate crash.
It was raised in GitHub issue #1233.
It could be backported to all stable branches even though the runtime
crash could only happen on branches where dynamic server creation is
possible.
L7 retries because of status codes are now performed in the response
analyser. This way, it is no longer required to handle L7 retries in
si_cs_recv(). It is also no longer necessary to set CF_READ_ERROR on the
response channel to be able to trigger such retries.
In addition, if no L7 retry is performed when the response is received,
the L7 buffer is immediately released. Previously, in this case, it was
only released with the stream.
When a network error occurs on the server side, if it is not the first
request (in case of keep-alive), nothing is returned to the client and its
connection is closed so that it may retry. However, L7 retries on refused
early data (0rtt-rejected) must be performed first.
In addition, such L7 retries must also be performed before incrementing the
failed responses counter.
This patch must be backported as far as 2.0.
This bug was introduced by the previous commit (9f5382e45 Revert "MEDIUM:
http-ana: Deal with L7 retries in HTTP analysers") because the revert was
botched.
On L7 retry, if the maximum number of connection retries is reached, an
error must be returned to the client. Depending on the situation, it may be
a 502-Bad-Gateway (empty-response or junk-response), a 504-Gateway-Timeout
(response-timeout) or a 425-Too-Early (0rtt-rejected). But contrary to what
the comment says, the do_l7_retry() function always returns success.
Note that this is not a problem for L7 retries on the response status code
because the stream-interface already makes sure the maximum number of
connection retries has not been reached before triggering an L7 retry.
This patch must be backported to 2.4 because the commit must also be
backported to 2.4.
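Purely as an illustration (hypothetical names, not the HAProxy code), the
point is that the caller has to check the retry budget itself, since the
retry helper reports success unconditionally:

    #include <stdio.h>

    /* Hypothetical stand-in for the retry helper: it always reports
     * success, mirroring the behaviour described above.
     */
    static int do_retry(void)
    {
        return 0;
    }

    /* The caller therefore checks the remaining retry budget itself,
     * otherwise no 502/504/425 error would ever reach the client.
     */
    static const char *on_server_failure(int *retries_left)
    {
        if (*retries_left <= 0)
            return "report 502/504/425 to the client";
        (*retries_left)--;
        do_retry();
        return "retrying the request";
    }

    int main(void)
    {
        int retries = 1;

        printf("%s\n", on_server_failure(&retries)); /* retries */
        printf("%s\n", on_server_failure(&retries)); /* reports the error */
        return 0;
    }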
This reverts commit 5b82cc5b5c350c7cfa194cc6bc16ad9308784541. The purpose of
this commit was to fully handle L7 retries in the HTTP analysers and stop
dealing with the L7 buffer in si_cs_send()/si_cs_recv(). It is of course
cleaner this way. But there is a huge drawback: the L7 buffer is reserved
from the time the request analysis is finished until the moment the response
is received. For a small request, the analysis is finished before the
connection to the server is established. Thus the L7 buffer would be kept
for queued sessions even though it is not mandatory.
So, for now, the commit is reverted to go back to the less expensive
solution. This patch must be backported to 2.4.
The main functions are renamed to h1_process_demux() and h1_process_mux() to
be consistent with the H2 mux. For the same reason, the
h1_process_headers/data/trailers() functions, responsible for parsing
incoming data, are renamed with an "h1_handle_" prefix.
Input buffers never contain output data. So, use the b_slow_realign_ofs()
function instead of b_slow_realign(). It is a slightly simpler function. And
in the H1 mux, it allows realigning the buffer by setting the input buffer
head, to permit zero-copies.
The b_slow_realign() function may be used to realign a buffer with a given
amount of output data, possibly 0. In that case, the head is set to 0. This
function is not designed to be used with input-only buffers, like those used
in the muxes. That is the purpose of the b_slow_realign_ofs() function. It
does almost the same thing, realigning a buffer, but it does so by setting
the buffer head to a specific offset.
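A minimal sketch of the idea, using a simplified ring buffer rather than the
real struct buffer API (the types and names below are illustrative only):

    #include <string.h>
    #include <stdio.h>

    /* Simplified ring buffer: 'head' is the offset of the first byte,
     * 'data' the number of stored bytes, possibly wrapping at 'size'.
     */
    struct ring {
        char   area[16];
        size_t size;
        size_t head;
        size_t data;
    };

    /* Realign the buffer so that its content starts at offset 'new_head':
     * this is the spirit of b_slow_realign_ofs(), where the caller chooses
     * the head instead of it being derived from an amount of output data.
     */
    static void ring_realign_ofs(struct ring *b, char *swap, size_t new_head)
    {
        size_t i;

        for (i = 0; i < b->data; i++)
            swap[i] = b->area[(b->head + i) % b->size];
        memcpy(b->area + new_head, swap, b->data); /* assumes it fits */
        b->head = new_head;
    }

    int main(void)
    {
        struct ring b = { .size = 16, .head = 14, .data = 4 };
        char swap[16];

        memcpy(b.area + 14, "ab", 2);   /* wrapped content: "abcd" */
        memcpy(b.area, "cd", 2);
        ring_realign_ofs(&b, swap, 0);
        printf("%.4s\n", b.area);       /* prints "abcd" */
        return 0;
    }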
Add the h1_parse_full_contig_chunks() function to parse full contiguous
chunks. This function handles neither incomplete chunks nor wrapping buffers.
It is designed to efficiently parse a buffer containing several small chunks.
Of course, there is no zero copy here because it is not possible. This
function is a bit tricky and any change may have an impact. It could probably
be optimized further, but it is good enough for now and not too complex.
The main function (h1_parse_msg_chunks) always tries to use this function
when the HTTP parser is waiting for a chunk size. In this case, there is no
zero-copy, so there is no reason to call the generic version to parse the
chunk. However, if some unparsed data remain after this step, the generic
function is called. This way, wrapping data and incomplete chunks may be
parsed.
Quick tests show it is now slightly faster in all cases than the legacy
mode.
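A rough sketch of the fast path's idea (simplified, not the real
h1_parse_full_contig_chunks() code): only chunks that are fully present in a
contiguous area are consumed, and whatever is left is handed back to the
generic parser.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Parse as many complete chunks as possible from a contiguous buffer.
     * Returns the number of bytes consumed; stops at the first incomplete
     * chunk (or at the final 0-sized chunk) and leaves the rest to a
     * generic parser.
     */
    static size_t parse_contig_chunks(const char *buf, size_t len)
    {
        size_t ofs = 0;

        while (ofs < len) {
            size_t pos = ofs, chunk = 0;

            /* chunk size in hex, terminated by CRLF */
            while (pos < len && isxdigit((unsigned char)buf[pos])) {
                int c = buf[pos++];
                chunk = (chunk << 4) | (isdigit(c) ? c - '0' : (c | 0x20) - 'a' + 10);
            }
            if (pos + 1 >= len || buf[pos] != '\r' || buf[pos + 1] != '\n')
                break;                 /* incomplete chunk-size line */
            pos += 2;
            if (!chunk || pos + chunk + 2 > len)
                break;                 /* last chunk or payload not fully there */
            pos += chunk;
            if (buf[pos] != '\r' || buf[pos + 1] != '\n')
                break;                 /* missing trailing CRLF */
            ofs = pos + 2;             /* chunk fully parsed, commit it */
        }
        return ofs;
    }

    int main(void)
    {
        const char msg[] = "3\r\nfoo\r\n4\r\nbarb\r\n0\r\n\r\n";
        size_t parsed = parse_contig_chunks(msg, strlen(msg));

        printf("parsed %zu of %zu bytes\n", parsed, strlen(msg));
        return 0;
    }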
A generic function is now used to parse only the current chunk
(h1_parse_chunk), while the main one (h1_parse_msg_chunks) loops on the
buffer and relies on the first one. This change is mandatory to be able to
use an optimized function to parse contiguous small chunks.
Chunked data are now parsed in a dedicated function. This way, it will be
possible to have two functions to parse chunked messages: the current one
for messages with large chunks and another one to parse messages with small
chunks.
The parsing of small chunks is really sensitive because it may be used as a
DoS vector. So we must be careful to have an optimized function to parse
such messages.
Because the function parsing H1 data is now able to handle wrapping input
buffers, there is no reason to loop anymore in the muxes to be sure to parse
wrapping data.
Since the beginning, wrapping input data have been parsed and copied in 2
steps, to avoid dealing with the wrapping in the H1 parsing functions. But
there is no reason to do so. It requires 2 calls to the parsing functions,
and it also means that, most of the time, when the input buffer does not
wrap, there is an extra call for nothing.
Thus, now, the data parsing functions try to copy as much data as possible,
handling wrapping buffers if necessary.
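The idea boils down to copying in at most two steps inside the parsing
function itself, as in this simplified sketch (illustrative only, not the
H1 mux code):

    #include <string.h>
    #include <stdio.h>

    /* Copy 'count' bytes starting at offset 'head' from a circular buffer
     * of 'size' bytes into 'dst', handling the wrap in a single call with
     * at most two memcpy() operations.
     */
    static void copy_wrapping(char *dst, const char *area, size_t size,
                              size_t head, size_t count)
    {
        size_t first = size - head < count ? size - head : count;

        memcpy(dst, area + head, first);
        if (count > first)
            memcpy(dst + first, area, count - first);
    }

    int main(void)
    {
        char area[8];
        char out[9] = { 0 };

        memcpy(area + 6, "he", 2);          /* data wraps: "hello!" */
        memcpy(area, "llo!", 4);
        copy_wrapping(out, area, sizeof(area), 6, 6);
        printf("%s\n", out);                /* prints "hello!" */
        return 0;
    }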
The h1 parsing functions (h1_parse_msg_*) return the number of bytes parsed,
or 0 if nothing is parsed because an error occurred or some data are missing.
But they never return negative values. Thus, instead of a signed integer,
these functions now return a size_t value.
The H1 and FCGI muxes are updated accordingly. Note that h1_parse_msg_data()
has been slightly adapted because the parsing of chunked messages still needs
to handle negative values when a parsing error is reported by
h1_parse_chunk_size() or h1_skip_chunk_crlf().
The output of "show map/acl" now contains the 'entry_cnt' value that
represents the count of all the entries for each map/acl, not just the
active ones, which means that it also includes entries currently being
added.
This flag is set on the response when its payload is compressed by HAProxy.
It must be preserved because it may be used when the log message is emitted.
When the compression filter was refactored to support the HTX, an
optimization was added to not perform extra processing on the trailers: the
HTTP_MSGF_COMPRESSING flag is removed when the last data block is compressed.
This is not required, it is just an optimization, and unfortunately it is a
bug. This optimization must be removed to preserve the flag.
This patch must be backported as far as 2.0. Only the HTX is affected.
For each filter, pre and post callback functions must only be called once.
To do so, when one of them has finished, the corresponding analyser bit must
be removed from the pre_analyzers or post_analyzers bit field. It is only an
issue with pre-analyser callback functions if the corresponding analyser
yields. It may happen with a Lua action for instance. In this case, the
filter's pre-analyser callback function is unexpectedly called several times.
This patch should fix issue #1263. It must be backported to all stable
versions.
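Schematically, the fix amounts to clearing the analyser bit only once the
callback has actually completed, so a yielding analyser does not trigger the
callback again (simplified sketch, hypothetical names and bit values):

    #include <stdio.h>

    #define AN_REQ_EXAMPLE  0x0001        /* hypothetical analyser bit */

    struct filter_state {
        unsigned int pre_analyzers;       /* callbacks still to run */
    };

    /* Run the pre-analyser callback at most once for this analyser, even
     * if the analyser itself yields and is re-entered later.
     */
    static void maybe_run_pre_cb(struct filter_state *f, unsigned int an_bit)
    {
        if (!(f->pre_analyzers & an_bit))
            return;                       /* already done, do not call again */
        printf("pre-analyser callback called\n");
        f->pre_analyzers &= ~an_bit;      /* mark it as done */
    }

    int main(void)
    {
        struct filter_state f = { .pre_analyzers = AN_REQ_EXAMPLE };

        maybe_run_pre_cb(&f, AN_REQ_EXAMPLE);  /* first pass: runs */
        maybe_run_pre_cb(&f, AN_REQ_EXAMPLE);  /* after a yield: skipped */
        return 0;
    }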
A deadlock is possible with the 'set maxconn server' command if there are
pending connections ready to be dequeued. This is caused by the locking of
the server spinlock in both cli_parse_set_maxconn_server and
process_srv_queue.
Fix this by reducing the scope of the server lock to
server_parse_maxconn_change_request. If connections are dequeued, the lock
is taken a second time. This can be seen as suboptimal, but as it only
happens during 'set maxconn server' it can be considered tolerable.
This issue was reported on the mailing list, for the 1.8.x branch.
It must be backported up to 1.8.
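A simplified illustration of the reduced lock scope (a pthread mutex stands
in for HAProxy's server lock; names are hypothetical):

    #include <pthread.h>
    #include <stdio.h>

    struct server {
        pthread_mutex_t lock;   /* stands in for the server spinlock */
        int maxconn;
    };

    /* The maxconn update is done under the server lock, then the lock is
     * released before dequeuing, which takes it again on its own. Holding
     * the lock across both steps is what caused the deadlock described
     * above.
     */
    static void set_maxconn_and_dequeue(struct server *srv, int value)
    {
        pthread_mutex_lock(&srv->lock);
        srv->maxconn = value;
        pthread_mutex_unlock(&srv->lock);

        pthread_mutex_lock(&srv->lock);
        /* ... dequeue pending connections up to the new maxconn ... */
        pthread_mutex_unlock(&srv->lock);
    }

    int main(void)
    {
        struct server srv = { .lock = PTHREAD_MUTEX_INITIALIZER };

        set_maxconn_and_dequeue(&srv, 100);
        printf("maxconn set to %d\n", srv.maxconn);
        return 0;
    }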
The first item inserted into an ebtree will be inserted directly below
the root, which is a simple struct eb_root which only holds two branch
pointers (left and right).
If we try to find a duplicate of this first leaf through an ebmb_next_dup
call, our leaf_p pointer will point to the eb_root instead of a complete
eb_node, so we cannot look at the bit part of our leaf_p since that would
cast our eb_root into an eb_node and perform an out-of-bounds access when
reading "eb_root_to_node(eb_untag(t,EB_LEFT)))->bit".
This bug was found by address sanitizer running on a CRL hot update VTC
test.
Note that the bug has been there since the import of the eb_next_dup()
and eb_prev_dup() functions in 1.5-dev19 by commit 2b5702030 ("MINOR:
ebtree: add new eb_next_dup/eb_prev_dup() functions to visit duplicates").
It can be backported to all stable branches.
The following functions used in CA/CRL file hot update were not defined in
OpenSSL 1.0.2, so they need to be defined in openssl-compat (a possible
shape for these definitions is sketched after the list):
- X509_CRL_get_signature_nid
- X509_CRL_get0_lastUpdate
- X509_CRL_get0_nextUpdate
- X509_REVOKED_get0_serialNumber
- X509_REVOKED_get0_revocationDate
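For illustration, compat wrappers of this kind are usually provided as
macros reaching into the pre-1.1.0 structures. The guards and field names
below are an assumption and should be checked against the real
openssl-compat.h and the OpenSSL 1.0.2 headers:

    #include <openssl/x509.h>
    #include <openssl/objects.h>

    #if (OPENSSL_VERSION_NUMBER < 0x10100000L)
    /* Pre-1.1.0 OpenSSL exposes the structure fields directly; the
     * accessors below only appeared in 1.1.0. Field names as assumed
     * from the 1.0.2 headers.
     */
    #define X509_CRL_get_signature_nid(crl)     OBJ_obj2nid((crl)->sig_alg->algorithm)
    #define X509_CRL_get0_lastUpdate(crl)       ((crl)->crl->lastUpdate)
    #define X509_CRL_get0_nextUpdate(crl)       ((crl)->crl->nextUpdate)
    #define X509_REVOKED_get0_serialNumber(r)   ((r)->serialNumber)
    #define X509_REVOKED_get0_revocationDate(r) ((r)->revocationDate)
    #endif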
The CA/CRL hot update patches did not compile on some targets of the CI
(mainly gcc + ssl). This patch should fix almost all of them. It adds
missing variable initializations and return value checks to the
BIO_reset calls in show_crl_detail.
This vtc tests the "new ssl crl-file" command, which allows creating a new
empty CRL file that can then be set through a "set+commit ssl crl-file"
command pair. It also tests the "del ssl crl-file" command, which allows
deleting an unused CRL file.
This patch adds the "show ssl crl-file [<crlfile>]" CLI command. This
command can be used to display the list of all the known CRL files when
no specific file name is specified, or to display the details of a
specific CRL file when a name is given.
The details displayed for a specific CRL file are inspired by the ones
shown by an "openssl crl -text -noout -in <filename>" call.
The "abort" command aborts an ongoing transaction started by a "set ssl
crl-file" command. Since the updated CRL file data is not pushed into
the CA file tree until a "commit ssl crl-file" call is performed, the
abort command simply deleted the new cafile_entry (storing the new CRL
file data) stored in the transaction.
This patch adds the "new ssl crl-file" and "del ssl crl-file" CLI
commands.
The "new" command can be used to create a new empty CRL file that can be
filled in thanks to a "set ssl crl-file" command. It can then be used in
a new crt-list line.
The newly created CRL file is added to the CA file tree so any call to
"show ssl crl-file" will display its name.
The "del" command allows to delete an unused CRL file. A CRL file will
be considered unused if its list of ckch instances is empty. It does not
work on an uncommitted CRL file transaction created via a "set ssl
crl-file" command call.
This patch adds the "set ssl crl-file" and "commit ssl crl-file"
commands, following the same logic as the certificate and CA file update
equivalents.
When trying to update a Certificate Revocation List (CRL) file via a
"set" command, we start by looking for the entry in the CA file tree and
then building a new cafile_entry out of the payload, without adding it
to the tree yet. It will only be added when a "commit" command is
called.
During a "commit" command, we insert the newly built cafile_entry in the
CA file tree while keeping the previous entry. We then iterate over all
the instances that used the CRL file and rebuild a new one and its
dedicated SSL context for every one of them.
When all the contexts are properly created, the old instances get
replaced by the new ones and the old CRL file is removed from the tree.
In order for crl-file hot update to be possible, we need to add an extra
link between the CA file tree entries that hold Certificate Revocation
Lists and the instances that use them. This way we will be able to
rebuild each instance upon CRL modification.
This mechanism is similar to what was done for the actual CA file update
since both the CA files and the CRL files are stored in the same CA file
tree.
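A schematic sketch of this extra link (hypothetical simplified types, not
the real cafile/ckch structures): each CRL entry keeps a list of the
instances built from it, so that a hot update can walk the list and rebuild
each of them.

    #include <stdio.h>

    /* Hypothetical simplified types: a CRL entry in the CA file tree keeps
     * the list of instances that were built from it.
     */
    struct instance {
        char name[32];
        struct instance *next;       /* next instance using the same CRL */
    };

    struct crl_entry {
        char path[64];
        struct instance *instances;  /* head of the linked instances */
    };

    static void link_instance(struct crl_entry *crl, struct instance *inst)
    {
        inst->next = crl->instances;
        crl->instances = inst;
    }

    static void rebuild_on_update(struct crl_entry *crl)
    {
        for (struct instance *i = crl->instances; i; i = i->next)
            printf("rebuilding SSL context for %s\n", i->name);
    }

    int main(void)
    {
        struct crl_entry crl = { .path = "site.crl" };
        struct instance a = { .name = "frontend-a" };
        struct instance b = { .name = "frontend-b" };

        link_instance(&crl, &a);
        link_instance(&crl, &b);
        rebuild_on_update(&crl);     /* rebuilds b then a */
        return 0;
    }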