- 1.5.14
- run daemon as _haproxy user
- update default config
- update init script
- add systemd unit
- build with libssl support
- build with zlib support
Released version 1.5.14 with the following main changes :
- BUILD/MINOR: tools: rename popcount to my_popcountl
- BUG/MAJOR: buffers: make the buffer_slow_realign() function respect output data
The function buffer_slow_realign() was initially designed for requests
only and did not consider pending outgoing data. This causes a problem
when called on responses where data remain in the buffer, which may
happen with pipelined requests when the client is slow to read data.
The user-visible effect is that if less than <maxrewrite> bytes are
present in the buffer from a previous response and these bytes cross
the <maxrewrite> boundary close to the end of the buffer, then a new
response will cause a realign and will destroy these pending data and
move the pointer to what's believed to contain pending output data.
Thus the client receives the crap that lies in the buffer instead of
the original output bytes.
This new implementation now properly realigns everything including the
outgoing data which are moved to the end of the buffer while the input
data are moved to the beginning.
This implementation still uses a buffer-to-buffer copy which is not
optimal in terms of performance and which should be replaced by a
buffer switch later.
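The idea can be sketched as follows (a simplified model with illustrative
field and helper names, not HAProxy's actual buffer code):

    #include <string.h>

    /* Simplified circular buffer: <size> bytes of storage, <o> pending
     * output bytes located just before <p>, and <i> input bytes starting
     * at <p>; either area may wrap past the end of the storage.
     */
    struct cbuf {
            char *data;   /* storage area */
            size_t size;  /* storage size */
            char *p;      /* first input byte */
            size_t o;     /* pending output byte count */
            size_t i;     /* input byte count */
    };

    /* copy <len> bytes from circular position <src> into linear <dst> */
    static void cbuf_copy_out(const struct cbuf *b, const char *src,
                              char *dst, size_t len)
    {
            size_t part = b->data + b->size - src;  /* room before wrap */

            if (len <= part) {
                    memcpy(dst, src, len);
            } else {
                    memcpy(dst, src, part);
                    memcpy(dst + part, b->data, len - part);
            }
    }

    /* realign via a temporary area <swap> of at least <size> bytes:
     * output data land at the end of the storage and input data at the
     * beginning, so neither can be clobbered by a later header rewrite.
     */
    static void slow_realign(struct cbuf *b, char *swap)
    {
            /* start of output data: <o> bytes before <p>, with wrap */
            char *out = b->data +
                        (b->p - b->data + b->size - b->o) % b->size;

            cbuf_copy_out(b, b->p, swap, b->i);        /* input first */
            cbuf_copy_out(b, out, swap + b->i, b->o);  /* then output */

            memcpy(b->data, swap, b->i);               /* input at start */
            memcpy(b->data + b->size - b->o, swap + b->i, b->o);
            b->p = b->data;
    }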
Prior to this patch, the following script would return different hashes
on each round when run from a 100 Mbps-connected machine :
    i=0
    while usleep 100000; do
            echo round $((i++))
            set -- $(nc6 0 8001 < 1kreq5k.txt | grep -v '^[0-9A-Z]' | md5sum)
            if [ "$1" != "3861afbb6566cd48740ce01edc426020" ]; then echo $1; break; fi
    done
The file contains 1000 copies of the following request, with
"Connection: close" on the last one :
GET /?s=5k&R=1 HTTP/1.1
The config is very simple :
    global
            tune.bufsize 16384
            tune.maxrewrite 8192

    defaults
            mode http
            timeout client 10s
            timeout server 5s
            timeout connect 3s

    listen px
            bind :8001
            option http-server-close
            server s1 127.0.0.1:8000
And httpterm-1.7.2 is used as the server on port 8000.
After the fix, 1 million requests were sent and all returned the same
contents.
Many thanks to Charlie Smurthwaite of atechmedia.com for his precious
help on this issue, which would not have been diagnosed without his
very detailed traces and numerous tests.
The patch must be backported to 1.5 which is where the bug was introduced.
(cherry picked from commit 27187ab56a2f1104818c2f21c5139c1edd8b838f)
This is in order to avoid conflicting with NetBSD's popcount* functions,
present since the 6.x release; the final 'l' indicates that the argument
is a long, as NetBSD does.
This patch could be backported to 1.5 to fix the build issue there as well.
(cherry picked from commit e6c39416682863d1eaee3acd45ccaadf96f76b12)
Released version 1.5.13 with the following main changes :
- BUG/MINOR: check: fix tcpcheck error message
- CLEANUP: deinit: remove codes for cleaning p->block_rules
- DOC: Update doc about weight, act and bck fields in the statistics
- MINOR: ssl: add a destructor to free allocated SSL resources
- BUG/MEDIUM: ssl: fix tune.ssl.default-dh-param value being overwritten
- MEDIUM: ssl: replace standard DH groups with custom ones
- BUG/MINOR: debug: display (null) in place of "meth"
- BUG/MINOR: cfgparse: fix typo in 'option httplog' error message
- BUG/MEDIUM: cfgparse: segfault when userlist is misused
- BUG/MEDIUM: stats: properly initialize the scope before dumping stats
- BUG/MEDIUM: http: don't forward client shutdown without NOLINGER except for tunnels
- CLEANUP: checks: fix double usage of cur / current_step in tcp-checks
- BUG/MEDIUM: checks: do not dereference head of a tcp-check at the end
- CLEANUP: checks: simplify the loop processing of tcp-checks
- BUG/MAJOR: checks: always check for end of list before proceeding
- BUG/MEDIUM: checks: do not dereference a list as a tcpcheck struct
- BUG/MEDIUM: peers: apply a random reconnection timeout
- BUG/MINOR: ssl: fix smp_fetch_ssl_fc_session_id
- MEDIUM: init: don't stop proxies in parent process when exiting
- MINOR: peers: store the pointer to the signal handler
- MEDIUM: peers: unregister peers that were never started
- MEDIUM: config: propagate the table's process list to the peers sections
- MEDIUM: init: stop any peers section not bound to the correct process
- MEDIUM: config: validate that peers sections are bound to exactly one process
- MAJOR: peers: allow peers section to be used with nbproc > 1
- DOC: relax the peers restriction to single-process
- CLEANUP: config: fix misleading information in error message.
- MINOR: config: report the number of processes using a peers section in the error case
- BUG/MEDIUM: config: properly compute the default number of processes for a proxy
Chad Lavoie reported an interesting regression caused by the latest
updates to automatically detect the processes a peers section runs on.
It turns out that if a config has neither an nbproc nor a bind-process
statement then, depending on the frontend->backend chaining, it is possible
to evade all bind_proc propagations, resulting in assigning ~0UL (all
processes, i.e. 32 or 64) without ever restricting it to nbproc. It
was not visible in backends until they started to reference peers sections,
which saw themselves with 64 processes at once.
This patch addresses this by replacing all those ~0UL with nbits(nbproc).
That way all "bind-process" settings *default* to the number of processes
defined in nbproc instead of 32 or 64.
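For illustration, such a default mask can be computed along these lines (a
sketch of the idea; the helper in the source may differ in detail):

    /* return a mask with the <bits> lowest bits set, e.g. nbits(4) = 0xF.
     * Writing it as (2UL << (bits - 1)) - 1 keeps the shift valid even
     * when <bits> equals the number of bits in a long.
     */
    static inline unsigned long nbits(int bits)
    {
            if (--bits < 0)
                    return 0;
            return (2UL << bits) - 1;
    }

With nbproc = 4 this yields 0xF, so a proxy with no explicit "bind-process"
statement now defaults to processes 1-4 instead of all 32 or 64.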
This fix could possibly be backported into 1.5, though there is no indication
that this bug could have any effect there.
(cherry picked from commit e428b08ee72879072897d1bcfa38589b7d1a89a5)
It can be helpful to know how many different processes try to use the
same peers section when trying to find the culprits.
(cherry picked from commit 64c5722e051768188754bd4c25a440ecd9103b38)
The parameter name is "bind-process", not "bind_proc" which is the
internal variable name.
(cherry picked from commit 0334ffc65d791120a102d72df2f5503537fe0077)
If a peers section is bound to no process, it's silently discarded. If it's
bound to multiple processes, an error is emitted and the process will not
start.
(cherry picked from commit 1e273018663ac1e4b8f0a0b23fd238c5a7e2dc28)
This will prevent the peers section from remaining in listen state on
the incorrect process. The peers_fe pointer is set to NULL, which will
tell the peers task to commit suicide if it was already scheduled.
(cherry picked from commit f83d3fe00a7d8b90ead5924faca1e4b6df362aec)
The peers initialization sequence is a bit complex, they're attached
to stick-tables and initialized very early in the boot process. When
we fork, if some must not start, it's too late to find them. Instead,
simply add a guard in their respective tasks to stop them once they
want to start.
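The guard amounts to a few lines at the top of the peers task; the sketch
below shows the idea with stand-in types and names, not the literal patch:

    struct task;
    struct peers {
            void *peers_fe;   /* NULL when the section was stopped */
            /* ... */
    };

    /* called for each scheduled run of the peers synchronization task */
    static struct task *process_peer_sync(struct task *t, struct peers *ps)
    {
            if (!ps->peers_fe) {
                    /* this section must not run on this process:
                     * unschedule and free the task instead of working
                     * (task_delete()/task_free() in HAProxy terms).
                     */
                    return NULL;
            }
            /* ... normal synchronization work ... */
            return t;
    }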
(cherry picked from commit 46dc1ca76114bff925460aee9439fc7dbef1185f)
Dmitry Sivachenko reported the following build warning using Clang
which is a real bug :
src/ssl_sock.c:4104:44: warning: address of 'smp->data.str.len' will always
evaluate to 'true' [-Wpointer-bool-conversion]
if (!smp->data.str.str || !&smp->data.str.len)
The impact is very low however: it will return an empty session_id
instead of no session id when none is found.
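The fix presumably just drops the stray address-of operator so that the
length itself is tested :

    if (!smp->data.str.str || !smp->data.str.len)
            return 0;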
The fix should be backported to 1.5.
(cherry picked from commit 745d4127582a8c66e2e8ce35f746a78e867960af)
Since all rules listed in p->block_rules have been moved to the beginning of
the http-request rules in check_config_validity(), there is no need to clean
p->block_rules in deinit().
Signed-off-by: Godbach <nylzhaowei@gmail.com>
(cherry picked from commit 28b48ccbc879a552f988e6e1db22941e3362b4db)
The array which contains the names of the types was missing the METH entry.
[wt: should be backported to 1.5 as well]
(cherry picked from commit 4c2479e1c455e2cc46c02cfc28ea2a185f9a7747)
It is likely that powerful adversaries have been pre-computing the
standardized DH groups, because their wide use has made them valuable
targets. While users are advised to generate their own DH parameters,
replace the ones we ship with values that were randomly generated for
this product only.
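Users who want to follow that advice can generate their own parameters with
the openssl CLI and append them to the certificate file loaded by haproxy,
for example (file names are illustrative):

    openssl dhparam -out dhparams.pem 2048
    cat dhparams.pem >> /etc/haproxy/site.pem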
[wt: replaced dh1024_p, dh2048_p, and dh4096_p with locally-generated
ones as recommended by Rémi]
(cherry picked from commit d3a341a96fb6107d2b8e3d7a9c0afa2ff43bb0b6)
If the 'userlist' keyword parsing returns an error and no userlist was
previously created, the parsing of 'user' and 'group' leads to a NULL
dereference.
The userlist pointer is now tested to prevent this issue.
(cherry picked from commit 4ac9f546120d42be8147e3d90588e7b9738af0cc)
The error message was displaying the wrong argument when 'option
httplog' took a wrong argument.
(cherry picked from commit 77063bc0c6ceb4257c4e2c08411811ecc48be1aa)
Hervé Commowick reported that the logic used to avoid complaining about
ssl-default-dh-param not being set when static DH params are present
in the certificate file was clearly wrong when more than one sni_ctx
is used.
This patch stores whether static DH params are being used for each
SSL_CTX individually, and does not overwrite the value of
tune.ssl.default-dh-param.
(cherry picked from commit 4f902b88323927c9d25d391a809e3678ac31df41)
Using valgrind or another memory leak tracking tool is easier
when the memory internally allocated by OpenSSL is cleanly released
at shutdown.
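Such a destructor typically boils down to the classic pre-1.1.0 OpenSSL
cleanup calls (a sketch; the actual patch may release more objects):

    #include <openssl/crypto.h>
    #include <openssl/err.h>
    #include <openssl/evp.h>

    __attribute__((destructor))
    static void __ssl_sock_deinit(void)
    {
            ERR_remove_state(0);            /* per-thread error queue */
            ERR_free_strings();             /* error message strings */
            EVP_cleanup();                  /* cipher and digest tables */
            CRYPTO_cleanup_all_ex_data();   /* ex_data bookkeeping */
    }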
(cherry picked from commit d3a23c3eb8c0950d26204568a133207099923494)
Commit 9ff95bb ("BUG/MEDIUM: peers: correctly configure the client timeout")
uncovered an old bug in the peers : upon disconnect, we reconnect immediately.
This sometimes results in both ends doing the same thing in parallel, causing
a loop of connect/accept/close/close that can last several seconds. The risk
of the problem occurring increases with latency, and is emphasized by the
fact that idle connections are now frequently recycled (after 5s of idle).
In order to avoid this we must apply a random delay before reconnecting.
Fortunately the mechanism already supports a reconnect delay, so here we
compute the random timeout when killing a session. The delay is 50ms plus
a random value between 0 and 2 seconds. Ideally an exponential back-off
would be preferred, but it's preferable to keep the fix simple.
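In code terms the computation is a one-liner of this shape (a sketch;
tick_add() and MS_TO_TICKS() are HAProxy's internal clock helpers):

    /* 50ms base plus 0..2s of jitter, set when the session is killed */
    ps->reconnect = tick_add(now_ms, MS_TO_TICKS(50 + random() % 2000));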
This bug was reported by Marco Corte.
This fix must be backported to 1.5 since the fix above was backported into
1.5.12.
(cherry picked from commit b4e34da692d8a7f6837ad16b3389f5830dbc11d2)
The method used to skip to the next rule in the list is wrong: it assumes
that the list element starts at the same offset as the rule. It happens
to be true on most architectures since the list is the first element for
now but it's definitely wrong. Now the code doesn't crash anymore when
the struct list is moved anywhere else in the struct tcpcheck_rule.
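The difference is easy to illustrate (a sketch in plain C; HAProxy uses its
own list macros for this):

    #include <stddef.h>

    struct list { struct list *n, *p; };

    struct tcpcheck_rule {
            struct list list;   /* happens to be the first member today */
            int action;
            /* ... */
    };

    static struct tcpcheck_rule *next_rule(struct tcpcheck_rule *cur)
    {
            /* WRONG: (struct tcpcheck_rule *)cur->list.n only works
             * while 'list' stays the first member.
             * RIGHT: subtract the member offset, wherever 'list' is:
             */
            return (struct tcpcheck_rule *)((char *)cur->list.n -
                            offsetof(struct tcpcheck_rule, list));
    }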
This fix must be backported to 1.5.
(cherry picked from commit 5581c27b579cbfc53afb0ca04cdeebe7e2200131)
[wt: changes from 1.6 : no tcp-check comments, check becomes s->proxy]
This is the most important fix of this series. There's a risk of endless
loop and crashes caused by the fact that we go past the head of the list
when skipping to next rule, without checking if it's still a valid element.
Most of the time, the ->action field is checked, which points to the proxy's
check_req pointer (generally NULL), meaning the element is confused with a
TCPCHK_ACT_SEND action.
The situation was accidentally made worse with the addition of tcp-check
comment since it also skips list elements. However, since the action that
makes it go forward is TCPCHK_ACT_COMMENT (3), there's little chance to
see this as a valid pointer, except on 64-bit machines where it can match
the end of a check_req string pointer.
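The guard consists in comparing the next element against the list head
before dereferencing it, in the spirit of the earlier sketch (with <head>
pointing to the proxy's rule list):

    cur = next_rule(cur);
    if (&cur->list == head)
            break;   /* back at the head: end of rules, not a rule */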
This fix heavily depends on previous cleanup and both must be backported
to 1.5 where the bug is present.
(cherry picked from commit f2c87353a7f8160930b5f342bb6d6ad0991ee3d1)
[wt: this patch differs significantly from 1.6 since we don't have comments]
There is some unobvious redundancy between the various ways we can leave
the loop. Some of them can be factored out. So now we leave the loop when
we can't go further, whether it's caused by reaching the end of the rules
or by a blocking I/O.
(cherry picked from commit 263013d031d754c9f96de0d0cb5afcc011af6441)
[wt: this patch is required for the next fix]
When the end of the list is reached, the current step's action is checked
to know if we must poll or not. Unfortunately, the main reason for getting
there is that we walked past the end of the list and current_step points to
the head. We cannot dereference ->action since it does not belong to this
structure, and it can definitely crash if the address is not mapped.
This bug is unlikely to cause a crash since the action appears just after
the list, and corresponds to the "char *check_req" pointer in the proxy
struct, and it seems that we can't go there with current_step being null.
At worst it can cause the check to register for recv events.
This fix needs to be backported to 1.5 since the code is incorrect there
as well.
(cherry picked from commit 53c5a049e1f4dbf67412472e23690dc6b3c8d0f8)
This cleanup is a preliminary requirement to the upcoming fixes for
the bugs caused by tcp-check's improper use of lists. It will have
to be backported to 1.5 though it will not easily apply.
There are two variables pointing to the current rule within the loop,
and either one or the other is used depending on the code blocks,
making it much harder to apply checks to fix the list walking bug.
So first get rid of "cur" and only focus on current_step.
(cherry picked from commit ce8c42a37a44a1e0cb94e81abb7cc2baf3d0ef80)
[wt: 1.5 doesn't have comments so this patch differs significantly
from 1.6, but it's needed for the next batch of fixes]
There's an issue related to shutting down POST transfers or closing the
connection after the end of the upload : the shutdown is forwarded to the
server regardless of the abortonclose option. The problem it causes is that
during a scan, brute force or whatever, it becomes possible that all source
ports are exhausted with all sockets in TIME_WAIT state.
There are multiple issues at once in fact :
- no action is done for the close, it automatically happens at the lower
layers thanks to channel_auto_close(), so we cannot act on NOLINGER ;
- we *do* want to continue to send a clean shutdown in tunnel mode because
some protocols transported over HTTP may need this, regardless of option
abortonclose, thus we can't set the option unconditionally ;
- for all other modes, we do want to close the dirty way because we're
certain whether we've sent everything or not, and we don't want to eat
all source ports.
The solution is a bit complex and applies to DONE/TUNNEL states :
1) disable automatic close for everything not a tunnel and not just
keep-alive / server-close. Force-close is now covered, as is HTTP/1.0
which implicitly works in force-close mode ;
2) when processing option abortonclose, we know we can disable lingering
if the client has closed and the connection is not in tunnel mode.
Since the last case above leads to a situation where the client side reports
an error, we know the connection will not be reused, so leaving the flag on
the stream-interface is safe. A client closing in the middle of the data
transmission already aborts the transaction so this case is not a problem.
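For reference, the "dirty" close discussed above corresponds to the standard
abortive-close idiom at the socket level, which emits an RST and skips
TIME_WAIT entirely :

    #include <sys/socket.h>
    #include <unistd.h>

    /* abortive close: reset the connection instead of a clean FIN */
    static void close_dirty(int fd)
    {
            struct linger nolinger = { .l_onoff = 1, .l_linger = 0 };

            setsockopt(fd, SOL_SOCKET, SO_LINGER, &nolinger,
                       sizeof(nolinger));
            close(fd);   /* emits an RST: no TIME_WAIT on our side */
    }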
This fix must be backported to 1.5 where the problem was detected.
(cherry picked from commit bbfb6c40854925367ae5f9e8b22c5c9a18dc69d5)
Issuing a "show sess all" prior to a "show stat" on the CLI results in no
proxy being dumped because the scope_len union member was not properly
reinitialized.
This fix must be backported into 1.5.
(cherry picked from commit 6bcb95da5b9cb143088102b460c7bcb37c1b3d81)
Released version 1.5.12 with the following main changes :
- BUG/MINOR: ssl: Display correct filename in error message
- DOC: Fix L4TOUT typo in documentation
- BUG/MEDIUM: Do not consider an agent check as failed on L7 error
- BUG/MINOR: pattern: error message missing
- BUG/MEDIUM: pattern: some entries are not deleted with case insensitive match
- BUG/MEDIUM: buffer: one byte miss in buffer free space check
- BUG/MAJOR: http: don't read past buffer's end in http_replace_value
- BUG/MEDIUM: http: the function "(req|res)-replace-value" doesn't respect the HTTP syntax
- BUG/MEDIUM: peers: correctly configure the client timeout
- BUG/MINOR: compression: consider the expansion factor in init
- BUG/MEDIUM: http: hdr_cnt would not count any header when called without name
- BUG/MEDIUM: listener: don't report an error when resuming unbound listeners
- BUG/MEDIUM: init: don't limit cpu-map to the first 32 processes only
- BUG/MEDIUM: stream-int: always reset si->ops when si->end is nullified
- BUG/MEDIUM: http: remove content-length from chunked messages
- DOC: http: update the comments about the rules for determining transfer-length
- BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to HTTP/1.1
- BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad request
- BUG/MEDIUM: http: remove content-length from responses with bad transfer-encoding
- MEDIUM: http: restrict the HTTP version token to 1 digit as per RFC7230
- MEDIUM: http: add option-ignore-probes to get rid of the floods of 408
- BUG/MINOR: config: clear proxy->table.peers.p for disabled proxies
- MINOR: stick-table: don't attach to peers in stopped state
- MEDIUM: config: initialize stick-tables after peers, not before
- MEDIUM: peers: add the ability to disable a peers section
- DOC: document option http-ignore-probes
- DOC: fix the comments about the meaning of msg->sol in HTTP
- BUG/MEDIUM: http: wait for the exact amount of body bytes in wait_for_request_body
- BUG/MAJOR: http: prevent risk of reading past end with balance url_param
- DOC: update the doc on the proxy protocol
The get_server_ph_post() function assumes that the buffer is contiguous.
While this is true for all the header part, it is not necessarily true
for the end of the data that fit in the reserve. In this case there's a
risk of reading past the end of the buffer by a few hundred bytes, and
possibly of crashing the process if what follows is not mapped.
The fix consists in truncating the analyzed length to the length of the
contiguous block that follows the headers.
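In other words, the analyzed length is clamped to what remains physically
contiguous in the buffer (a sketch with illustrative names):

    /* never analyze past the physical end of the storage area */
    size_t contig = buf->data + buf->size - body_ptr;
    if (len > contig)
            len = contig;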
A config workaround for this bug would be to disable balance url_param.
This fix must be backported to 1.5. It seems 1.4 did have the check.
(cherry picked from commit f69d4ff0063723442ba62af7ca582b1db163bd31)
Due to the fact that we were still considering only msg->sov for the
first byte of data after calling http_parse_chunk_size(), we used to
miscompute the input data size and to count the CRLF and the chunk size
as part of the input data. The effect is that it was possible to release
the processing with 3 or 4 missing bytes, especially if they're typed by
hand during debugging sessions. This can cause the stats page to return
some errors in admin mode, and the url_param balance algorithm to fail
to properly hash a body input.
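As a reminder, in a chunked body such as the following (every line being
terminated by CRLF), only the five bytes "hello" are message data; the
chunk-size lines and the CRLFs are framing and must not be counted against
the expected body size :

    5
    hello
    0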
This fix must be backported to 1.5.
(cherry picked from commit e115b49c399a0fd9cfa07ae41531549144ced9b0)
It has a meaning while parsing a body when using chunked encoding.
This must be backported to 1.5 since it caused a bug there as well.
(cherry picked from commit 30fe8189794114e66337e7ad5e167f386e57e257)
Sometimes it's very hard to disable the use of peers because an empty
section is not valid, so it is necessary to comment out all references
to the section, and not to forget to restore them in the same state
after the operation.
Let's add a "disabled" keyword just like for proxies. A ->state member
in the peers struct is even present for this purpose but was never used
at all.
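With this keyword, turning a section off becomes a one-line change
(illustrative configuration):

    peers mypeers
            disabled
            peer haproxy1 192.168.0.1:1024
            peer haproxy2 192.168.0.2:1024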
Maybe it would make sense to backport this to 1.5 as it's really cumbersome
there.
(cherry picked from commit 77e4bd1497802a69fed73feb61cee53f3fbf75a0)
It's dangerous to initialize stick-tables before peers because they
start a task that cannot be stopped before we know if the peers need
to be disabled and destroyed. Move this after.
(cherry picked from commit 6866f3f33f970989843a1dddb71489d9b1ad3e28)
If a table in a disabled proxy references a peers section, the peers
name is not resolved to a pointer to a table, but since it belongs to
a union, it can later be dereferenced. Right now it seems it cannot
happen, but it definitely will after the pending changes.
It doesn't cost anything to backport this into 1.5, it will make gdb
sessions less head-scratching.
(cherry picked from commit 02df7740fbb5bf55113d9082a237158e26751eea)
Recently some browsers started to implement a "pre-connect" feature
consisting in speculatively connecting to some recently visited web sites
just in case the user would like to visit them. This results in many
connections being established to web sites, which end up in 408 Request
Timeout if the timeout strikes first, or 400 Bad Request when the browser
decides to close them first. These connections pollute the logs and feed the
error counters. There was already "option dontlognull" but it's insufficient
in this case. Instead, this option does the following things :
- prevent any 400/408 message from being sent to the client if nothing
was received over a connection before it was closed ;
- prevent any log from being emitted in this situation ;
- prevent any error counter from being incremented
That way the empty connection is silently ignored. Note that it is better
not to use this unless it is clear that it is needed, because it will hide
real problems. The most common reason for not receiving a request and seeing
a 408 is due to an MTU inconsistency between the client and an intermediary
element such as a VPN, which blocks too large packets. These issues are
generally seen with POST requests as well as GET with large cookies. The logs
are often the only way to detect them.
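Enabling it is a single line in a defaults or frontend section (an
illustrative snippet, using the option name from the doc entry above):

    defaults
            mode http
            option http-ignore-probes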
This patch should be backported to 1.5 since it avoids false alerts and
makes it easier to monitor haproxy's status.
(cherry picked from commit 0f228a037a3565c83309dc9c0e2e546f94c17e8a)
While RFC2616 used to allow an indeterminate number of digits for the
major and minor components of the HTTP version, RFC7230 has reduced
that to a single digit for each.
If a server can't properly parse the version string and falls back to 0.9,
it could then send a head-less response whose payload would be taken for
headers, which could confuse downstream agents.
Since there's no more reason for supporting a version scheme that was
never used, let's upgrade to the updated version of the standard. It is
still possible to enforce support for the old behaviour using options
accept-invalid-http-request and accept-invalid-http-response.
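For example, "GET / HTTP/1.1" remains valid while "GET / HTTP/1.01" is now
rejected, since RFC7230 defines HTTP-version as exactly one digit on each
side of the dot.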
(cherry picked from commit 91852eb4280e5fbe63dcd9ea32c168d7516c6667)
The spec mandates that Content-Length must be removed from messages when
Transfer-Encoding is present, not just when its value is valid.
This must be backported to 1.5 and 1.4.
(cherry picked from commit b4d0c03aee283e286880f93c4a8c053772a430c8)