It has a meaning while parsing a body when using chunked encoding.
This must be backported to 1.5 since it caused a bug there as well.
(cherry picked from commit 30fe8189794114e66337e7ad5e167f386e57e257)
Sometimes it's very hard to disable the use of peers because an empty
section is not valid, so it is necessary to comment out all references
to the section and to remember to restore them to the same state after
the operation.
Let's add a "disabled" keyword just like for proxies. A ->state member
in the peers struct is even present for this purpose but was never used
at all.
Maybe it would make sense to backport this to 1.5 as it's really cumbersome
there.
(cherry picked from commit 77e4bd1497802a69fed73feb61cee53f3fbf75a0)
It's dangerous to initialize stick-tables before peers because they
start a task that cannot be stopped before we know if the peers need
to be disabled and destroyed. Move this after.
(cherry picked from commit 6866f3f33f970989843a1dddb71489d9b1ad3e28)
If a table in a disabled proxy references a peers section, the peers
name is not resolved to a pointer to a table, but since it belongs to
a union, it can later be dereferenced. Right now it seems it cannot
happen, but it definitely will after the pending changes.
It doesn't cost anything to backport this into 1.5, and it will make gdb
sessions less head-scratching.
(cherry picked from commit 02df7740fbb5bf55113d9082a237158e26751eea)
Recently some browsers started to implement a "pre-connect" feature
consisting of speculatively connecting to some recently visited web sites
just in case the user would like to visit them. This results in many
connections being established to web sites, which end up in a 408 Request
Timeout if the timeout strikes first, or a 400 Bad Request when the browser
decides to close them first. These pollute the logs and feed the error
counters. There was already "option dontlognull" but it's insufficient in
this case. Instead, this option does the following things :
- prevent any 400/408 message from being sent to the client if nothing
was received over a connection before it was closed ;
- prevent any log from being emitted in this situation ;
- prevent any error counter from being incremented
That way the empty connection is silently ignored. Note that it is better
not to use this unless it is clear that it is needed, because it will hide
real problems. The most common cause of not receiving a request and seeing
a 408 is an MTU inconsistency between the client and an intermediary
element such as a VPN, which blocks too large packets. These issues are
generally seen with POST requests as well as GET with large cookies. The logs
are often the only way to detect them.
This patch should be backported to 1.5 since it avoids false alerts and
makes it easier to monitor haproxy's status.
(cherry picked from commit 0f228a037a3565c83309dc9c0e2e546f94c17e8a)
While RFC2616 used to allow an indeterminate number of digits for the
major and minor components of the HTTP version, RFC7230 has reduced
that to a single digit for each.
If a server can't properly parse the version string and falls back to 0.9,
it could then send a head-less response whose payload would be taken for
headers, which could confuse downstream agents.
Since there's no more reason for supporting a version scheme that was
never used, let's upgrade to the updated version of the standard. It is
still possible to enforce support for the old behaviour using options
accept-invalid-http-request and accept-invalid-http-response.
(cherry picked from commit 91852eb4280e5fbe63dcd9ea32c168d7516c6667)
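As a rough illustration of the stricter rule, here is a minimal,
self-contained check, not haproxy's actual parser, which accepts only the
single-digit form mandated by RFC7230 :

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Hedged sketch, not haproxy's parser: RFC7230 defines
     * HTTP-version as "HTTP/" DIGIT "." DIGIT, one digit each side. */
    static int is_valid_http_version(const char *v)
    {
        return strlen(v) == 8 &&
               memcmp(v, "HTTP/", 5) == 0 &&
               isdigit((unsigned char)v[5]) &&
               v[6] == '.' &&
               isdigit((unsigned char)v[7]);
    }

    int main(void)
    {
        printf("%d\n", is_valid_http_version("HTTP/1.1"));  /* 1 */
        printf("%d\n", is_valid_http_version("HTTP/1.01")); /* 0: multi-digit minor */
        return 0;
    }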
The spec mandates that content-length must be removed from messages if
Transfer-Encoding is present, not just for valid ones.
This must be backported to 1.5 and 1.4.
(cherry picked from commit b4d0c03aee283e286880f93c4a8c053772a430c8)
The rules related to how to handle a bad transfer-encoding header (one
where "chunked" is not at the final place) have evolved to mandate an
abort when this happens in the request. Previously it was only a close
(which is still valid for the server side).
This must be backported to 1.5 and 1.4.
(cherry picked from commit 34dfc605717e6e196a2a8553254437fcc4844cbc)
While Transfer-Encoding is HTTP/1.1, we must still parse it in HTTP/1.0
in case an agent sends it, because it's likely that the other side might
use it as well, causing confusion. This will also result in getting rid
of the Content-Length header in such abnormal situations and in having
a clean connection.
This must be backported to 1.5 and 1.4.
(cherry picked from commit 4979d5c5d100e7064862d2f586b65184d68bcd75)
Let's now use the text from RFC7230 which is stricter and more precise.
This must be backported to 1.5 and 1.4.
(cherry picked from commit 557f199fb72777baf789bf662359735ddbf3b2aa)
RFC7230 clarified the behaviour to adopt when facing both a
content-length and a transfer-encoding: chunked in a message. While
haproxy already complied with the method for getting the message
length right, and used to detect improper content-length duplicates,
it still did not remove the content-length header when facing a
transfer-encoding: chunked. Usually it is not a problem since other
agents (clients and servers) are required to parse the message
according to the rules that have been in place since RFC2616 in
1999.
However Régis Leroy reported the existence of at least one such
non-compliant agent, with which haproxy could be abused to get out of
sync on pipelined requests (an HTTP request smuggling attack), making
it consider part of a payload as a subsequent request.
The best thing to do is then to remove the content-length according
to RFC7230. It used to be in the todo list with a fixme in the code
while waiting for the standard to stabilize, let's apply it now that
it's published.
Thanks to Régis for bringing that subject to our attention.
This fix must be backported to 1.5 and 1.4.
(cherry picked from commit 1c91391df4e255bbf444dac8001bda1164524769)
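A standalone toy, not haproxy's header handling, showing the effect of
the RFC7230 rule on a small, made-up header list :

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* When Transfer-Encoding is present, Content-Length is dropped so
     * both sides frame the message by the chunked rules only. */
    int main(void)
    {
        const char *headers[] = {
            "Host: example.org",
            "Transfer-Encoding: chunked",
            "Content-Length: 42",
        };
        int n = 3, has_te = 0, i;

        for (i = 0; i < n; i++)
            if (!strncasecmp(headers[i], "Transfer-Encoding:", 18))
                has_te = 1;

        for (i = 0; i < n; i++) {
            if (has_te && !strncasecmp(headers[i], "Content-Length:", 15))
                continue;                 /* removed per RFC7230 */
            puts(headers[i]);
        }
        return 0;
    }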
It happened after changing the stream interface deinitialization
sequence that we got random crashes with si_shutw() being called
on NULL si->end. The reason was that si->ops was not reset after
a call to si_release_endpoint() which is sometimes called directly.
Thus we now move the resetting of si->ops just after any si->end
assignment. It happens that si_detach() is now just the same as
si_release_endpoint() and stream_int_unregister_handler(). Some
cleanup will have to be performed there.
It's not certain whether this problem can impact 1.5 since in 1.5
applets are part of the default embedded stream handler. The only
way it could cause some trouble is if it's used with a connection,
which doesn't seem possible at first glance.
(cherry picked from commit 7365dad40f977a1535a2cea3963a3b1098da813a)
We have to allow 32 or 64 processes depending on the machine's word
size, but on 64-bit machines only the first 32 processes were properly
bound.
This fix should be backported to 1.5.
(cherry picked from commit e759749b50417895632c4e4481434f947176f28c)
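The word-size dependency can be illustrated with a small standalone
program (not haproxy code); the key point is shifting a long ("1UL")
rather than an int so that bits 32 to 63 are reachable on 64-bit
machines :

    #include <stdio.h>

    int main(void)
    {
        /* The usable process mask width follows the machine's word
         * size: 32 bits on 32-bit machines, 64 on 64-bit ones. */
        unsigned long mask = 0;
        int nbproc = 8 * (int)sizeof(long);   /* 32 or 64 */
        int proc;

        for (proc = 0; proc < nbproc; proc++)
            mask |= 1UL << proc;  /* 1UL, not 1: an int shift cannot set bits 32..63 */
        printf("nbproc=%d mask=%#lx\n", nbproc, mask);
        return 0;
    }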
Pavlos Parissis reported that a sequence of disable/enable on a frontend
performed on the CLI can result in an error if the frontend has several
"bind" lines each bound to different processes. This is because the
resume_listener() function returns a failure for frontends not part of
the current process instead of returning a success to pretend there was
no failure.
This fix should be backported to 1.5.
(cherry picked from commit af2fd584f32ec72b3d6d27a915f15df8041b56e7)
It's documented that these sample fetch functions should count all headers
and/or all values when called with no name, but in practice that's not what
is done: a missing name causes an immediate return and an absence of
result.
This bug is present in 1.5 as well and must be backported.
(cherry picked from commit 601a4d1741100d7a861b6d9b66561335c9911277)
When checking if the buffer is large enough, we used to rely on a fixed
size that was "apparently" enough. We need to consider the expansion
factor of deflate-encoded streams instead, which is 5 bytes per 32kB.
The previous value was OK till 128kB buffers but became wrong past that.
It's totally harmless since we always keep the reserve when compressing,
so there's 1kB or so available, which is enough for buffers as large as
6.5 MB, but better fix the check anyway.
This fix could be backported into 1.5 since compression was added there.
(cherry picked from commit 2aee2215c908c6997addcd1714b5b10f73c0703d)
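For illustration, a tiny standalone computation of the worst-case margin
under the 5-bytes-per-32kB figure quoted above (the exact constant term
is an assumption for the example, not taken from the patch) :

    #include <stdio.h>

    int main(void)
    {
        unsigned long len;

        /* Worst-case deflate expansion: about 5 bytes of block header
         * per 32kB of input, plus a small constant (assumed here). */
        for (len = 32768; len <= 1048576; len *= 2) {
            unsigned long margin = 5 * (len / 32768 + 1) + 6;
            printf("buffer %7lu bytes -> worst-case margin %lu bytes\n",
                   len, margin);
        }
        return 0;
    }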
These functions used an invalid header parser.
- The trailing white-spaces were embedded in the replacement regex,
- Double-quoted (") values containing commas (,) were not respected.
This patch replaces that parser with the "official" parser
http_find_header2().
(cherry picked from commit 191f9efdc58f21af1d9dde3db5ba198d7f1ce22e)
The function http_replace_value used the wrong variable to detect the end
of the input string.
Regression introduced by the patch "MEDIUM: regex: Remove null
terminated strings." (c9c2daf2)
We need to backport this patch into the 1.5 stable branch.
WT: there is no possibility to overwrite existing data as we only read
past the end of the request buffer, to copy into the trash. The copy
is bounded by buffer_replace2(), just like the replacement performed
by exp_replace(). However if a buffer happens to contain non-zero data
up to the next unmapped page boundary, there's a theoretical risk of
crashing the process despite this not being reproducible in tests.
The risk is low because "http-request replace-value" did not work due
to this bug so that probably means it's not used yet.
(cherry picked from commit 534101658d6e19aeb598bf7833a8ce167498c4ed)
Space is unavailable only if the end of the data inserted is strictly
greater than the end of the buffer. If these two values are equal, the
space is available.
(cherry picked from commit fdda6777bffb4f933569c609ba54e24ea5eabf29)
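A standalone illustration of this off-by-one (the buffer and offsets are
made up; computing a one-past-the-end pointer is legal in C, so the
comparison itself is well defined) :

    #include <stdio.h>

    int main(void)
    {
        char buf[8];
        char *buf_end = buf + sizeof(buf);
        char *write_end = buf + 5 + 3;  /* ends exactly at the buffer end */

        /* the old ">=" test rejected an exact fit; only ">" overflows */
        printf("old check (>=): %s\n", write_end >= buf_end ? "rejected" : "ok");
        printf("new check (>):  %s\n", write_end >  buf_end ? "rejected" : "ok");
        return 0;
    }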
The peers frontend timeout was mistakenly set on timeout.connect instead
of timeout.client, resulting in no timeout being applied to the peers
connections. The impact is just that peers can establish connections and
remain connected until they speak. Once they start speaking, only one of
them will still be accepted, and old sessions will be killed, so the
problem is limited. This fix should however be backported to 1.5 since
it was introduced in 1.5-dev3 with peers.
(cherry picked from commit 9ff95bb18c4cd9ae747fa5b3bef6d3f94e54172f)
As a failure to connect to the agent is not sufficient to mark it as
failed, it stands to reason that an L7 error shouldn't be either.
Without this fix, if an L7 error occurs, for example if connectivity to the
agent is lost immediately after establishing a connection to it, then the
agent check will be considered to have failed and thus may end up with zero
health. Once this has occurred if the primary health check also reaches
zero health, which is likely if connectivity to the server is lost, then
the server will be marked as down and not be marked as up again until a
successful agent check occurs regardless of the success of any primary
health checks.
This behaviour is not correct as a failed agent check should never cause a
server to be marked as down or by extension continue to be marked as down.
Signed-off-by: Simon Horman <horms@verge.net.au>
(cherry picked from commit eaabd52e29a29187f9829fe727028a6ca530cbf9)
ACL or map entries are not deleted by the commands "del acl" or "del map"
if the case-insensitive flag is set.
This is because case-insensitive strings are stored in a list, while the
default delete function associated with strings looks in a tree. I add a
check of the case-insensitive flag and execute the delete function for
lists if it is set.
This patch must be backported to the 1.5 version.
(cherry picked from commit 73bc285be194f443dc7eab9c949e87e1dbe8f70c)
Released version 1.5.11 with the following main changes :
- BUG/MEDIUM: backend: correctly detect the domain when use_domain_only is used
- MINOR: ssl: load certificates in alphabetical order
- BUG/MINOR: checks: prevent http keep-alive with http-check expect
- BUG/MEDIUM: Do not set agent health to zero if server is disabled in config
- MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health is zero
- BUG/MINOR: stats: Fix incorrect printf type.
- DOC: add missing entry for log-format and clarify the text
- BUG/MEDIUM: http: fix header removal when previous header ends with pure LF
- BUG/MEDIUM: channel: fix possible integer overflow on reserved size computation
- BUG/MINOR: channel: compare to_forward with buf->i, not buf->size
- MINOR: channel: add channel_in_transit()
- MEDIUM: channel: make buffer_reserved() use channel_in_transit()
- MEDIUM: channel: make bi_avail() use channel_in_transit()
- BUG/MEDIUM: channel: don't schedule data in transit for leaving until connected
- BUG/MAJOR: log: don't try to emit a log if no logger is set
- BUG/MINOR: args: add missing entry for ARGT_MAP in arg_type_names
- BUG/MEDIUM: http: make http-request set-header compute the string before removal
- BUG/MINOR: http: fix incorrect header value offset in replace-hdr/replace-value
- BUG/MINOR: http: abort request processing on filter failure
The value is defined in include/types/global.h to be an unsigned int.
The format type used in the printf is for a signed int. This eventually
wraps around.
WT: This bug was introduced in 1.5.
(cherry picked from commit b197d7f433fc126f754df3923761d2dcc0893f69)
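A standalone demonstration of the wrap-around (the counter value is made
up; passing an unsigned int to "%d" is strictly undefined behaviour,
which is exactly why the format had to be fixed) :

    #include <stdio.h>

    int main(void)
    {
        unsigned int counter = 3000000000u;  /* larger than INT_MAX */

        printf("%d\n", counter);   /* bug: typically prints -1294967296 */
        printf("%u\n", counter);   /* fix: prints 3000000000 */
        return 0;
    }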
Commit c600204 ("BUG/MEDIUM: regex: fix risk of buffer overrun in
exp_replace()") added a control of failure on the response headers,
but forgot to check for the error during request processing. So if
the filters fail to apply, we could keep the request. It might
cause some headers to silently fail to be added for example. Note
that it's tagged MINOR because a standard configuration cannot make
this case happen.
The fix should be backported to 1.5 and 1.4 though.
(cherry picked from commit 34d4c3c13f0172f0f8f0dd99f92c61e7eb78e98f)
Sébastien Rohaut reported that string negation in http-check expect didn't
work as expected.
The misbehaviour is caused by responses with HTTP keep-alive. When the
condition is not met, haproxy awaits more data until the buffer is full or the
connection is closed, resulting in a check timeout when "timeout check" is
lower than the keep-alive timeout on the server side.
In order to avoid the issue, when a "http-check expect" is used, haproxy will
ask the server to disable keep-alive by automatically appending a
"Connection: close" header to the request.
(cherry picked from commit 32602d23610981b48143d1f82885b8cfae286e0f)
The two http-req/http-resp actions "replace-hdr" and "replace-value" were
expecting exactly one space after the colon, which is wrong. It was causing
the first char not to be seen/modified when no space was present, and empty
headers not to be modified either. Instead of using name->len+2, we must use
ctx->val which points to the first character of the value even if there is
no value.
This fix must be backported into 1.5.
(cherry picked from commit aa435e7d7eb1a2dd5f2b9b8008e352a816946ccc)
As reported by Raphaël Enrici, certificates loaded from a directory are
loaded in an unpredictable order. If no certificate was first loaded from a
file, it can result in different behaviours when haproxy is used in a
cluster.
We can also imagine other cases which weren't met yet.
Instead of using readdir(), we can use scandir() and sort files
alphabetically. This will ensure predictable behaviour.
This patch should also be backported to 1.5.
(cherry picked from commit 3180f7b55434f19ba65f262dea450c93c769228b)
Make the check used to explicitly report "DOWN (agent)" slightly
more restrictive such that it only triggers if the agent health is zero.
This avoids the following problem.
1. Backend is started disabled, agent check is enabled
2. Backend is enabled using set server vip/rip state ready
3. Health is marked as down using set server vip/rip health down
At this point the http stats page will report "DOWN (agent)"
but the backend being down has nothing to do with the agent check
This problem appears to have been introduced by cf2924bc2537bb08c
("MEDIUM: stats: report down caused by agent prior to reporting up").
Note that "DOWN (agent)" may also be reported by a more generic conditional
which immediately follows the code changed by this patch.
Reported-by: Mark Brooks <mark@loadbalancer.org>
Signed-off-by: Simon Horman <horms@verge.net.au>
(cherry picked from commit 0766e441dd32b39ab884f5882e35e082a1da7a7e)
disable starts a server in the disabled state, however setting the health
of an agent implies that the agent is disabled as well as the server.
This is a problem because the state of the agent is not restored if
the state of the server is subsequently updated leading to an
unexpected state.
For example, if a server is started disabled and then the server
state is set to ready then without this change show stat indicates
that the server is "DOWN (agent)" when it is expected that the server
would be UP if its (non-agent) health check passes.
Reported-by: Mark Brooks <mark@loadbalancer.org>
Signed-off-by: Simon Horman <horms@verge.net.au>
(cherry picked from commit 1a23cf0dfbdd117bce46bc3d75f2e0d7b3343958)
The way http-request/response set-header works is stupid. For a naive
reuse of the del-header code, it removes all occurrences of the header
to be set before computing the new format string. This makes it almost
unusable because it is not possible to append values to an existing
header without first copying them to a dummy header, performing the
copy back and removing the dummy header.
Instead, let's share the same code as add-header and perform the optional
removal after the string is computed. That way it becomes possible to
write things like :
http-request set-header X-Forwarded-For %[hdr(X-Forwarded-For)],%[src]
Note that this change is not expected to have any undesirable impact on
existing configs since if they rely on the bogus behaviour, they don't
work as they always retrieve an empty string.
This fix must be backported to 1.5 to stop the spread of ugly configs.
(cherry picked from commit 85603282111d6fecb4a703f4af156c69a983cc5e)
This type is currently not used in an argument so it's harmless. But
it's better to fill in the name correctly.
(cherry picked from commit 53c250e1656dda7c9fcece5b688f0f3a9a3ec068)
send_log() calls update_hdr() to build a log header. It may happen
that no logger is defined at all but that we try to send a log anyway
(eg: upon startup). This results in a segfault when building the log
header because logline was never allocated.
This bug was revealed by the recent log-tag changes because the logline
is dereferenced after the call to snprintf(). So in 1.5 on most platforms
it has no impact because snprintf() will ignore NULL, but not necessarily
on all platforms.
The fix needs to be backported to 1.5.
(cherry picked from commit 8c97ab5eb2da447441932ad35b06dc94574140ad)
Option http-send-name-header is still hurting. If a POST request has to be
redispatched when this option is used, and the next server's name is larger
than the initial one, and the POST body fills the buffer, it becomes
impossible to rewrite the server's name in the buffer when redispatching.
In 1.4 this is worse: the process may crash because of a negative size
computation for the memmove().
The only solution to fix this is to refrain from eating the reserve before
we're certain that we won't modify the buffer anymore. And the condition for
that is that the connection is established.
This patch introduces "channel_may_send()" which helps to detect whether it's
safe to eat the reserve or not. This condition is used by channel_in_transit()
introduced by recent patches.
This patch series must be backported into 1.5, and a simpler version must be
backported into 1.4 where fixing the bug is much easier since there were no
channels by then. Note that in 1.4 the severity is major.
(cherry picked from commit 9c06ee4ccfded412739fb94260ba105be65651f8)
This ensures that we rely on a sane computation for the buffer size.
(cherry picked from commit 27bb0e14a8ae17d4bdf4ca8bd73e1ceb79ca4a14)
[wt: this is not a bug but the next bugfix needs it]
This ensures that we rely on a sane computation for the buffer size.
(cherry picked from commit fe5783495546da274c0270f0f59e3f84de3bf790)
[wt: this is not a bug but the next bugfix needs it]
This function returns the amount of bytes in transit in a channel's buffer,
which is the amount of outgoing data plus the amount of incoming data bound
to the forward limit.
(cherry picked from commit 1a4484dec8488cdd14c52e6cb93198e3c8f83942)
[wt: this is not a bug but the next bugfix needs it]
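For illustration, a hedged sketch of the computation (simplified names
and signature; haproxy's real function works on the channel structure
itself and deals with special cases such as unlimited forwarding, which
are omitted here) :

    #include <stdio.h>

    /* in-transit bytes = outgoing data already scheduled (out) plus
     * the incoming data (in) capped by the forward limit (to_forward) */
    static unsigned int in_transit(unsigned int out, unsigned int in,
                                   unsigned int to_forward)
    {
        return out + (in < to_forward ? in : to_forward);
    }

    int main(void)
    {
        /* 2kB scheduled out, 8kB received, 4kB allowed to forward */
        printf("%u\n", in_transit(2048, 8192, 4096));  /* 6144 */
        return 0;
    }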
We know that all incoming data are going to be purged if to_forward
is greater than them, not only if greater than the buffer size. This
bug has no direct impact on this version, but it participates in some
bugs affecting http-send-name-header since 1.4. This fix will have to
be backported down to 1.4, albeit in a different form.
(cherry picked from commit bb3f994f1a379289534c4ccf0f790df7b69d2611)
The buffer_max_len() function is subject to an integer overflow in this
calculation :
int ret = global.tune.maxrewrite - chn->to_forward - chn->buf->o;
- chn->to_forward may be up to 2^31 - 1
- chn->buf->o may be up to chn->buf->size
- global.tune.maxrewrite is by definition smaller than chn->buf->size
Thus here we can subtract up to (2^31 + buf->o) from something slightly
positive; the result should be hugely negative but wraps, leaving ret much
larger than expected.
Fortunately in 1.5 and 1.6, this is only used by bi_avail() which itself
is used by applets which do not set high values for to_forward so this
problem does not happen there. However in 1.4 the equivalent computation
was used to limit the size of a read and can result in a read overflow
when combined with the nasty http-send-name-header feature.
This fix must be backported to 1.5 and 1.4.
(cherry picked from commit 0428a146c0ee776f7c89359e8aed0d755ffbad07)
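A standalone reproduction of the wrap (the values are made up;
to_forward and o are unsigned as in the channel structure, so the
arithmetic is done in unsigned and the value stored in the int comes out
large and positive instead of hugely negative) :

    #include <stdio.h>

    int main(void)
    {
        int maxrewrite = 1024;                  /* global.tune.maxrewrite */
        unsigned int to_forward = 2147483647u;  /* close to 2^31 - 1 */
        unsigned int o = 8192;                  /* chn->buf->o */

        int ret = maxrewrite - to_forward - o;
        printf("ret = %d\n", ret);              /* 2147476481, not negative */
        return 0;
    }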
In 1.4-dev7, a header removal mechanism was introduced with commit 68085d8
("[MINOR] http: add http_remove_header2() to remove a header value."). Due
to a typo in the function, the beginning of the headers gets desynchronized
if the header preceding the deleted one ends with an LF/CRLF combination
different from the one of the removed header. The reason is that while
rewinding the pointer, we go back by a number of bytes taking into account
the LF/CRLF status of the removed header instead of the previous one. The
case where it fails is in http-request del-header/set-header where multiple
occurrences of a header are present and their LF/CRLF ending differs from
the preceding header's. The loop then stops because no more headers are
found given that the names and length do not match.
Another point to take into consideration is that removing headers using
a loop of http_find_header2() and this function is inefficient since we
remove values one at a time while it could be simpler and faster to
remove full header lines. This is something that should be addressed
separately.
This fix must be backported to 1.5 and 1.4. Note that http-send-name-header
relies on this function as well so it could be possible that some of the
issues encountered with it in 1.4 come from this bug.
(cherry picked from commit 7c1c21742639e57919b315fe02f6592c832be146)
There was no "log-format" entry in the keyword index! This should be
backported to 1.5.
(cherry picked from commit fb4e7ea30745a6a3a12077fa75f0cd1ba422a01b)
balance hdr(<name>) provides an option 'use_domain_only' to match only the
domain part in a header (designed for the Host header).
Olivier Fredj reported that the hashes were not the same for
'subdomain.domain.tld' and 'domain.tld'.
This is because the pointer was rewound one step too far, resulting in a
hash calculated against the wrong values :
- '.domai' for 'subdomain.domain.tld'
- ' domai' for 'domain.tld' (beginning with the space in the header line)
Another special case is when no dot can be found in the header : the hash will
be calculated against an empty string.
The patch addresses both cases : 'domain' will be used to compute the hash for
'subdomain.domain.tld', 'domain.tld' and 'domain' (using the whole header value
for the last case).
The fix must be backported to haproxy 1.5 and 1.4.
(cherry picked from commit f607d81d09ab839fb1143b749ff231d6093f2038)
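A hedged standalone sketch of the intended hash input after the fix (not
haproxy's code; the helper below merely extracts the label the commit
message says should be hashed in each of the three cases) :

    #include <stdio.h>
    #include <string.h>

    /* returns the domain label before the TLD, or the whole value
     * when no dot is present */
    static void domain_part(const char *host, const char **start, size_t *len)
    {
        const char *last = strrchr(host, '.');
        const char *p;

        if (!last) {                    /* "domain": use the whole value */
            *start = host;
            *len = strlen(host);
            return;
        }
        p = last;
        while (p > host && p[-1] != '.')   /* walk back over the label */
            p--;
        *start = p;
        *len = (size_t)(last - p);         /* "domain" without the dot */
    }

    int main(void)
    {
        const char *tests[] = { "subdomain.domain.tld", "domain.tld", "domain" };
        int i;

        for (i = 0; i < 3; i++) {
            const char *s; size_t n;
            domain_part(tests[i], &s, &n);
            printf("%-22s -> %.*s\n", tests[i], (int)n, s);
        }
        return 0;
    }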
Released version 1.5.10 with the following main changes :
- DOC: fix a few typos
- BUG/MINOR: http: fix typo: "401 Unauthorized" => "407 Unauthorized"
- BUG/MINOR: parse: refer curproxy instead of proxy
- DOC: httplog does not support 'no'
- MINOR: map/acl/dumpstats: remove the "Done." message
- BUG/MEDIUM: sample: fix random number upper-bound
- BUG/MEDIUM: patterns: previous fix was incomplete
- BUG/MEDIUM: payload: ensure that a request channel is available
- BUG/MINOR: tcp-check: don't condition data polling on check type
- BUG/MEDIUM: tcp-check: don't rely on random memory contents
- BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
- BUG/MINOR: config: fix typo in condition when propagating process binding
- BUG/MEDIUM: config: do not propagate processes between stopped processes
- BUG/MAJOR: stream-int: properly check the memory allocation return
- BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
- BUG/MEDIUM: compression: correctly report zlib_mem
In zlib we track memory usage. The problem is that the way alloc_zlib()
and free_zlib() account for memory is different, resulting in variations
that can lead to negative zlib_mem being reported. The alloc() function
uses the requested size while the free() function uses the pool size. The
difference can happen when pools are shared with other pools of similar
size. The net effect is that zlib_mem can be reported negative with a
slowly decreasing count, and over the long term the limit will not be
enforced anymore.
The fix is simple : let's use the pool size in both cases, which is also
the exact value when it comes to memory usage.
This fix must be backported to 1.5.
(cherry picked from commit 4f31fc2f28f09cb1bcd27af3367ee5acea29b656)
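A standalone illustration of the drift (the sizes are invented; the
point is only that accounting different quantities on alloc and free
accumulates an error) :

    #include <stdio.h>

    int main(void)
    {
        long zlib_mem = 0;
        size_t requested = 1000;  /* size passed to alloc_zlib() */
        size_t pool_size = 1024;  /* size of the shared pool entry */

        zlib_mem += requested;    /* buggy: alloc accounts the request */
        zlib_mem -= pool_size;    /* free accounts the pool size */
        printf("drift per alloc/free cycle: %ld\n", zlib_mem);  /* -24 */

        /* fix: account pool_size on both sides, so the drift is zero */
        return 0;
    }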
There's a long-standing bug in pool_gc2(). It tries to protect the pool
against releasing of too many entries but the formula is wrong as it
compares allocated to minavail instead of (allocated-used) to minavail.
Under memory contention, it ends up releasing more than what is granted
by minavail and causes trouble to the dynamic buffer allocator.
This bug is in fact major by itself, but since minavail has never been
used till now, there is no impact at least in mainline. A backport to
1.5 is desired anyway in case any future backport or out-of-tree patch
relies on this.
(cherry picked from commit 57767b80329ceade67302aed4fd9760ed5f3d644)
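A hedged sketch of the corrected condition (the names are illustrative,
taken from the description above rather than from the actual code) :

    #include <stdio.h>

    /* only spare entries (allocated - used) may be released, and the
     * release must stop before they would drop below minavail */
    static int may_release(unsigned int allocated, unsigned int used,
                           unsigned int minavail)
    {
        /* the buggy form compared "allocated" alone to minavail */
        return allocated - used > minavail;
    }

    int main(void)
    {
        /* 100 allocated, 95 in use, keep at least 8 spare entries:
         * the old check (100 > 8) would free, the fixed one refuses */
        printf("%d\n", may_release(100, 95, 8));  /* 0 */
        return 0;
    }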