Sébastien Rohaut reported that string negation in http-check expect didn't
work as expected.
The misbehaviour is caused by responses with HTTP keep-alive. When the
condition is not met, haproxy awaits more data until the buffer is full or the
connection is closed, resulting in a check timeout when "timeout check" is
lower than the keep-alive timeout on the server side.
In order to avoid the issue, when a "http-check expect" is used, haproxy will
ask the server to disable keep-alive by automatically appending a
"Connection: close" header to the request.
(cherry picked from commit 32602d23610981b48143d1f82885b8cfae286e0f)
The two http-req/http-resp actions "replace-hdr" and "replace-value" were
expecting exactly one space after the colon, which is wrong. It was causing
the first char not to be seen/modified when no space was present, and empty
headers not to be modified either. Instead of using name->len+2, we must use
ctx->val which points to the first character of the value even if there is
no value.
This fix must be backported into 1.5.
(cherry picked from commit aa435e7d7eb1a2dd5f2b9b8008e352a816946ccc)
As reported by Raphaël Enrici, certificates loaded from a directory are loaded
in a non-predictable order. If no certificate was first loaded from a file, this
can result in different behaviours when haproxy is used in a cluster.
We can also imagine other cases which weren't met yet.
Instead of using readdir(), we can use scandir() and sort files alphabetically.
This ensures a predictable behaviour.
This patch should also be backported to 1.5.
(cherry picked from commit 3180f7b55434f19ba65f262dea450c93c769228b)
Make the check used to explicitly report "DOWN (agent)" slightly
more restrictive so that it only triggers if the agent health is zero.
This avoids the following problem.
1. Backend is started disabled, agent check is enabled
2. Backend is enabled using set server vip/rip state ready
3. Health is marked as down using set server vip/rip health down
At this point the http stats page will report "DOWN (agent)",
but the backend being down has nothing to do with the agent check.
This problem appears to have been introduced by cf2924bc2537bb08c
("MEDIUM: stats: report down caused by agent prior to reporting up").
Note that "DOWN (agent)" may also be reported by a more generic conditional
which immediately follows the code changed by this patch.
Reported-by: Mark Brooks <mark@loadbalancer.org>
Signed-off-by: Simon Horman <horms@verge.net.au>
(cherry picked from commit 0766e441dd32b39ab884f5882e35e082a1da7a7e)
"disable" starts a server in the disabled state; however, setting the health
of an agent implies that the agent is disabled as well as the server.
This is a problem because the state of the agent is not restored if
the state of the server is subsequently updated, leading to an
unexpected state.
For example, if a server is started disabled and then the server
state is set to ready then without this change show stat indicates
that the server is "DOWN (agent)" when it is expected that the server
would be UP if its (non-agent) health check passes.
Reported-by: Mark Brooks <mark@loadbalancer.org>
Signed-off-by: Simon Horman <horms@verge.net.au>
(cherry picked from commit 1a23cf0dfbdd117bce46bc3d75f2e0d7b3343958)
The way http-request/response set-header works is stupid. Due to a naive
reuse of the del-header code, it removes all occurrences of the header
to be set before computing the new format string. This makes it almost
unusable because it is not possible to append values to an existing
header without first copying them to a dummy header, performing the
copy back and removing the dummy header.
Instead, let's share the same code as add-header and perform the optional
removal after the string is computed. That way it becomes possible to
write things like :
http-request set-header X-Forwarded-For %[hdr(X-Forwarded-For)],%[src]
Note that this change is not expected to have any undesirable impact on
existing configs since if they rely on the bogus behaviour, they don't
work as they always retrieve an empty string.
This fix must be backported to 1.5 to stop the spread of ugly configs.
(cherry picked from commit 85603282111d6fecb4a703f4af156c69a983cc5e)
This type is currently not used in an argument so it's harmless. But
it's better to fill in the name correctly.
(cherry picked from commit 53c250e1656dda7c9fcece5b688f0f3a9a3ec068)
send_log() calls update_hdr() to build a log header. It may happen
that no logger is defined at all but that we try to send a log anyway
(eg: upon startup). This results in a segfault when building the log
header because logline was never allocated.
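A minimal sketch of the guard, assuming a file-scope logline buffer as in
the description above :

    static char *logline;   /* allocated only once a logger is configured */

    static void update_hdr_sketch(void)
    {
        if (!logline)        /* no logger defined: nothing to build into */
            return;
        /* ... format the syslog header into logline here ... */
    }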
This bug was revealed by the recent log-tag changes because the logline
is dereferenced after the call to snprintf(). So in 1.5 on most platforms
it has no impact because snprintf() will ignore NULL, but not necessarily
on all platforms.
The fix needs to be backported to 1.5.
(cherry picked from commit 8c97ab5eb2da447441932ad35b06dc94574140ad)
Option http-send-name-header is still hurting. If a POST request has to be
redispatched when this option is used, and the next server's name is larger
than the initial one, and the POST body fills the buffer, it becomes
impossible to rewrite the server's name in the buffer when redispatching.
In 1.4 this is worse: the process may crash because of a negative size
computation for the memmove().
The only solution to fix this is to refrain from eating the reserve before
we're certain that we won't modify the buffer anymore. And the condition for
that is that the connection is established.
This patch introduces "channel_may_send()" which helps to detect whether it's
safe to eat the reserve or not. This condition is used by channel_in_transit()
introduced by recent patches.
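A hedged sketch with simplified types; the field and state names mimic
1.5-era conventions but are assumptions here :

    enum si_state { SI_ST_INI, SI_ST_EST };
    struct stream_interface { enum si_state state; };
    struct channel { struct stream_interface *cons; };

    /* It only becomes safe to eat the reserve once the consumer side
     * of the channel is an established connection. */
    static inline int channel_may_send(const struct channel *chn)
    {
        return chn->cons->state == SI_ST_EST;
    }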
This patch series must be backported into 1.5, and a simpler version must be
backported into 1.4 where fixing the bug is much easier since there were no
channels by then. Note that in 1.4 the severity is major.
(cherry picked from commit 9c06ee4ccfded412739fb94260ba105be65651f8)
This ensures that we rely on a sane computation for the buffer size.
(cherry picked from commit 27bb0e14a8ae17d4bdf4ca8bd73e1ceb79ca4a14)
[wt: this is not a bug but the next bugfix needs it]
This ensures that we rely on a sane computation for the buffer size.
(cherry picked from commit fe5783495546da274c0270f0f59e3f84de3bf790)
[wt: this is not a bug but the next bugfix needs it]
This function returns the amount of bytes in transit in a channel's buffer,
which is the amount of outgoing data plus the amount of incoming data bound
to the forward limit.
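A hedged sketch of that computation with simplified types, ignoring the
infinite-forward special case :

    struct buffer { int i, o; };     /* input and output byte counts */
    struct channel { struct buffer *buf; unsigned int to_forward; };

    static inline int channel_in_transit(const struct channel *chn)
    {
        unsigned int fwd = chn->to_forward;

        if (fwd > (unsigned int)chn->buf->i)  /* only data already present counts */
            fwd = chn->buf->i;
        return chn->buf->o + fwd;
    }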
(cherry picked from commit 1a4484dec8488cdd14c52e6cb93198e3c8f83942)
[wt: this is not a bug but the next bugfix needs it]
We know that all incoming data are going to be purged if to_forward
is greater than their amount, not only if it is greater than the buffer size.
This bug has no direct impact on this version, but it contributes to some
bugs affecting http-send-name-header since 1.4. This fix will have to
be backported down to 1.4 albeit in a different form.
(cherry picked from commit bb3f994f1a379289534c4ccf0f790df7b69d2611)
The buffer_max_len() function is subject to an integer overflow in this
computation :
int ret = global.tune.maxrewrite - chn->to_forward - chn->buf->o;
- chn->to_forward may be up to 2^31 - 1
- chn->buf->o may be up to chn->buf->size
- global.tune.maxrewrite is by definition smaller than chn->buf->size
Thus here we can subtract (2^31 + buf->o) (highly negative) from something
slightly positive, and end up with ret being larger than expected.
Fortunately in 1.5 and 1.6, this is only used by bi_avail() which itself
is used by applets which do not set high values for to_forward so this
problem does not happen there. However in 1.4 the equivalent computation
was used to limit the size of a read and can result in a read overflow
when combined with the nasty http-send-name-header feature.
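A hedged sketch of an overflow-safe form, comparing the large operands
before subtracting so nothing can wrap (parameter names mirror the fields
above) :

    static int max_len_sketch(int size, int maxrewrite,
                              unsigned int to_forward, int o)
    {
        if (to_forward >= (unsigned int)maxrewrite)
            return size;                  /* reserve fully covered */
        if (o >= maxrewrite - (int)to_forward)
            return size;
        return size - (maxrewrite - (int)to_forward - o);
    }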
This fix must be backported to 1.5 and 1.4.
(cherry picked from commit 0428a146c0ee776f7c89359e8aed0d755ffbad07)
In 1.4-dev7, a header removal mechanism was introduced with commit 68085d8
("[MINOR] http: add http_remove_header2() to remove a header value."). Due
to a typo in the function, the beginning of the headers gets desynchronized
if the header preceding the deleted one ends with an LF/CRLF combination
different from the one of the removed header. The reason is that while
rewinding the pointer, we go back by a number of bytes taking into account
the LF/CRLF status of the removed header instead of the previous one. The
case where it fails is in http-request del-header/set-header where
multiple occurrences of a header are present and their LF/CRLF ending
differs from the preceding header's. The loop then stops because no more
headers are found given that the names and lengths do not match.
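A hedged illustration with simplified hdr_idx-style fields, where cr is 1
for CRLF and 0 for a bare LF :

    struct hdr { int len; int cr; };   /* cr: 1 if the line ends in CRLF */

    /* Rewinding from the start of the removed header to the end of the
     * previous one must use the PREVIOUS header's terminator length: */
    static const char *prev_hdr_end(const char *removed_start,
                                    const struct hdr *prev)
    {
        return removed_start - (prev->cr + 1);
        /* the bug used the removed header's cr here, desynchronizing
         * parsing whenever the two terminators differed */
    }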
Another point to take into consideration is that removing headers using
a loop of http_find_header2() and this function is inefficient since we
remove values one at a time while it could be simpler and faster to
remove full header lines. This is something that should be addressed
separately.
This fix must be backported to 1.5 and 1.4. Note that http-send-name-header
relies on this function as well so it could be possible that some of the
issues encountered with it in 1.4 come from this bug.
(cherry picked from commit 7c1c21742639e57919b315fe02f6592c832be146)
There was no "log-format" entry in the keyword index! This should be
backported to 1.5.
(cherry picked from commit fb4e7ea30745a6a3a12077fa75f0cd1ba422a01b)
balance hdr(<name>) provides an option 'use_domain_only' to match only the
domain part in a header (designed for the Host header).
Olivier Fredj reported that the hashes were not the same for
'subdomain.domain.tld' and 'domain.tld'.
This is because the pointer was rewound one step too far, resulting in a hash
calculated against wrong values :
- '.domai' for 'subdomain.domain.tld'
- ' domai' for 'domain.tld' (beginning with the space in the header line)
Another special case is when no dot can be found in the header : the hash will
be calculated against an empty string.
The patch addresses both cases : 'domain' will be used to compute the hash for
'subdomain.domain.tld', 'domain.tld' and 'domain' (using the whole header value
for the last case).
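A hedged model of the corrected extraction, using a nul-terminated string
for simplicity (haproxy works on pointer/length pairs) :

    #include <string.h>

    /* Select the label to hash: the one right before the last dot, or
     * the whole value when there is no dot at all. */
    static void domain_label(const char *host, const char **beg, const char **end)
    {
        const char *last_dot = strrchr(host, '.');

        if (!last_dot) {                 /* "domain": hash it all */
            *beg = host;
            *end = host + strlen(host);
            return;
        }
        *end = last_dot;                 /* stop before ".tld" */
        *beg = host;
        for (const char *p = last_dot - 1; p > host; p--) {
            if (*p == '.') {
                *beg = p + 1;            /* start after the previous dot */
                break;
            }
        }
    }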
The fix must be backported to haproxy 1.5 and 1.4.
(cherry picked from commit f607d81d09ab839fb1143b749ff231d6093f2038)
Released version 1.5.10 with the following main changes :
- DOC: fix a few typos
- BUG/MINOR: http: fix typo: "401 Unauthorized" => "407 Unauthorized"
- BUG/MINOR: parse: refer curproxy instead of proxy
- DOC: httplog does not support 'no'
- MINOR: map/acl/dumpstats: remove the "Done." message
- BUG/MEDIUM: sample: fix random number upper-bound
- BUG/MEDIUM: patterns: previous fix was incomplete
- BUG/MEDIUM: payload: ensure that a request channel is available
- BUG/MINOR: tcp-check: don't condition data polling on check type
- BUG/MEDIUM: tcp-check: don't rely on random memory contents
- BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
- BUG/MINOR: config: fix typo in condition when propagating process binding
- BUG/MEDIUM: config: do not propagate processes between stopped processes
- BUG/MAJOR: stream-int: properly check the memory allocation return
- BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
- BUG/MEDIUM: compression: correctly report zlib_mem
In zlib we track memory usage. The problem is that the way alloc_zlib()
and free_zlib() account for memory is different, resulting in variations
that can lead to negative zlib_mem being reported. The alloc() function
uses the requested size while the free() function uses the pool size. The
difference can happen when pools are shared with other pools of similar
size. The net effect is that zlib_mem can be reported negative with a
slowly decreasing count, and over the long term the limit will not be
enforced anymore.
The fix is simple : let's use the pool size in both cases, which is also
the exact value when it comes to memory usage.
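A minimal model of the symmetric accounting, with hypothetical names :

    #include <stdlib.h>

    static long zlib_mem;   /* bytes currently accounted to zlib */

    static void *alloc_zlib_sketch(size_t pool_size)
    {
        void *buf = malloc(pool_size);

        if (buf)
            zlib_mem += pool_size;   /* pool size, not the requested size */
        return buf;
    }

    static void free_zlib_sketch(void *buf, size_t pool_size)
    {
        free(buf);
        zlib_mem -= pool_size;       /* symmetric: the counter cannot drift */
    }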
This fix must be backported to 1.5.
(cherry picked from commit 4f31fc2f28f09cb1bcd27af3367ee5acea29b656)
There's a long-standing bug in pool_gc2(). It tries to protect the pool
against releasing of too many entries but the formula is wrong as it
compares allocated to minavail instead of (allocated-used) to minavail.
Under memory contention, it ends up releasing more than what is granted
by minavail and causes trouble to the dynamic buffer allocator.
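A hedged sketch of the corrected loop, with simplified pool fields :

    #include <stdlib.h>

    struct pool {
        void *free_list;   /* chained spare entries */
        int allocated;     /* entries obtained from the system */
        int used;          /* entries currently handed out */
        int minavail;      /* spare entries that must be preserved */
    };

    static void gc_sketch(struct pool *p)
    {
        /* fixed guard: compare the spare count, not the total */
        while (p->free_list && p->allocated - p->used > p->minavail) {
            void *e = p->free_list;

            p->free_list = *(void **)e;
            free(e);
            p->allocated--;
        }
    }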
This bug is in fact major by itself, but since minavail has never been
used till now, there is no impact at least in mainline. A backport to
1.5 is desired anyway in case any future backport or out-of-tree patch
relies on this.
(cherry picked from commit 57767b80329ceade67302aed4fd9760ed5f3d644)
In stream_int_register_handler(), we call si_alloc_appctx(si) but by
mistake, instead of checking the return value for NULL, we test <si>.
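The pattern of the bug, as a stand-alone analog with illustrative names :

    #include <stdlib.h>

    struct appctx_like { int dummy; };

    static struct appctx_like *get_ctx(void *si)
    {
        struct appctx_like *appctx = malloc(sizeof(*appctx));

        /* was: if (!si) -- always false here, so an allocation failure
         * slipped through; the result itself must be tested: */
        if (!appctx)
            return NULL;
        (void)si;
        return appctx;
    }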
This bug was discovered under extreme memory contention (memory for only
two buffers with 500 connections waiting) and after 3 million failed
connections. While it was very hard to produce it, the fix is tagged
major because in theory it could happen when haproxy runs with a very
low "-m" setting preventing from allocating just the few bytes needed
for an appctx. But most users will never be able to trigger it. The
fix was confirmed to address the bug.
This fix must be backported to 1.5.
(cherry picked from commit a69fc9f803c05de93b03fc7d4a28d5c503c6d3c9)
By convention, the HAProxy CLI doesn't return a message when the operation
completes successfully. The MAP and ACL commands return a "Done." message,
and this noise clutters the output during big MAP or ACL injections.
(cherry picked from commit 07e78c50b524a25a8fa67f107429fc68d78e14f5)
[wt: this is not a bug but it's better to clean this up as it annoys users,
so it's worth backporting to stable]
Immo Goltz reported a case of segfault while parsing the config where
we try to propagate processes across stopped frontends (those with a
"disabled" statement). The fix is trivial. The workaround consists in
commenting out these frontends, although not always easy.
This fix must be backported to 1.5.
(cherry picked from commit f6b70013389cf9378c6a4d55d3d570de4f95c33c)
propagate_processes() has a typo in a condition :
if (!from->cap & PR_CAP_FE)
return;
The return is never taken because each proxy has at least one capability
so !from->cap always evaluates to zero. Most of the time the caller already
checks that <from> is a frontend. In the cases where it's not tested
(use_backend, reqsetbe), the rules have been checked for the context to
be a frontend as well, so in the end it had no nasty side effect.
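Since "!" binds tighter than "&" in C, the intended test is :

    if (!(from->cap & PR_CAP_FE))
        return;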
This should be backported to 1.5.
(cherry picked from commit 8a95d8cd61c8ec61b9e1c9c9e571405878a40624)
Since, during the parsing stage, curproxy always represents the proxy
currently being processed, referring to proxy instead is a mistake.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
(cherry picked from commit d972203fbc3aab527cbd73ede70f9d9b35ecb809)
random() will generate a number between 0 and RAND_MAX. POSIX mandates
RAND_MAX to be at least 32767. GNU libc uses 2^31 - 1 as
RAND_MAX.
In smp_fetch_rand(), a reduction is done with a multiply and shift to
avoid skewing the results. However, the shift was always 32 and hence
the numbers were not distributed uniformly in the specified range. We
fix that by dividing by RAND_MAX+1. gcc is smart enough to turn that
into a shift:
0x000000000046ecc8 <+40>: shr $0x1f,%rax
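The same reduction as a stand-alone C sketch :

    #include <stdint.h>
    #include <stdlib.h>

    /* Uniformly map random()'s [0, RAND_MAX] onto [0, range) by
     * multiplying first, then dividing by RAND_MAX+1. */
    static unsigned int rand_below(unsigned int range)
    {
        return ((uint64_t)random() * range) / ((uint64_t)RAND_MAX + 1);
    }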
(cherry picked from commit 1228dc0e7ae4a6b16c0c7f74a28f8e84601b526c)
Using "option tcp-checks" without any rule is different from not using
it at all in that checks are sent with the TCP quick ack mode enabled,
causing servers to log incoming port probes.
This commit fixes this behaviour by disabling quick-ack on tcp-checks
unless the next rule exists and is an expect. All combinations were
tested and now the behaviour is as expected : basic port probes are
now doing a SYN-SYN/ACK-RST sequence.
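A hedged, Linux-only sketch of the idea; haproxy's real code drives this
through its connection setup, so the helper below is illustrative :

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Before connecting, leave quick-ack off unless the rule following
     * the connect is an expect; a pure port probe then ends in
     * SYN, SYN/ACK, RST without being logged as a real connection. */
    static void prepare_check_socket(int fd, int next_rule_is_expect)
    {
        int zero = 0;

        if (!next_rule_is_expect)
            setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &zero, sizeof(zero));
    }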
This fix must be backported to 1.5.
(cherry picked from commit f3d3482c984593b21a6d3de6d66a83a338316b85)
If "option tcp-check" is used and no "tcp-check" rule is specified, we
only look at rule->action which dereferences the proxy's memory and which
can randomly match TCPCHK_ACT_CONNECT or whatever else, causing a check
to fail. This bug is the result of an incorrect fix attempted in commit
f621bea ("BUG/MINOR: tcpcheck connect wrong behavior").
This fix must be backported into 1.5.
(cherry picked from commit d2a49592faca66381303d8e0d63665ae44046def)
tcp_check_main() would condition the polling for writes on check->type,
but this is absurd given that check->type == PR_O2_TCPCHK_CHK since this
is the only way we can get there! This patch removes this confusing test.
(cherry picked from commit e7b9ed33ee7306c1f9fd276bb5e683befab06e82)
Denys Fedoryshchenko reported a segfault when using certain
sample fetch functions in the "tcp-request connection" rulesets
despite the warnings. This is because some tests for the existence
of the channel were missing.
The fetches which were fixed are :
- req.ssl_hello_type
- rep.ssl_hello_type
- req.ssl_sni
This fix must be backported to 1.5.
(cherry picked from commit 83f2592bcd2e186beeabcba16be16faaab82bd39)
Dmitry Sivachenko <trtrmitya@gmail.com> reported that commit 315ec42
("BUG/MEDIUM: pattern: don't load more than once a pattern list.")
relies on an uninitialised variable on the stack. While it used to
work fine during the tests, if the uninitialized variable is non-null,
some patterns may be aggregated if loaded multiple times, resulting in
slower processing, which was the original issue it tried to address.
The fix needs to be backported to 1.5.
(cherry picked from commit 4deaf39243c4d941998b1b0175bad05b8a287c0b)
Released version 1.5.9 with the following main changes :
- BUILD: fix "make install" to support spaces in the install dirs
- BUG/MEDIUM: checks: fix conflicts between agent checks and ssl healthchecks
- BUG/MEDIUM: ssl: fix bad ssl context init can cause segfault in case of OOM.
- BUG/MINOR: samples: fix unnecessary memcopy converting binary to string.
- BUG/MEDIUM: connection: sanitize PPv2 header length before parsing address information
- BUG/MEDIUM: pattern: don't load more than once a pattern list.
- BUG/MEDIUM: ssl: force a full GC in case of memory shortage
- BUG/MINOR: config: don't inherit the default balance algorithm in frontends
- BUG/MAJOR: frontend: initialize capture pointers earlier
- BUG/MINOR: stats: correctly set the request/response analysers
- DOC: fix typo in the body parser documentation for msg.sov
- BUG/MINOR: peers: the buffer size is global.tune.bufsize, not trash.size
- MINOR: sample: add a few basic internal fetches (nbproc, proc, stopping)
- BUG/MAJOR: sessions: unlink session from list on out of memory
Since embryonic sessions were introduced in 1.5-dev12 with commit
2542b53 ("MAJOR: session: introduce embryonic sessions"), a major
bug remained present. If haproxy cannot allocate memory during
session_complete() (for example, no more buffers), it will not
unlink the new session from the sessions list. This will cause
memory corruptions if the memory area from the session is reused
for anything else, and may also cause bogus output on "show sess"
on the CLI.
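A hedged sketch of the failure path, with minimal list types standing in
for haproxy's LIST_DEL :

    #include <stdlib.h>

    struct list { struct list *n, *p; };
    struct session_like { struct list by_sess; };

    static void fail_session(struct session_like *s)
    {
        /* the missing step: unlink before the memory is freed/reused */
        s->by_sess.n->p = s->by_sess.p;
        s->by_sess.p->n = s->by_sess.n;
        free(s);
    }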
This fix must be backported to 1.5.
(cherry picked from commit 3b24641745b32289235d765f441ec60fa7381f99)
Sometimes, either for debugging or for logging we'd like to have a bit
of information about the running process. Here are 3 new fetches for this :
nbproc : integer
Returns an integer value corresponding to the number of processes that were
started (it equals the global "nbproc" setting). This is useful for logging
and debugging purposes.
proc : integer
Returns an integer value corresponding to the position of the process calling
the function, between 1 and global.nbproc. This is useful for logging and
debugging purposes.
stopping : boolean
Returns TRUE if the process calling the function is currently stopping. This
can be useful for logging, or for relaxing certain checks or helping close
certain connections upon graceful shutdown.
(cherry picked from commit 0f30d26dbf74c21bce9a4d216dd868dc4d84d324)
Currently this is harmless since trash.size is copied from
global.tune.bufsize, but this may soon change when buffers become
more dynamic.
At least for consistency it should be backported to 1.5.
(cherry picked from commit 42fb809cf455aac59dbe35ebd59262fb52e58bb5)
A memory optimization can use the same pattern expression for many
identical pattern lists (same parse method, index method and index_smp
method).
The pattern expression is returned by "pattern_new_expr", but this
function doesn't indicate whether the returned expression is already in use.
So the caller reloads the list of patterns on top of the already existing
ones. This behaviour is not a problem with tree-indexed patterns, but it
grows list-indexed patterns.
This fix adds a "reuse" flag to the return of "pattern_new_expr".
If the flag is set, the patterns are assumed to be already loaded.
This fix must be backported into 1.5.
(cherry picked from commit 315ec4217f912f6cc8fcf98624d852f9cd8399f9)
When enabling stats, response analysers were set on the request
analyser list, which 1) has no effect, and 2) means we don't have
the response analysers properly set.
In practice these response analysers are set when the connection
to the server or applet is established so we don't need/must not
set them here.
Fortunately this bug had no impact since the flags are distinct,
but it definitely is confusing.
It should be backported to 1.5.
(cherry picked from commit 5506e3f8b69927c1a381a268d5b0ca9907b363f5)
Previously, if hdr_v2->len was less than the length of the protocol
specific address information, we could have read past the end of the
buffer and initialized the sockaddr structure with junk.
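A hedged sketch of the sanity check, with a simplified header layout; 12
is the size of the IPv4 address block in the PPv2 spec :

    #include <stdint.h>
    #include <arpa/inet.h>

    struct pp2_hdr_sketch {
        uint8_t  sig[12];
        uint8_t  ver_cmd;
        uint8_t  fam;
        uint16_t len;        /* network order: bytes following the header */
    };

    #define PP2_LEN_INET 12  /* 2 x IPv4 address + 2 x port */

    static int pp2_addr_len_ok(const struct pp2_hdr_sketch *hdr)
    {
        /* refuse to parse address bytes the sender never provided */
        return ntohs(hdr->len) >= PP2_LEN_INET;
    }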
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
[WT: this is only tagged medium since proxy protocol is only used from
trusted sources]
This must be backported to 1.5.
(cherry picked from commit efd3aa93412648cf923bf3d2e171c0b84e9d7a69)
Denys Fedoryshchenko reported and diagnosed a nasty bug caused by TCP
captures, introduced in late 1.5-dev by commit 18bf01e ("MEDIUM: tcp:
add a new tcp-request capture directive"). The problem is that we're
using the array of capture pointers initially designed for HTTP usage
only, and that this array was only reset when starting to process an
HTTP request. In a tcp-only frontend, the pointers are not reset, and
if the capture pool is shared, we can very well point to whatever other
memory location, resulting in random crashes when tcp-request content
captures are processed.
The fix simply consists in initializing these pointers when the pools
are prepared.
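A hedged sketch of the fix, using plain allocation where haproxy takes the
entries from a pool :

    #include <stdlib.h>
    #include <string.h>

    /* Zero the capture slots at allocation time so a tcp-only frontend
     * never inherits stale pointers from a shared pool. */
    static char **alloc_captures(size_t nb_caps)
    {
        char **caps = malloc(nb_caps * sizeof(*caps));

        if (caps)
            memset(caps, 0, nb_caps * sizeof(*caps));
        return caps;
    }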
A workaround for existing versions consists in either disabling TCP
captures in tcp-only frontends, or in forcing the frontends to work in
HTTP mode.
Thanks to Denys for the amount of testing and detailed reports.
This fix must be backported to 1.5.
(cherry picked from commit 9654e57fac86c773091b892f42015ba2ba56be5a)
Tom Limoncelli from Stack Exchange reported a minor bug : the frontend
inherits the LB parameters from the defaults sections. The impact is
that if a "balance" directive uses any L7 parameter in the defaults
sections and the frontend is in TCP mode, a warning is emitted about
their incompatibility. The warning is harmless but a valid, sane config
should never cause any warning to be reported.
This fix should be backported into 1.5 and possibly 1.4.
(cherry picked from commit 743c128580ee29c8f073b4a29771a5ce715f3721)
Lasse Birnbaum Jensen reported an issue when agent checks are used at the same
time as standard healthchecks when SSL is enabled on the server side.
The symptom is that agent checks try to communicate in SSL while they should
handle raw data. This happens because the transport layer is shared between all
kinds of checks.
To fix the issue, the transport layer is now stored in each check type,
allowing SSL healthchecks to be used when required, while an agent check
always uses the raw_sock implementation.
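A hedged structural sketch, with illustrative names :

    struct xprt_ops;                      /* transport operations, opaque here */
    extern struct xprt_ops raw_sock_ops;  /* raw TCP */
    extern struct xprt_ops ssl_sock_ops;  /* SSL */

    struct check_sketch {
        struct xprt_ops *xprt;  /* per-check transport: the agent keeps
                                 * &raw_sock_ops even when the server's
                                 * health check uses &ssl_sock_ops */
    };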
The fix must be backported to 1.5.
(cherry picked from commit 9ce1311ebc834e20addc7a8392c0fc4e4ad687b7)
When memory becomes scarce and openssl refuses to allocate a new SSL
session, it is worth freeing the pools and trying again instead of
rejecting all incoming SSL connections. This can happen when some
memory usage limits have been assigned to the haproxy process using
-m or with ulimit -m/-v.
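A hedged sketch of the retry; SSL_new() is OpenSSL's, and the reclaim
callback stands in for haproxy's pool flush :

    #include <openssl/ssl.h>

    static SSL *ssl_new_retry(SSL_CTX *ctx, void (*reclaim)(void))
    {
        SSL *ssl = SSL_new(ctx);

        if (!ssl) {
            reclaim();           /* free spare pool memory */
            ssl = SSL_new(ctx);  /* second and last chance */
        }
        return ssl;
    }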
This is mostly an enhancement of previous fix and is worth backporting
to 1.5.
(cherry picked from commit fba03cdc5ac6e3ca318b34915596cbc0a0dacc55)
Some errors from the SSL context init functions were not handled and
could cause a segfault due to an incomplete SSL context
initialization.
This fix must be backported to 1.5.
(cherry picked from commit 5547615cdac377797ae351a2e024376dbf6d6963)
Released version 1.5.8 with the following main changes :
- BUG/MAJOR: buffer: check the space left is enough or not when input data in a buffer is wrapped
- BUG/BUILD: revert accidental change in the makefile from latest SSL fix