Those 3 parts are the buffer side, the remote side and the communication
functions. This change has no functional effect but is needed to proceed
further.
I/O handlers are still delicate to manipulate. They have no type, they're
just raw functions which have no knowledge of themselves. Let's have them
declared as applets once and for all. That way we can have multiple applets
share the same handler functions and store their names there. When we later
need to add more parameters (e.g. usage stats), we'll be able to do so in
the applets themselves.
The CLI functions have been prefixed with "cli" instead of "stats", as that
is clearly what is going on there.
The applet descriptor in the stream interface should get all the applet
specific data (st0, ...) but this will be done in the next patch so that
we don't pollute this one too much.
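As a rough sketch of the idea (the type and field names are illustrative,
not necessarily the exact ones in the tree), an applet simply bundles the
handler with its name:

    struct stream_interface;                      /* forward declaration */

    /* an applet bundles the I/O handler with a name, so that several
     * stream interfaces can share the same handler and per-applet data
     * (e.g. usage stats) can be attached later.
     */
    struct si_applet {
        const char *name;                         /* applet name, for reporting */
        void (*fct)(struct stream_interface *);   /* the I/O handler itself */
    };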
Both Hank A. Paulson and Rob at pixsense reported a crash when
loading ACLs from a pattern file which contains empty lines.
From the tests, it appears that only files that contain nothing
but empty lines are causing that (in the past they would have had
their line feeds loaded as patterns).
The crash happens in the free_pattern() call, which does not like being
called with a NULL pattern. Let's make it accept NULL so that it behaves
more like the standard free(), which ignores NULL pointers.
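A minimal sketch of the intended behaviour (not the exact function body):

    #include <stdlib.h>

    struct pattern;                    /* opaque here; defined elsewhere */

    /* sketch: tolerate NULL, just like free() does */
    void free_pattern(struct pattern *pat)
    {
        if (!pat)
            return;
        /* ... release the pattern's internal data ... */
        free(pat);
    }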
Similar to the stats socket bug, we must check that the proxy is not disabled
before trying to enable/disable a server.
Even if a disabled proxy is not displayed, someone can inject a faulty proxy
name in the POST parameters. So, we must ensure that no disabled proxy can be
used.
As reported by Bryan Talbot, enabling and disabling a server in a disabled
proxy causes a segfault.
Changing the weight can also cause a similar segfault.
Bryan Talbot reported that POST requests with a query string were not
correctly processed if the hash parameter was the first one, because
the delimiter that was looked for to trigger the parsing was '&' instead
of '?'.
Also, while checking the code, it became apparent that it was enough for
a query string to be present in the request for POST parameters to be
ignored, even if the url_param was in the body and not in the URL.
The code has thus been fixed as follows (see the sketch below):
1) look for the URL parameter. If found, return it.
2) if no URL parameter was found and the method is POST, then look it up
in the body
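In rough pseudo-code, the lookup order now looks like this (the helper
names below are made up for illustration, they are not the real functions):

    /* illustrative only: try the query string first, then the POST body */
    srv = find_param_in_query_string(s, txn);
    if (!srv && txn->meth == HTTP_METH_POST)
        srv = find_param_in_body(s, txn);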
The code now seems to pass all request combinations.
This patch must be backported to 1.4 since 1.4 is equally broken right now.
Till now, the forwarding code was making use of the hdr_content_len member
to hold the size of the last chunk parsed. As such, it was reset after being
scheduled for forwarding. The issue is that this entry was reset before the
data could be viewed by backend.c in order to parse a POST body, so the
"balance url_param check_post" did not work anymore.
In order to fix this, we need two things :
- the chunk size (reset upon every forward)
- the total body size (not reset)
hdr_content_len was thus replaced by the former (hence the size of the patch),
as it makes more sense to store it that way than the other way around.
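A minimal sketch of the resulting split (field names are illustrative):

    struct http_msg {
        /* ... */
        unsigned long long chunk_len;  /* size of the current chunk, reset once scheduled for forwarding */
        unsigned long long body_len;   /* total body size seen so far, never reset */
        /* ... */
    };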
This patch should be backported to 1.4 with care, considering that it affects
the forwarding code.
It seems that if a response message is chunked, the chunk size wraps
at the end of the buffer and the CRLF sequence is incomplete, then we
can forward a wrong chunk size due to incorrect handling of the wrapped
size. It seems extremely unlikely to occur on real traffic (no reason to
have half of the CRLF after a chunk) but nothing prevents it from being
possible.
This fix must be backported to 1.4.
As reported by the Loadbalancer.org team, it was not possible to bind
more than 1024 ports. This is because the process' limits were set after
trying to bind the sockets, which defeats their purpose.
This fix must be backported to 1.4 and 1.3.
We used to count only one socket instead of one per listener. This makes
the socket count wrong and prevents the proper number of sockets to bind
from being computed automatically.
This fix must be backported to 1.4 and 1.3.
req_acl was used instead of req_acl_final. By luck, both happen to be the
same at this point, but this is not guaranteed to remain true in the
future.
This fix should be backported to 1.4.
Some browsers send POST requests in several packets, which was not supported
by the "stats admin" function.
This patch allows waiting for more data when the request has not been fully
received (we are still limited to a certain size, defined by the buffer size
minus its reserved space).
It also adds support for the "Expect: 100-Continue" header.
Stefan Behte reported a strange case where depending on the position of
the Connection header in the header list, some headers added after it
were or were not usable in "balance hdr()". The reason is that when the
last header is removed, the list's tail was not updated, so any header
added after that one was not visible from the list.
This fix must be backported to 1.4 and possibly 1.3.
While working further on the changes to allow dynamic adding/removing of
backend servers, we noticed a potential problem: the path given for the
'stats socket' global option may get truncated when it is copied into the
sockaddr_un.sun_path field.
The attached patch checks the length and reports an error if truncation
would happen.
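A rough sketch of such a check, assuming the path ends up in a sockaddr_un
(the helper below is hypothetical):

    #include <string.h>
    #include <sys/un.h>

    /* hypothetical helper: refuse a 'stats socket' path that would not fit */
    static int set_unix_path(struct sockaddr_un *addr, const char *path)
    {
        if (strlen(path) >= sizeof(addr->sun_path))
            return -1;                 /* would be truncated: report an error */
        strcpy(addr->sun_path, path);
        return 0;
    }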
This issue was noticed by Joerg Sonnenberger <joerg@NetBSD.org>.
src/frontend.c: In function 'frontend_accept':
src/frontend.c:110: warning: pointer targets in passing argument 5 of 'getsockopt' differ in signedness
The argument should be socklen_t and not int.
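In other words, something along these lines (the socket option shown is
only an example):

    #include <sys/socket.h>

    /* illustrative: getsockopt() expects a socklen_t *, not an int * */
    static int get_so_error(int fd)
    {
        int optval = 0;
        socklen_t optlen = sizeof(optval);   /* previously an int, hence the warning */

        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &optval, &optlen) < 0)
            return -1;
        return optval;
    }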
I have written a small patch to enable a correct PostgreSQL health check.
It works similarly to mysql-check, with the very same parameters.
E.g.:
listen pgsql 127.0.0.1:5432
mode tcp
option pgsql-check user pgsql
server masterdb pgsql.server.com:5432 check inter 10000
It's better to avoid sticking on empty parameter values, as this almost
always indicates a missing parameter. Otherwise it's easy to enter a
situation where all new visitors stick to the same server.
Revert commits 035da6d1b0c436b85add48bc22120aa814c9cab9 and
f18b5f21bafef909901b7b5cf95625a63e609c75.
These fixes were wrong. They worked but they were fixing the symptom
instead of the root cause of the problem. The real issue was in the
ebtree lookup code and it has been fixed now so these patches are not
needed anymore. It's better not to copy memory blocks when we don't
need to, so let's revert them.
(from ebtree 6.0.5)
Both of them are very short and rely on another non-inlined lookup function,
so it's pointless to have them as pure functions; it wastes space.
(cherry picked from commit 1e68d6fef815f759304d4cc0e65f957689e19a7a)
(from ebtree 6.0.5)
The last bugfix introduced a de-optimization in the lookup function because
it artificially extended the scope of some local variables, which resulted in
higher stack usage and more moves between the stack and registers.
We can reduce that by moving the return code out of the loop, because gcc
then notices that it never needs both "troot" and "node" at the same time and
can use the same register for both. Doing so has reduced the code size by
39 bytes for the lookup function alone, and has noticeably reduced the
instruction dependencies caused by data moves.
(cherry picked from commit 59be3cdb96296b65a57aff30cc203269f9a94ebe)
It should be backported to 1.4 if previous ebtree fix is backported.
(from ebtree 6.0.5)
ebmb_lookup() is used by ebst_lookup_len() to lookup a string starting
with a known substring. Since the substring does not necessarily end
with a zero, we must absolutely ensure that the comparison stops at
<len> bytes, otherwise we can end up comparing crap and most often
returning the wrong node in case of multiple matches.
ebim_lookup() was fixed too by resyncing it with ebmb_lookup().
(cherry picked from commit 98eba315aa2c3285181375d312bcb770f058fd2b)
This should be backported to 1.4 though it's not critical there.
Commit 035da6d1b0c436b85add48bc22120aa814c9cab9 was incorrect as it
could modify a live buffer. We must first ensure that we're on the
private buffer or perform a copy before modifying the data.
Gabriel Sosa reported that haproxy unexpectedly reports an error
when a pattern file loaded by an ACL contains an empty line. The
test was present but ineffective, as it did not consider the '\n'
as the end of the line. This fix relies on the line length instead.
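A sketch of the kind of check involved (illustrative, not the exact code):

    #include <stdio.h>
    #include <string.h>

    /* sketch: load patterns, skipping lines that are empty once '\n' is stripped */
    static void load_patterns(FILE *file)
    {
        char line[1024];

        while (fgets(line, sizeof(line), file)) {
            size_t len = strlen(line);

            if (len && line[len - 1] == '\n')
                line[--len] = '\0';
            if (!len)
                continue;              /* empty line: nothing to load as a pattern */
            /* ... add <line> as a pattern ... */
        }
    }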
It should be backported to 1.4.
If a key to be looked up is extracted from data without being padded
and if it matches the beginning of another stored key, it is not
found in subsequent lookups because it does not end with a zero.
This bug was discovered and diagnosed by David Cournapeau.
One of the requirements we have is to run multiple instances of haproxy on a
single host; this is so that we can split the responsibilities (and change
permissions) between product teams. An issue we ran up against is how we
would distinguish between the logs generated by each instance. The solution
we came up with (please let me know if there is a better way) is to override
the application tag written to syslog. We can then configure syslog to write
these to different files.
I have attached a patch adding a global option 'log-tag' to override the
default syslog tag 'haproxy' (actually defaults to argv[0]).
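For example (values are illustrative):

    global
        log 127.0.0.1 local0
        log-tag haproxy-team-a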
By passing a negative value to the "mss" argument of "bind" lines, it
becomes possible to subtract this value from the MSS advertised by the
client, which results in segments smaller than advertised. The effect
is useful with some TCP stacks which ACK less often when segments are
not full, because they only ACK every other full segment as suggested
by RFC1122.
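For example, to advertise an MSS 40 bytes smaller than the one proposed by
the client (values are illustrative):

    frontend www
        bind 0.0.0.0:80 mss -40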
NOTE: currently this has no effect on Linux kernel 2.6, a kernel patch
is still required to change the MSS of established connections.
Haproxy includes the IP of the machine rather than its hostname in the
syslog headers it sends. Unfortunately this means that for each log line
rsyslog does a reverse DNS lookup on the client IP, and in the case of
non-routable IPs one gets the public hostname, not the internal one.
While this is valid according to RFC3164, as one might imagine it is
troublesome if you have some machines with public IPs, internal IPs, no
reverse DNS entries, etc., and you want a standardized hostname-based log
directory structure. The RFC says the preferred value is the hostname.
This patch adds a global "log-send-hostname" statement which accepts an
optional string to force the host name. If unset, the local host name
is used.
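For example (values are illustrative):

    global
        log 192.168.0.1 local0
        log-send-hostname myhost1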
Since haproxy 1.4.9, combining option httpclose and option
http-pretend-keepalive can leave connections open until the backend
keep-alive timeout is reached, resulting in poor performance.
The same can occur when the proxy is in tunnel mode.
This patch ensures that the server-side connection is closed after the
response, and that http-pretend-keepalive is ignored in tunnel mode.
When a connection error is encountered on a server and the server's
connection pool is full, pending connections are not woken up because
the current connection is still accounted for on the server, so it
still appears full. This becomes visible on a server which has
"maxconn 1" because the pending connections will only be able to
expire in the queue.
Now we take care of releasing our current connection before trying to
offer it to another pending request, so that the server can accept the
next connection.
This patch should be backported to 1.4.
When a client connection aborts while the server-side connection is in
turn-around after a failed connection attempt, the turn-around timeout
is reset in shutw() but the state is not changed. The session then
remains stuck in this state forever. Change the QUE and TAR states to
DIS just as we do for CER to fix this.
This patch should be backported to 1.4.
We've had several issues related to data transfers. First, if a
client aborted an upload before the server started to respond, it
would get a 502 followed by a 400. The same was true (the other way
around) if the server suddenly aborted while the client was
uploading the data.
The flags reported in the logs were misleading. Request errors could
be reported while the transfer was stopped during the data phase. The
status codes could also be overwritten by a 400 even though the start
of the response was transferred to the client.
The stats were also wrong in case of data aborts. The server or the
client could sometimes be miscredited for being the author of the
abort depending on where the abort was detected. Some client aborts
could also be accounted as request errors and some server aborts as
response errors.
Now it seems like all such issues are fixed. Since we don't have a
specific state for data flowing from the client to the server
before the server responds, we're still counting the client aborted
transfers as "CH", and they become "CD" when the server starts to
respond. Ideally a "P" state would be desired.
This patch should be backported to 1.4.
HTTP pipelining currently needs to monitor the response buffer to wait
for some free space to be able to send a response. It was not possible
for the HTTP analyser to be called based on response buffer activity.
Now we introduce a new buffer flag BF_WAKE_ONCE which is set when the
HTTP request analyser is set on the response buffer and some activity
is detected. This is not clean at all but one of the only ways to fix
the issue before we make it possible to register events for analysers.
Also it appeared that one realign condition did not cover all cases.
When using haproxy in multi-process mode (nbproc > 1), some features may not
be fully compatible or may not work at all. haproxy will now display a warning
on startup for:
- appsession
- sticking rules
- stats / stats admin
- stats socket
- peers (fatal error in that case)
While documenting the "ignore-persist" keyword, I documented an invalid
"option ignore-persist" and forgot to remove it. It's time to fix it.