This patch implements the ability to set the current state of one server
by tracking another one. It:
- adds two variables: *tracknext, *tracked to struct server
- implements findserver(), similar to findproxy()
- adds "track" keyword accepting both "proxy/server" and "server" (assuming current proxy)
- verifies if both checks and tracking is not enabled at the same time
- changes set_server_down() to notify tracking server
- creates set_server_up(), set_server_disabled(), set_server_enabled() by
moving the code from process_chk() and adding notifications
- changes the stats to show the name of the tracked server instead of Chk/Dwn/Dwntime (HTML),
or by adding a new variable (CSV)
Changes from the previous version:
- it is possible to track independently of the declaration order
- an extra-comma bug is fixed
- a new condition checks that there is no disable-on-404 inconsistency
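A minimal configuration sketch of the intended usage (proxy name, server names and addresses are hypothetical):

    backend static
        server srv1 192.168.0.1:80 check inter 2000 rise 2 fall 3
        # srv2 and srv3 run no checks of their own, they simply follow srv1's state
        server srv2 192.168.0.2:80 track static/srv1
        server srv3 192.168.0.3:80 track srv1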
GCC4 is stupid (unbelievable news!).
When some code uses __builtin_expect(x != 0, 1), it really performs
the check of x != 0 then tests that the result is not zero! This is
a double check when only one was expected. Some performance drops
of 10% in the HTTP parser code have been observed due to this bug.
GCC 3.4 is fine though.
A solution consists in expecting that the tested value is 1. In
this case, it emits the correct code, but it still does not seem
optimal. Finally, the best solution is to ignore likely() and to
pray for the compiler to emit correct code. However, we still have
to fix unlikely() to remove the test there too, and to fix all
code which passed pointers to it so that it passes integers instead.
State and offsets within http_msg were incorrectly set to signed int.
Turning them into unsigned slightly improved performance while reducing
code size.
Now when a server has "redir <prefix>" on its config line, any HEAD or GET
request addressing it will lead to a 302 with Location set to "<prefix>"
immediately followed by the relative URI of the incoming request. This makes
it very easy to redirect browsers to remote static servers, as
well as to provide redirection to remote sites when the local one is down.
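For example (prefix and addresses hypothetical):

    backend images
        server img1 192.168.1.1:80 redir http://image1.mydomain.com check

A GET for /img/logo.png reaching img1 would then be answered with a 302 and "Location: http://image1.mydomain.com/img/logo.png".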
The servers now support the "redir" keyword, making it possible to
return a 302 with the specified prefix in front of the request instead
of connecting to them. This is generally useful for multi-site load
balancing but may also serve to achieve very high traffic
rates.
The keyword has only been added to the config parser and to the structures;
it is not used yet.
This patch adds two new variables: fastinter and downinter.
When server state is:
- non-transitionally UP -> inter (no change)
- transitionally UP (going down), unchecked or transitionally DOWN (going up) -> fastinter
- down -> downinter
It makes it possible to set something like:
server sr6 127.0.51.61:80 cookie s6 check inter 10000 downinter 20000 fastinter 500 fall 3 weight 40
In the above example haproxy waits 10000 ms between checks, but as soon as
one check fails, fastinter (500 ms) is used. When the server is down,
downinter (20000 ms) is used, or fastinter (500 ms) once a check passes.
Fastinter is also used when haproxy starts.
New "timeout.check" variable was added, if set haproxy uses it as an additional
read timeout, but only after a connection has been already established. I was
thinking about using "timeout.server" here but most people set this
with an addition reserve but still want checks to kick out laggy servers.
Please also note that in most cases check request is much simpler
and faster to handle than normal requests so this timeout should be smaller.
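A possible configuration sketch, assuming the value is set with a "timeout check" line (values only illustrative):

    backend dynamic
        timeout connect 5s
        # additional read timeout applied once the check connection is established
        timeout check   500ms
        server app1 192.168.1.10:80 check inter 10s fall 3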
I also changed the timeout used for establishing check connections.
Changes from the previous version:
- use tv_isset() to check if the timeout is set,
- use min("timeout connect", "inter") but only if "timeout check" is set
as this min alone may be to short for full (connect + read) check,
- debug code (fprintf) commented/removed
- documentation
Compile tested only (sorry!) as I'm currently traveling but changes
are rather small and trivial.
Due to the way Linux delivers EPOLLIN and EPOLLHUP, a closed connection
received after some server data sometimes results in truncated responses
if the client disconnects before the server starts to respond. The reason
is that the EPOLLHUP flag is processed as an indication of end of
transfer while some data may remain in the system's socket buffers.
This problem could only be triggered with sepoll, although nothing should
prevent it from happening with normal epoll. In fact, the work factoring
performed by sepoll increases the risk that this bug appears.
The fix consists in making FD_POLL_HUP and FD_POLL_ERR sticky, and in
checking them only when FD_POLL_IN is not set, meaning that we have
read all pending data.
That way, the problem is definitely fixed and sepoll still remains about
17% faster than epoll since it can take into account all information
returned by the kernel.
The source address selection for health checks did not consider
the new transparent proxy method. Rely on the same unified function
as the other connect() calls.
This patch also fixes a bug by which the proxy's source address was
ignored if cttproxy was used.
Balabit's TPROXY version 4 which replaces CTTPROXY provides a similar
API to the previous proxy, but relies on IP_FREEBIND instead of
IP_TRANSPARENT. Let's add it.
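On the configuration side, this is a sketch of how full transparent proxying is typically enabled, assuming the existing "source ... usesrc" syntax is reused (addresses hypothetical):

    backend transp
        # bind outgoing connections to the client's own address
        # (requires the TPROXY/CTTPROXY kernel support described above)
        source 0.0.0.0 usesrc clientip
        server web1 192.168.1.20:80 check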
Using some Linux kernel patches, it is possible to redirect non-local
traffic to local sockets when IP forwarding is enabled. In order to
enable this option, we introduce the "transparent" option keyword on
the "bind" command line. It will make the socket reachable by remote
sources even if the destination address does not belong to the machine.
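For example (a hedged sketch, addresses hypothetical):

    listen intercept
        # accept traffic for a destination address which is not local to the machine
        bind 192.168.0.1:80 transparent
        server local1 127.0.0.1:8080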
Several users have complained that when haproxy gets a connection
failure due to an active reject from a server, it immediately
retries, often leading to the same situation being repeated until
the retry counter reaches zero.
Now if a connection error shows up, a turn-around state of 1 second
is applied before retrying. This is performed by faking a connection
timeout in order not to touch much code. However, a cleaner method
would involve an extra state.
This patch slightly extends previously added functionality to also
count retries and redispatches for servers. Now it is possible to know
which server causes redispatches as it is not always the same that takes
most retries.
While working with the code I found that redistribute_pending() does not increment
srv->redispatches and be->redispatches. I don't know how to test it but
I think the fix is correct. If not I can withdraw it.
I also extended logs to show how many retries were done and if redispatching
was necessary ('+'). I'm using an additional session flag SN_REDISP to match
redispatched connections. I had to rearrange all defines in session.h to make
more room for it.
The documentation about logs was also fixed a little (sorry, English only),
as the current version uses a totally different format. BTW: the examples are still
outdated, maybe next time...
Finally, I changed %d -> %u for retries/redispatches as those variables
are declared as unsigned.
In order to offer DoS protection, it may be required to lower the maximum
accepted time to receive a complete HTTP request without affecting the client
timeout. This helps protect against established connections on which
nothing is sent. The client timeout cannot offer a good protection against
this abuse because it is an inactivity timeout, which means that if the
attacker sends one character every now and then, the timeout will not
trigger. With the HTTP request timeout, no matter what speed the client
types, the request will be aborted if it does not complete in time.
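A hedged sketch, assuming the limit is exposed through the new "timeout" syntax as "timeout http-request":

    frontend web
        bind :80
        timeout client       30s
        # abort requests which are not complete within 5 seconds
        timeout http-request 5s
        default_backend app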
This new parameter makes it possible to override the default
number of consecutive incoming connections which can be
accepted on a socket. By default it is not limited in single-process
mode, and limited to 8 in multi-process mode.
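A hedged sketch, assuming the parameter is exposed as a "tune.maxaccept" global setting (the keyword name is an assumption):

    global
        # accept at most 64 consecutive incoming connections per wake-up on each socket
        tune.maxaccept 64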
Add the "backlog" parameter to frontends, to give hints to
the system about the approximate listen backlog desired size.
In order to protect against SYN flood attacks, one solution is
to increase the system's SYN backlog size. Depending on the
system, sometimes it is just tunable via a system parameter,
sometimes it is not adjustable at all, and sometimes the system
relies on hints given by the application at the time of the
listen() syscall. By default, HAProxy passes the frontend's
maxconn value to the listen() syscall. On systems which can
make use of this value, it can sometimes be useful to be able
to specify a different value, hence this backlog parameter.
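For example:

    frontend web
        bind :80
        maxconn 20000
        # hint a listen() backlog of 10000 instead of the frontend's maxconn
        backlog 10000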
It is sometimes required to know some information, such as the
process uptime, when consulting statistics. This patch adds the
"show info" command to query this information on the UNIX
socket.
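A hedged usage sketch, assuming the UNIX stats socket is declared with a "stats socket" line in the global section (the path and the socat invocation are only illustrative):

    global
        stats socket /var/run/haproxy.stat
        # then, for example: echo "show info" | socat unix-connect:/var/run/haproxy.stat stdio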
This patch adds the possibility to invert most of the available options by
introducing the "no" keyword as an additional prefix.
If it is found, the arguments are shifted left and an additional flag (inv)
is set.
It makes it possible to use all options from the current defaults section, except
the selected ones, for example:
-- cut here --
defaults
    contimeout 4200
    clitimeout 50000
    srvtimeout 40000
    option contstats
listen stats 1.2.3.4:80
    no option contstats
-- cut here --
Currently, inversion works only with the "option" keyword.
The patch also moves last_checks calculation at the end of the readcfgfile()
function and changes "PR_O_FORCE_CLO | PR_O_HTTP_CLOSE" into "PR_O_FORCE_CLO"
in cfg_opts so it is possible to invert forceclose without breaking httpclose
(and vice versa) and to invert tcpsplice in one proxy but to keep a proper
last_checks value when tcpsplice is used in another proxy. Now, the code
checks for PR_O_FORCE_CLO everywhere it checks for PR_O_HTTP_CLOSE.
I also decided to deprecate the "redisp" and "redispatch" keywords, as it is IMHO
better to use "option redispatch", which can be inverted.
Some useful documentation was added and, at the same time, I sorted
(alphabetically) all valid options both in the code and in the documentation.
The code in haproxy-1.3.13.1 only supports syslogging to an internet
address. The attached patch:
- Adds support for syslogging to a UNIX domain socket (e.g., /dev/log); see the example below.
If the address field begins with '/' (absolute file path), then
AF_UNIX is used to construct the socket. Otherwise, AF_INET is used.
- Achieves clean single-source build on both Mac OS X and Linux
(sockaddr_in.sin_len and sockaddr_un.sun_len field aren't always present).
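For example, logging to the local syslog daemon through /dev/log:

    global
        log /dev/log local0 notice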
For handling sendto() failures in send_log(), it appears that the existing
code is fine (no need to close/recreate socket) for both UDP and UNIX-domain
syslog server. So I left things alone (did not close/recreate socket).
Closing/recreating socket after each failure would also work, but would lead
to increased amount of unnecessary socket creation/destruction if syslog is
temporarily unavailable for some reason (especially for verbose loggers).
Please consider this patch for inclusion into the upstream haproxy codebase.
One user reported that an indicator was missing in the statistics:
the number of times each server was selected by load balancing. It
is in fact the total number of sessions assigned to a server by the
load balancing algorithm. It should directly reflect the weight for
"fair" algorithms such as round-robin, since it will not account for
persistant connections.
It should help a lot tuning each server's weight depending on the
load it receives.
A new "timeout" keyword replaces old "{con|cli|srv}timeout", and
provides the ability to independantly set the following timeouts :
- client
- tarpit
- queue
- connect
- server
- appsession
Additionally, the "clitimeout", "contimeout" and "srvtimeout" values
are supported but deprecated. No warning is emitted yet when they are
used since the option is very new.
Other timeouts should follow soon now.
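An illustrative defaults section using the new syntax for a few of these timeouts:

    defaults
        timeout client  50s
        timeout connect 5s
        timeout server  50s
        timeout queue   30s
        timeout tarpit  15s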
Now the connect timeout, tarpit timeout and queue timeout are
distinct. In order to retain compatibility with older versions,
if either queue or tarpit is left unset both in the proxy and
in the default proxy, then it is inherited from the connect
timeout as before.
This new function accepts inputs in various default units, from
the microsecond to the day. It detects suffixes after numbers
and performs the appropriate conversions between the user's unit
and the program's unit, considering a unit-less number in the
default unit.
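For example, the following three lines are equivalent, the unit-less value being taken in the default unit (milliseconds for this timeout):

    timeout connect 5000
    timeout connect 5000ms
    timeout connect 5s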
In order to avoid issues in the future, we want to restrict
the set of allowed characters for identifiers. Starting from
now, only A-Z, a-z, 0-9, '-', '_', '.' and ':' will be allowed
for a proxy, a server or an ACL name.
A test file has been added to check the restriction.
Now we can compute the max place depending on the number of servers,
maximum weight and weight scale. The formula has been stored as a
comment so that it's easy to choose between smooth weight ramp up
and high number of servers. The default scale has been set to 16,
which permits 4000 servers with a granularity of 6% in the worst
case (weight=1).
Under certain circumstances, it is very useful to be able to fail some
monitor requests. One specific case is when the number of servers in
the backend falls below a certain level. The new "monitor fail" construct
followed by either "if"/"unless" <condition> makes it possible to specify
ACL-based conditions which will make the monitor return 503 instead of
200. Any number of conditions can be passed. Another use may be to limit
the requests to local networks only.
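A hedged example, assuming an "nbsrv" ACL criterion is available to count the usable servers of a backend (proxy and ACL names hypothetical):

    frontend www
        mode http
        # consider the site dead when fewer than 2 servers remain in backend "dynamic"
        acl site_dead nbsrv(dynamic) lt 2
        monitor-uri  /site_alive
        monitor fail if site_dead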
AIX does not know about MSG_DONTWAIT. Fortunately, nearly all sockets
are already set to O_NONBLOCK, so it's not even required to change the
code. It was only necessary to add this fcntl to the log socket which
lacked it. The MSG_DONTWAIT value has been defined to zero when unset
in order to make the code cleaner and more portable.
Also, on AIX, "hz" is defined, which causes a problem with one function
parameter in time.c. It's enough to rename the parameter there. Last,
fix a missing #include <string.h> in proxy.c.
The new 'slowstart' parameter for a server accepts a value in
milliseconds which indicates after how long a server which has
just come back up will run at full speed. The speed grows
linearly from 0 to 100% during this time. The limitation applies
to two parameters :
- maxconn: the number of connections accepted by the server
will grow from 1 to 100% of the usual dynamic limit defined
by (minconn,maxconn,fullconn).
- weight: when the backend uses a dynamic weighted algorithm,
the weight grows linearly from 1 to 100%. In this case, the
weight is updated at every health-check. For this reason, it
is important that the 'inter' parameter is smaller than the
'slowstart', in order to maximize the number of steps.
The slowstart never applies when haproxy starts, otherwise it
would cause trouble to running servers. It only applies when
a server has been previously seen as failed.
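For example (values only illustrative; note that "inter" is kept much smaller than "slowstart" to get enough ramp-up steps):

    backend dynamic
        balance roundrobin
        server app1 192.168.1.11:80 check inter 2s slowstart 60s maxconn 200 weight 20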
When an HTTP server returns "404 not found", it indicates that at least
part of it is still running. For this reason, it can be convenient for
application administrators to be able to consider code 404 as valid,
but for a server which does not want to participate in load balancing
anymore. This is useful to seamlessly exclude a server from a farm
without acting on the load balancer. For instance, let's consider that
haproxy checks for the "/alive" file. To enable load balancing on a
server, the admin would simply do :
# touch /var/www/alive
And to disable the server, he would simply do :
# rm /var/www/alive
Another immediate gain from doing this is that it is now possible to
send NOTICE messages instead of ALERT messages when a server is first
disabled, then goes down. This provides a graceful shutdown method.
To enable this behaviour, specify "http-check disable-on-404" in the
backend.
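The corresponding configuration could look like this (paths and names hypothetical):

    backend app
        option httpchk GET /alive
        http-check disable-on-404
        server web1 192.168.1.30:80 check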
Hello,
You will find attached an updated release of the previously submitted patch.
It polishes some parts and extends the ACL engine to match the IP and PORT parsed in
the HTTP request (and takes care of comments made by Willy! ;))
Best regards,
Alexandre
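A hedged sketch of the kind of rule this enables, assuming the new criteria are exposed as "url_ip" and "url_port" (both names are an assumption here, not confirmed by the message above):

    frontend web
        bind :80
        # match the destination IP and port parsed from the request URL
        # (criterion names hypothetical)
        acl to_local_net url_ip   192.168.0.0/16
        acl to_alt_port  url_port 8080
        block if to_alt_port !to_local_net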
The number of possible options for a proxy has already reached
32, which is the current limit due to the fact that they are
each represented as a bit in a 32-bit word.
It's possible to move the load balancing algorithms to another place,
which will also save some space for future algorithms.
This round robin algorithm was built on trees, so that we
do not have to recompute any table when changing server weights.
This solution allows on-the-fly weight adjustments with immediate
effect on the load distribution.
There is still a limitation due to 32-bit computations, to about
2000 servers at full scale (weight 255), or more servers with
lower weights. Basically, sum(srv.weight)*4096 must be below 2^31.
Test configurations and an example program used to develop the
tree will be added next.
Many changes have been brought to the weight computations and
variables in order to accommodate the possibility of a server
being up but disabled from load balancing due to a null weight.
Under some circumstances, it will be useful to be able to have
a server's effective weight bigger than the user weight, and this
is particularly true for dynamic weight-based algorithms. In order
to support this, we add a "wdiv" member to the lbprm structure
which will always be used to divide the weights before reporting
them.
Since the introduction of server weights, all load balancing algorithms
relied on a pre-computed map. Incidentally, quite a bunch of map-specific
parameters were used at random places in order to get the number of
servers or their total weight. It was not architecturally acceptable
that optimizations for the map computation had impact on external parts.
For instance, during this cleanup it was found that a backend weight was
seen as 1 when only the first backup server is used, whatever its weight.
This cleanup consists in differentiating between LB-generic parameters,
such as total weights, number of servers, etc... and map-specific ones.
The struct proxy has been enhanced in order to make it easier to later
support other algorithms. The recount_servers() function now also
updates generic values such as total weights so that it's not needed
anymore to call recalc_server_map() when weights are needed. This
made it possible to simplify some code which does not need to know about map
internals anymore.
By default, counters used for statistics calculation are incremented
only when a session finishes. It works quite well when serving small
objects, but with big ones (for example large images or archives) or
with A/V streaming, a graph generated from haproxy counters looks like
a hedgehog.
This patch implements a contstats (continuous statistics) option.
When set, counters are incremented continuously, during the whole session.
Recounting touches a hot path directly, so it is not enabled by default,
as it has a small performance impact (~0.5%).
It is very convenient for SNMP monitoring to have a unique process ID,
proxy ID and server ID. These have been added to the CSV output.
The numbers start at 1. 0 is reserved. For servers, 0 means that the
reported name is not a server name but half a proxy (FRONTEND/BACKEND).
A remaining hidden "-" in the CSV output has been eliminated too.
Proxy listeners were very special and not very easy to manipulate.
A proto_tcp file has been created with all that is required to
manage TCPv4/TCPv6 as raw protocols, and provide generic listeners.
The code of start_proxies() and maintain_proxies() now looks less
like spaghetti. Also, event_accept will need a serious lifting in
order to use more of the information provided by the listener.