The release handler used to be called twice for some time, and only by
pure luck did we never end up double-freeing the data there. Add a NULL
assignment to ensure this can never happen should a future change permit
this situation again.
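As an illustration only (the structure and function names below are hypothetical, not the ones touched by this patch), clearing the pointer right after freeing it is what makes a possible second call harmless:

	struct release_ctx {
		char *data;            /* buffer owned by this context */
	};

	/* hypothetical release handler: a second invocation becomes a no-op
	 * because free(NULL) is defined to do nothing */
	static void release_handler(struct release_ctx *ctx)
	{
		free(ctx->data);
		ctx->data = NULL;
	}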
This commit adds support for dumping all resolver stats. Specifically,
if the command 'show stats resolvers' is issued without a resolver section
id, we dump all known resolver sections. If none are configured, a
message indicating that is displayed.
When using the external-check command option, HAProxy was failing to
start with a fatal error "'external-check' cannot handle unexpected
argument". The code was looking for an incorrect argument. This patch
also corrects an Alert message text, as spotted by PiBa-NL.
When the first parameter to getaddrinfo() is not NULL (and it is never NULL
in str2ip()), the AI_PASSIVE value for ai_flags is ignored on Linux. On
FreeBSD, when AI_PASSIVE is specified and the hostname parameter is not NULL,
getaddrinfo() ignores the local address selection policy and always returns
an AAAA record. Pass zero ai_flags to behave correctly on FreeBSD; this
change should be a no-op on Linux.
This fix should be backported to 1.5 as well, after some observation
period.
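A minimal sketch of the corrected lookup (function and variable names are illustrative, not the actual str2ip() code):

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netdb.h>
	#include <string.h>

	/* resolve <host> with no AI_PASSIVE: on FreeBSD this lets the local
	 * address selection policy pick between A and AAAA as expected */
	static int resolve_host(const char *host, struct addrinfo **res)
	{
		struct addrinfo hints;

		memset(&hints, 0, sizeof(hints));
		hints.ai_family   = AF_UNSPEC;
		hints.ai_socktype = SOCK_STREAM;
		hints.ai_flags    = 0;   /* was AI_PASSIVE; zero behaves correctly on FreeBSD */

		return getaddrinfo(host, NULL, &hints, res);
	}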
Michael Ezzell reported a bug causing haproxy to segfault during startup
when trying to send syslog message from Lua. The function __send_log() can
be called with *p that is NULL and/or when the configuration is not fully
parsed, as is the case with Lua.
This patch fixes this problem by using individual vectors instead of the
pre-generated strings log_htp and log_htp_rfc5424.
Also, this patch fixes a problem causing haproxy to write the wrong pid in
the logs -- the log_htp(_rfc5424) strings were generated at haproxy
startup, but the "pid" value changes after haproxy is started in
daemon/systemd mode.
Only the main execution function can set the run flag, because it is
the last function called before execution starts.
This patch removes the flag assignment done by another function. The flag
will be used by the new Lua timeout counter.
Commit e11cfcd ("MINOR: config: new backend directives:
load-server-state-from-file and server-state-file-name") introduced a bug
which can cause haproxy to crash upon startup by passing user-controlled
data as a format string when emitting a warning. Fix the way the warning
message is built to avoid this.
No backport is needed, this was introduced in 1.6-dev6 only.
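The shape of such a fix, sketched here with a hypothetical user-supplied string (the actual patched call site differs):

	/* unsafe: a user-controlled string is interpreted as a format string */
	/* Warning(user_msg); */

	/* safe: the format string is a constant, user data is only an argument */
	Warning("%s\n", user_msg);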
Commit e11cfcd ("MINOR: config: new backend directives:
load-server-state-from-file and server-state-file-name") caused these
warnings when building with Clang :
src/server.c:1972:21: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
(srv_uweight < 0) || (srv_uweight > SRV_UWGHT_MAX))
~~~~~~~~~~~ ^ ~
src/server.c:1980:21: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
(srv_iweight < 0) || (srv_iweight > SRV_UWGHT_MAX))
~~~~~~~~~~~ ^ ~
Indeed, srv_iweight and srv_uweight are unsigned. Just drop the offending test.
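The change therefore boils down to keeping only the meaningful half of each test, e.g. (sketch, error handling elided):

	/* before: the first comparison is always false, srv_uweight is unsigned */
	if ((srv_uweight < 0) || (srv_uweight > SRV_UWGHT_MAX))
		/* reject the weight */;

	/* after: only the upper-bound check remains */
	if (srv_uweight > SRV_UWGHT_MAX)
		/* reject the weight */;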
The conn_sock_drain() call is only there to force the system to ACK
pending data in case of TCP_QUICKACK so that the client doesn't retransmit,
otherwise it leads to a real RST making the feature useless. There's no
point in draining the connection when quick ack cannot be disabled, so
let's move the call inside the ifdef part.
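In other words, the drain only makes sense where TCP_QUICKACK exists, e.g. (simplified sketch, the surrounding code is elided):

	#ifdef TCP_QUICKACK
		/* force the system to ACK pending data so the client does not
		 * retransmit; otherwise the later close turns into a real RST */
		conn_sock_drain(conn);
		/* ... quick-ack specific handling ... */
	#endif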
The silent-drop action is supposed to close with a TCP reset that is
either not sent at all or does not travel far. But since it's on the client-facing
side, the socket's lingering is enabled by default and the RST only
occurs if some pending unread data remain in the queue when closing.
This causes some clean shutdowns to occur with retransmits, which is
not good at all. Force linger_risk on the socket to flush all data
and destroy the socket.
No backport is needed, this was introduced in 1.6-dev6.
req.ssl_st_ext : integer
Returns 0 if the client didn't send a SessionTicket TLS Extension (RFC5077)
Returns 1 if the client sent a SessionTicket TLS Extension
Returns 2 if the client also sent a non-zero length TLS SessionTicket
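A hypothetical use in a TCP frontend inspecting the ClientHello (the backend name is illustrative):

  tcp-request inspect-delay 5s
  tcp-request content accept if { req.ssl_hello_type 1 }
  use_backend bk_ticket_resume if { req.ssl_st_ext 2 }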
This stops the evaluation of the rules and makes the client-facing
connection suddenly disappear using a system-dependent way that tries
to prevent the client from being notified. The effect is then that the
client still sees an established connection while there's none on
HAProxy. The purpose is to achieve a comparable effect to "tarpit"
except that it doesn't use any local resource at all on the machine
running HAProxy. It can resist much higher loads than "tarpit", and
slow down stronger attackers. It is important to understand the impact
of using this mechanism. All stateful equipment placed between the
client and HAProxy (firewalls, proxies, load balancers) will also keep
the established connection for a long time and may suffer from this
action. On modern Linux systems running with enough privileges, the
TCP_REPAIR socket option is used to block the emission of a TCP
reset. On other systems, the socket's TTL is reduced to 1 so that the
TCP reset doesn't pass the first router, though it's still delivered to
local networks.
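Example of a hypothetical use (the file path is only illustrative):

  tcp-request connection silent-drop if !{ src -f /etc/haproxy/whitelist.lst }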
tcp-request connection had an inverted condition on action_ptr, resulting
in no registered actions being usable since commit 4214873 ("MEDIUM: actions:
remove ACTION_STOP") merged in 1.6-dev5. Very few new actions were impacted.
No backport is needed.
When peers are stopped due to not being running on the appropriate
process, we want to completely release them and unregister their signals
and task in order to ensure there's no way they may be called in the
future.
Note: ideally we should have a list of all tables attached to a peers
section being disabled in order to unregister them and void their
sync_task. It doesn't appear to be *that* easy for now.
When performing a soft stop, we used to wake up every proxy's task and
each of their table's task. The problem is that since we're able to stop
proxies and peers not bound to a specific process, we may end up calling
random junk by doing so if the proxy we're waking up is already stopped.
This causes a segfault to appear during soft reloads for old processes
not bound to a peers section if such a section exists in other processes.
Let's only consider proxies that are not stopped when doing this.
This fix must be backported to 1.5 which also has the same issue.
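A sketch of the resulting guard (simplified; the real loop also handles the tables' tasks):

	for (p = proxy; p; p = p->next) {
		if (p->state == PR_STSTOPPED)
			continue;               /* already stopped: do not wake its task */
		task_wakeup(p->task, TASK_WOKEN_MSG);
	}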
Since commit f83d3fe ("MEDIUM: init: stop any peers section not bound
to the correct process"), it is possible to stop unused peers on certain
processes. The problem is that the pause/resume/stop functions are not
aware of this and will pass a NULL proxy pointer to the respective
functions, resulting in segfaults in unbound processes during soft
restarts.
Properly check that the peers' frontend is still valid before calling
them.
This bug also affects 1.5 so the fix must be backported. Note that this
fix is not enough to completely get rid of the segfault, the next one
is needed as well.
Introduction of a new keyword for the client's cookie name
via a new config entry.
This splits the work between the original converter and the new fetch type.
The latter iterates through the whole list of HTTP headers but first
makes sure the request data are ready to be processed and sets the input
to the string type properly.
The API is then able to filter by itself which headers are genuinely useful,
with special care taken of the client's cookie.
[WT: the immediate impact for the end user is that configs making use of
"da-csv" won't load anymore and will need to be modified to use either
"da-csv-conv" or "da-csv-fetch"]
This patch adds a new RFC5424-specific log-format for the structured-data
that is automatically sent by __send_log() when the sender is in RFC5424
mode.
A new statement "log-format-sd" should be used in order to set log-format
for the structured-data part in RFC5424 formatted syslog messages.
Example:
log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
The function __send_log() iterates over senders and passes the header as
the first vector to sendmsg(), thus it can send a logger-specific header
in each message.
A new logger argument "format rfc5424" should be used in order to enable
the RFC5424 header format. For example:
log 10.2.3.4:1234 len 2048 format rfc5424 local2 info
At the moment we have to call snprintf() for every log line just to
rebuild a constant. Thanks to sendmsg(), we send the message in 3 parts:
time-based header, proxy-specific hostname+log-tag+pid, session-specific
message.
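A simplified sketch of the idea (the exact buffers differ in the real code):

	#include <sys/socket.h>
	#include <sys/uio.h>
	#include <string.h>

	/* send one syslog line as three vectors without rebuilding a single string */
	static ssize_t send_log_parts(int fd, const char *hdr, const char *tag,
	                              const char *msg)
	{
		struct iovec  iov[3];
		struct msghdr mh;

		iov[0].iov_base = (void *)hdr; iov[0].iov_len = strlen(hdr); /* time-based header */
		iov[1].iov_base = (void *)tag; iov[1].iov_len = strlen(tag); /* hostname+log-tag+pid */
		iov[2].iov_base = (void *)msg; iov[2].iov_len = strlen(msg); /* session-specific message */

		memset(&mh, 0, sizeof(mh));
		mh.msg_iov    = iov;
		mh.msg_iovlen = 3;

		return sendmsg(fd, &mh, MSG_DONTWAIT | MSG_NOSIGNAL);
	}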
The static_table_key, get_http_auth_buff and swap_buffer static variables
are now freed during deinit, and the two newly introduced functions are
called as well. In addition, the 'trash' string buffer is cleared.
A new function is introduced, meant to be called during the general deinit
phase. During configuration parsing, the section entries are all allocated;
this new function frees them.
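A rough sketch of what such a cleanup looks like (the list and structure names here are hypothetical, not the actual ones):

	struct cfg_section_entry {
		struct cfg_section_entry *next;
		char *name;
	};

	static struct cfg_section_entry *sections;  /* filled during configuration parsing */

	/* called from the general deinit phase: release every parsed section entry */
	void deinit_sections(void)
	{
		struct cfg_section_entry *s, *next;

		for (s = sections; s; s = next) {
			next = s->next;
			free(s->name);
			free(s);
		}
		sections = NULL;
	}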
The tune.maxrewrite parameter used to be pre-initialized to half of
the buffer size since the very early days when buffers were very small.
It has grown to absurdly large values over the years to reach 8kB for a
16kB buffer. This prevents large requests from being accepted, which is
the opposite of the initial goal.
Many users fix it to 1024 which is already quite large for header
addition.
So let's change the default setting policy :
- pre-initialize it to 1024
- let the user tweak it
- in any case, limit it to tune.bufsize / 2
This results in 15kB usable to buffer HTTP messages instead of 8kB, and
doesn't affect existing configurations which already force it.
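For example (values illustrative), an explicit setting in the global section keeps working exactly as before:

  global
    tune.bufsize    16384
    tune.maxrewrite 1024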
This new target can be called from the frontend or the backend. It
is evaluated just before the backend choice and just before the server
choice. So, the input stream or HTTP request can be forwarded to a
server or to an internal service.
This flag is used by custom actions to know that they're called for the
first time. The only case where it's not set is when they're resuming
from a yield. It will be needed to let them know when they have to
allocate some resources.
The garbage collector is a little bit heavy to run, and it was added
only for cosockets. This patch prevents useless executions when no
cosockets are used.
When the channel is down, the applet is woken up. Before this patch,
the applet closed the stream and the unread data were discarded.
After this patch, the stream is not closed unless the buffers are
empty.
It will be closed later by the close function, or by the garbage collector.
The HAProxy Lua socket follows the Lua Socket TCP specification. That
specification is a little bit limited: it does not permit connecting to a
UNIX socket. This patch extends it slightly by accepting an address syntax
that begins with "unix@", "ipv4@", "ipv6@", "fd@" or "abns@".
Actions may yield but must not do it during the final call from a ruleset
because it indicates there will be no more opportunity to complete or
clean up. This is indicated by ACT_FLAG_FINAL in the action's flags,
which must be passed to hlua_resume().
Thanks to this, an action called from a TCP ruleset is properly woken
up and possibly finished when the client disconnects.
In HTTP it's more difficult to know when to pass the flag or not
because all actions are supposed to be final and there's no inspection
delay. Also, the input channel may very well be closed without this
being an error. So we only set the flag when option abortonclose is
set and the input channel is closed, which is the only case where the
user explicitly wants to forward a close down the chain.
This new flag indicates to a custom action that it must not yield because
it will not be called anymore. This addresses an issue introduced by commit
bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp custom actions"),
which made it possible to yield even after the last call and causes Lua
actions not to be stopped when the session closes. Note that the Lua issue
is not fixed yet at this point. Also only TCP rules were handled, for now
HTTP rules continue to let the action yield since we don't know whether or
not it is a final call.
Since commit bc4c1ac ("MEDIUM: http/tcp: permit to resume http and tcp
custom actions"), some actions may yield and be called back when new
information are available. Unfortunately some of them may continue to
yield because they simply don't know that it's the last call from the
rule set. For this reason we'll need to pass a flag to the custom
action to pass such information and possibly other at the same time.
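A sketch of how a custom action is expected to use such a flag (ACT_FLAG_FINAL is the flag discussed above; the helper work_is_done() is hypothetical, and this fragment relies on haproxy's internal action types):

	static enum act_return my_action(struct act_rule *rule, struct proxy *px,
	                                 struct session *sess, struct stream *s,
	                                 int flags)
	{
		if (!work_is_done(s)) {
			if (flags & ACT_FLAG_FINAL)
				return ACT_RET_CONT; /* last call: finish without yielding */
			return ACT_RET_YIELD;        /* wait for more data, be called back */
		}
		return ACT_RET_CONT;
	}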