Versions of splice() between 2.6.25 and 2.6.27.12 were bogus and would return EAGAIN
on incoming shutdowns. On these versions, we have to call recv() after such a return
in order to find out whether the connection was in fact shut down. Since 2.6.27.13
this is not needed anymore, and defining ASSUME_SPLICE_WORKS disables this logic,
saving one useless recv() call after each splice() returning EAGAIN.
Building with linux2628 automatically enables splice and the flag above since the
kernel is safe. People enabling splice for custom kernels will be able to disable
this logic by hand too.
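A minimal sketch of the workaround (not the exact haproxy code) looks like this :

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <sys/socket.h>

  /* On 2.6.25..2.6.27.12, splice() may wrongly return EAGAIN on an
   * incoming shutdown. Probe the socket with a one-byte MSG_PEEK
   * recv() : a return of 0 means the peer has really closed, so the
   * EAGAIN must be turned into a shutdown indication.
   */
  static int splice_in(int sock_fd, int pipe_wr, size_t len)
  {
          ssize_t ret = splice(sock_fd, NULL, pipe_wr, NULL, len,
                               SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
  #ifndef ASSUME_SPLICE_WORKS
          if (ret == -1 && errno == EAGAIN) {
                  char c;
                  if (recv(sock_fd, &c, 1, MSG_PEEK | MSG_DONTWAIT) == 0)
                          return 0; /* real shutdown hidden behind EAGAIN */
          }
  #endif
          return (int)ret;
  }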
Since the previous commit introduced EPOLLRDHUP, the splicing code is able
to detect an incoming shutdown without waiting for splice() to return 0.
This avoids one useless syscall.
epoll may report pending shutdowns using EPOLLRDHUP. Since this
flag is missing from a number of libcs despite being available
since kernel 2.6.17, let's define it ourselves.
Doing so saves one syscall by allowing us to avoid the read()==0 when
the server closes with the response.
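The fallback definition is tiny and safe since the kernel value has been
stable for a long time; a sketch :

  #include <sys/epoll.h>

  /* EPOLLRDHUP appeared in kernel 2.6.17 but some libcs do not
   * expose it, so provide the (stable) kernel value ourselves. */
  #ifndef EPOLLRDHUP
  #define EPOLLRDHUP 0x2000
  #endif

  /* example : subscribe to read events and pending shutdowns */
  static void want_read(int epfd, int fd)
  {
          struct epoll_event ev = {
                  .events = EPOLLIN | EPOLLRDHUP,
                  .data.fd = fd,
          };
          epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
  }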
SIGPROF is blocked and then restored to default settings, which sometimes
makes profiling harder. Let's neither block it by default nor restore it;
doing so serves no purpose anyway.
As stated in both RFC2616 and the http-bis drafts, Cache-Control:
no-transform must be looked up in the response since we're modifying
the response. However, its presence in the request is irrelevant to
any changes in the response :
7.2.1.6. no-transform
The "no-transform" request directive indicates that an intermediary
(whether or not it implements a cache) MUST NOT change the Content-
Encoding, Content-Range or Content-Type request header fields, nor
the request representation.
7.2.2.9. no-transform
The "no-transform" response directive indicates that an intermediary
(regardless of whether it implements a cache) MUST NOT change the
Content-Encoding, Content-Range or Content-Type response header
fields, nor the response representation.
Note: according to the specs, we're supposed to emit the following
response header :
Warning: 214 transformation applied
However, no other product seems to do it, so the effect on user agents
is unclear.
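As an illustration only (hypothetical helper, not haproxy's code), the
transformation path must look at the response like this :

  #define _GNU_SOURCE
  #include <string.h>

  /* Only the response's Cache-Control value decides whether we may
   * transform (eg: compress) it; the request's copy is irrelevant. */
  static int may_transform_response(const char *rsp_cache_control)
  {
          return !rsp_cache_control ||
                 !strcasestr(rsp_cache_control, "no-transform");
  }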
The doc claims that src_inc_gpc0 may be used alone without an integer
match, but this is false and has always been since its introduction in
1.5-dev1. If the ACL is evaluated, the increment is performed and the value
returned, but it is matched against no value, so the resulting ACL will
never be true and the condition will not be met.
This means that the following config :
acl abuser src -f abusers.lst
acl blacklist src_inc_gpc0
tcp-request connection reject if abuser blacklist
Will never reject the connection and must be fixed this way :
acl abuser src -f abusers.lst
acl blacklist src_inc_gpc0 gt 0
tcp-request connection reject if abuser blacklist
Note that clr_gpc0 is trickier, as it returns the previous value, which
might also be zero. Thus it's suggested to compare it against any value
greater than or equal to zero :
tcp-request connection accept if { src_clr_gpc0 ge 0 }
Some arguments were missing on the sc1/sc2 forms of most ACLs including
gpc0, so this has been fixed too.
Reinout Verkerk from Trilex reported an issue with servers recently
flapping after an haproxy upgrade. Haproxy checks a simple agent
returning an HTTP response. The issue is that if the request packet
is lost but the simple agent responds before reading the HTTP request
and closes, the server will emit a TCP RST once the request finally
reaches it.
The way checks have been ported to use connections makes the error
flag show up as a failure after the success, reporting a stupid case
where the server is said to be down despite a correct response.
In order to fix this, let's ignore the connection's error flag if a
successful check has already been reported. Reinout could verify that
a patched version did not exhibit the problem anymore.
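The decision itself is simple; sketched here with made-up names :

  #include <stdbool.h>

  /* A connection error noticed after the check already succeeded
   * (eg: an RST for the lost request) must not mark the server down. */
  static bool check_reports_failure(bool conn_error, bool already_passed)
  {
          return conn_error && !already_passed;
  }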
Commit 7bb68abb introduced the SI_FL_NOHALF flag in dev10. It is used
to automatically close the write side of a connection whose read side
is closed. But the patch also caused the opposite to happen, which is
that a simple shutw() call would immediately close the connection. This
is not desired because when using option abortonclose, we want to pass
the client's shutdown to the server which will decide what to do with
it. So let's avoid the close when SHUTR is not set.
option abortonclose may cause a valid connection to be aborted just
after the request has been sent. This is because we check for it
during the session establishment sequence before checking for write
activity. So if the abort and the connect complete at the same time,
the abort is still considered. Let's check for an explicit partial
write before aborting.
This fix should be backported to 1.4 too.
When a frontend is rate-limited to 1000 connections per second, the
effective rate measured from the client is 999/s, and connections
experience an average response time of 99.5 ms with a standard
deviation of 2 ms.
The reason for this inaccuracy is that when computing frequency
counters, we use one part of the previous value proportional to the
number of milliseconds remaining in the current second. But even the
last millisecond still uses a part of the past value, which is wrong :
since we have a 1ms resolution, the last millisecond must be dedicated
only to filling the current second.
So we slightly adjust the algorithm to use 999/1000 of the past value
during the first millisecond, and 0/1000 of the past value during the
last millisecond. We also slightly improve the accuracy by computing
the remaining time instead of the current time in tv_update_date(), so
that we don't have to negate the value in each frequency counter.
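A minimal sketch of the corrected computation (not the exact haproxy
code) :

  /* <curr> counts events in the current second, <past> holds the
   * previous second's total, and <ms_left> is the number of
   * milliseconds remaining in the current second (999 during the
   * first millisecond, 0 during the last one). */
  static unsigned int read_freq(unsigned int curr, unsigned int past,
                                unsigned int ms_left)
  {
          return curr + past * ms_left / 1000U;
  }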
Now with the fix, the connection rate measured by both the client and
haproxy is a steady 1000/s, the average response time measured is 99.2ms
and more importantly, the standard deviation has been divided by 3 to
0.6 millisecond.
This fix should also be backported to 1.4 which has the same issue.
Released version 1.5-dev17 with the following main changes :
- MINOR: ssl: Setting global tune.ssl.cachesize value to 0 disables SSL session cache.
- BUG/MEDIUM: stats: fix stats page regression introduced by commit 20b0de5
- BUG/MINOR: stats: last fix was still wrong
- BUG/MINOR: stats: http-request rules still don't cope with stats
- BUG/MINOR: http: http-request add-header emits a corrupted header
- BUG/MEDIUM: stats: disable request analyser when processing POST or HEAD
- BUG/MINOR: log: make log-format, unique-id-format and add-header more independant
- BUILD: log: unused variable svid
- CLEANUP: http: rename the misleading http_check_access_rule
- MINOR: http: move redirect rule processing to its own function
- REORG: config: move the http redirect rule parser to proto_http.c
- MEDIUM: http: add support for "http-request redirect" rules
- MEDIUM: http: add support for "http-request tarpit" rule
The "reqtarpit" rule is not very handy to use. Now that we have more
flexibility with "http-request", let's finally make the tarpit rules
usable there.
There are still semantic differences between apply_filters_to_request()
and http_req_get_intercept_rule() because the former updates the counters
while the latter does not. So we currently have almost similar code branches
for similar conditions, but this should be cleaned up later.
These are exactly the same as the classic redirect rules except
that they can be interleaved with other http-request rules for
more flexibility.
The redirect parser should probably be changed to stop at the condition
so that the caller puts its own condition pointer. At the moment, the
redirect rule and condition are parsed at once by build_redirect_rule()
and the condition is assigned to the http_req_rule.
We now have http_apply_redirect_rule() which does all the redirect-specific
job instead of having this inside http_process_req_common().
Also, one of the benefits gained from unifying this code is that both
keep-alive and close responses now emit the PR-- flags. The fix for the
flags could probably be backported to 1.4 though it's very minor.
The previous function http_perform_redirect() was becoming confusing
so it was renamed http_perform_server_redirect() since it only applies
to server-based redirection.
Several bugs were introduced recently due to a misunderstanding of how
this function works and what it was supposed to do. Since it's supposed
to only return the pointer to a rule which aborts further processing of
the request, let's rename it to avoid further issues.
The function was also slightly cleaned up without any functional change.
It happens that all of them call parse_logformat_line() which sets
proxy->to_log with a number of flags affecting the line format for
all three users. For example, having a unique-id specified disables
the default log-format since fe->to_log is tested when the session
is established.
Similarly, having "option logasap" will cause "+" to be inserted in
unique-id or headers referencing some of the fields depending on
LW_BYTES.
This patch first removes most of the dependency on fe->to_log whenever
possible. The first possible cleanup is to stop checking fe->to_log
for being null, considering that it always contains at least LW_INIT
when any such usage is made of the log-format!
Also, some checks are wrong. s->logs.logwait cannot be nulled by
"logwait &= ~LW_*" since LW_INIT is always there. This results in
getting the wrong log at the end of a request or session when a
unique-id or add-header is set, because logwait is still not null
but the log-format is not checked.
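An illustrative sketch of the broken test and a masked variant (flag
values made up for the example) :

  #define LW_INIT  0x0001
  #define LW_BYTES 0x0002

  /* wrong : never true since LW_INIT is always set */
  static int log_ready_broken(unsigned int logwait)
  {
          return !logwait;
  }

  /* mask out LW_INIT before testing */
  static int log_ready(unsigned int logwait)
  {
          return !(logwait & ~LW_INIT);
  }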
Further cleanups are required. Most LW_* flags should be removed or at
least replaced with what they really mean (eg: depend on client-side
connection, depend on server-side connection, etc...) and this should
only affect logging, not other mechanisms.
This patch fixes the default log-format and tries to limit interferences
between the log formats, but does not pretend to do more for the moment,
since it's the most visible breakage.
After the response headers are sent and the request processing is done,
the buffers are wiped out and the stream interface is closed. We must
then disable the request analysers, otherwise some processing will
happen on a closed stream interface and empty buffers which do not
match, causing all sorts of crashes. This issue was introduced with
recent work on the stats, and was reported by Seri.
David BERARD reported that http-request add-header passes a \0 along
with the header field, which of course is not appropriate. This is
caused by build_logline() which sometimes returns the size with the
trailing zero and sometimes can return an empty string. Let's fix
this function instead of fixing the places where it's used.
The previous commit was still wrong: it broke add-header and set-header
because we must not stop the rule evaluation on these actions.
The http_check_access_rule() function should be redesigned. It was
initially designed for allow/deny rules, but now it also executes other
non-final rules while returning a pointer to the last final rule. That
becomes a bit confusing and will need to be addressed before we
implement redirect and return.
The commit adding http-request add-header/set-header unfortunately
introduced a regression in the handling of the stats page, which is not
matched anymore.
Thanks to Dmitry Sivachenko for reporting this.
Released version 1.5-dev16 with the following main changes :
- BUG/MEDIUM: ssl: Prevent ssl error from affecting other connections.
- BUG/MINOR: ssl: error is not reported if it occurs simultaneously with peer close detection.
- MINOR: ssl: add fetch and acl "ssl_c_used" to check if current SSL session uses a client certificate.
- MINOR: contrib: make the iprange tool grep for addresses
- CLEANUP: polling: gcc doesn't always optimize constants away
- OPTIM: poll: optimize fd management functions for low register count CPUs
- CLEANUP: poll: remove a useless double-check on fdtab[fd].owner
- OPTIM: epoll: use a temp variable for intermediary flag computations
- OPTIM: epoll: current fd does not count as a new one
- BUG/MINOR: poll: the I/O handler was called twice for polled I/Os
- MINOR: http: make resp_ver and status ACLs check for the presence of a response
- BUG/MEDIUM: stream-interface: fix possible stalls during transfers
- BUG/MINOR: stream_interface: don't return when the fd is already set
- BUG/MEDIUM: connection: always update connection flags prior to computing polling
- CLEANUP: buffer: use buffer_empty() instead of buffer_len()==0
- BUG/MAJOR: stream_interface: fix occasional data transfer freezes
- BUG/MEDIUM: stream_interface: fix another case where the reader might not be woken up
- BUG/MINOR: http: don't abort client connection on premature responses
- BUILD: no need to clean up when making git-tar
- MINOR: log: add a tag for amount of bytes uploaded from client to server
- BUG/MEDIUM: log: fix possible segfault during config parsing
- MEDIUM: log: change a few log tokens to make them easier to remember
- BUG/MINOR: log: add_to_logformat_list() used the wrong constants
- MEDIUM: log-format: make the format parser more robust and more extensible
- MINOR: sample: support cast from bool to string
- MINOR: samples: add a function to fetch and convert any sample to a string
- MINOR: log: add lf_text_len
- MEDIUM: log: add the ability to include samples in logs
- REORG: stats: massive code reorg and cleanup
- REORG: stats: move the HTTP header injection to proto_http
- REORG: stats: functions are now HTTP/CLI agnostic
- BUG/MINOR: log: fix regression introduced by commit 8a3f52
- MINOR: chunks: centralize the trash chunk allocation
- MEDIUM: stats: use hover boxes instead of title to report details
- MEDIUM: stats: use multi-line tips to display detailed counters
- MINOR: tools: simplify the use of the int to ascii macros
- MINOR: stats: replace STAT_FMT_CSV with STAT_FMT_HTML
- MINOR: http: prepare to support more http-request actions
- MINOR: log: make parse_logformat_string() take a const char *
- MEDIUM: http: add http-request 'add-header' and 'set-header' to build headers
These two new statements make it possible to pass information extracted from the request
to the server. It's particularly useful for passing SSL information to the
server, but may be used for various other purposes such as combining headers
together to emulate internal variables.
These macros (U2H, U2A, LIM2A, ...) have been used with an explicit
index for the local storage variable, making it difficult to change
log formats and causing a few issues from time to time. Let's have
a single macro with a rotating index so that up to 10 conversions
may be used in a single call.
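The idea, as a minimal sketch (names illustrative) :

  #include <stdio.h>

  /* Each call converts into the next of NB_ITOA_STR static buffers,
   * so up to NB_ITOA_STR results can coexist in one printf() call
   * without aliasing each other. */
  #define NB_ITOA_STR 10
  static char itoa_str[NB_ITOA_STR][24];
  static unsigned int itoa_idx;

  static const char *u2a(unsigned long v)
  {
          char *buf = itoa_str[itoa_idx++ % NB_ITOA_STR];
          snprintf(buf, sizeof(itoa_str[0]), "%lu", v);
          return buf;
  }

For example, printf("%s/%s", u2a(a), u2a(b)) now prints two distinct
values instead of twice the same one.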
Frontend, listener, server and backend detailed counters are now spread
over several lines in a table when the pointer hovers over the <u> area.
The values are much more readable, and the extra space gained this way
made it possible to report some percentages.
Note: some inconsistencies still exist between some counters. For example,
the backend's cum_conn is increased when a session is assigned instead
of increasing cum_sess.
Using titles to report detailed information is not convenient. The
browser decides to wrap the line where it wants, generally the box
quickly fades away, and it's not possible to copy-paste the text
from the box.
By using two levels, we can make a block appear/disappear depending
on whether its parent is being hovered or not, for example :
.tip { display: none; }
u:hover .tip { display: block; }
or better:
.tip { display: block; visibility: hidden; }
u:hover .tip { visibility: visible; }
Toggling visibility ensures that the place required to display the block
is always reserved. This is important to display boxes that are close to
the edges.
Then using <span class="tip">this is a box</span> will make the text
appear only when the upper <u> is hovered.
But this still adds a lot of text. So instead we use a generic <div> tag
which we don't use anywhere else. That way we don't have to specify a
class :
div { display: block; visibility: hidden; }
u:hover div { visibility: visible; }
This works pretty well, even in old browsers from 2005.
This commit does not change the display format, it only replaces the
title attribute with the div tag. Later commits will adjust the layout.
At the moment, we need trash chunks almost everywhere and the only
correctly implemented one is in the sample code. Let's move this to
the chunks so that all other places can use this allocator.
Additionally, the get_trash_chunk() function now really returns two
different chunks. Previously it used to always overwrite the same
chunk and point it to a different buffer, which was a bit tricky
because it's not obvious that two consecutive results do alias each
other.
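A minimal sketch of the two-chunk idea (simplified, not the exact code) :

  struct chunk {
          char *str;
          int size;
          int len;
  };

  static char trash_buf1[8192], trash_buf2[8192];
  static struct chunk trash_chunk1 = { trash_buf1, sizeof(trash_buf1), 0 };
  static struct chunk trash_chunk2 = { trash_buf2, sizeof(trash_buf2), 0 };

  /* alternate between the two chunks so that two consecutive
   * callers never alias each other's result */
  struct chunk *get_trash_chunk(void)
  {
          static struct chunk *current;

          current = (current == &trash_chunk1) ? &trash_chunk2 : &trash_chunk1;
          current->len = 0;
          return current;
  }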
The commit above improved error reporting during log parsing, but as
a result, some shared strings such as httplog_format are truncated
during parsing. This is observable upon startup because the second
proxy to use httplog emits a warning.
Let's have the logformat parser duplicate the string while parsing it.
All the functions that were called "http" or "raw" have been cleaned up
so that they either support only HTML and use that word in their name,
or support both HTML and CSV and are usable from both the HTTP and the
CLI entry points.
stats_http_dump() now explicitly checks for HTML mode before adding
HTML-specific headers/trailers, and has been renamed since it's now
used by both entry points.
Some more duplicated code could be removed. The patch looks big but it's
mostly due to re-indents resulting from the moves and comment updates.
The calling sequences now look like this :
cli_io_handler()
-> stats_dump_sess_to_buffer() // "show sess"
-> stats_dump_errors_to_buffer() // "show errors"
-> stats_dump_info_to_buffer() // "show info"
-> stats_dump_stat_to_buffer() // "show stat"
-> stats_dump_csv_header()
-> stats_dump_proxy_to_buffer()
-> stats_dump_fe_stats()
-> stats_dump_li_stats()
-> stats_dump_sv_stats()
-> stats_dump_be_stats()
http_stats_io_handler()
-> stats_dump_stat_to_buffer() // same as above, but used for CSV or HTML
-> stats_dump_csv_header() // emits the CSV headers (same as above)
-> stats_dump_html_head() // emits the HTML headers
-> stats_dump_html_info() // emits the equivalent of "show info" at the top
-> stats_dump_proxy_to_buffer() // same as above, valid for CSV and HTML
-> stats_dump_html_px_hdr()
-> stats_dump_fe_stats()
-> stats_dump_li_stats()
-> stats_dump_sv_stats()
-> stats_dump_be_stats()
-> stats_dump_html_px_end()
-> stats_dump_html_end() // emits HTML trailer
The HTTP header injection performed in dumpstats when responding or
when redirecting a POST request has nothing to do in dumpstats. It
does not use any state from the stats and is 100% HTTP. Let's build
those headers in the HTTP core, and have dumpstats only produce stats.
The dumpstats code looks like a spaghetti plate. Several functions are
supposed to be able to do several things but rely on complex states to
dispatch the work to independent functions. Most of the HTML output is
performed within the switch/case statements of the whole state machine.
Let's clean this up by adding new functions to emit the data and have
a few more iterators to avoid relying on such complex states.
The new stats dump sequence looks like this for CLI and for HTTP :
cli_io_handler()
-> stats_dump_sess_to_buffer() // "show sess"
-> stats_dump_errors_to_buffer() // "show errors"
-> stats_dump_raw_info_to_buffer() // "show info"
-> stats_dump_raw_info()
-> stats_dump_raw_stat_to_buffer() // "show stat"
-> stats_dump_csv_header()
-> stats_dump_proxy()
-> stats_dump_px_hdr()
-> stats_dump_fe_stats()
-> stats_dump_li_stats()
-> stats_dump_sv_stats()
-> stats_dump_be_stats()
-> stats_dump_px_end()
http_stats_io_handler()
-> stats_http_redir()
-> stats_dump_http() // also emits the HTTP headers
-> stats_dump_html_head() // emits the HTML headers
-> stats_dump_csv_header() // emits the CSV headers (same as above)
-> stats_dump_http_info() // note: ignores non-HTML output
-> stats_dump_proxy() // same as above
-> stats_dump_http_end() // emits HTML trailer
Using %[expression] it becomes possible to make the log engine fetch
some samples from the request or the response and provide them in the
logs. Note that this feature is still limited: it does not yet allow
applying converters, limiting the output length, or specifying the
direction to fetch when a fetch function works in both directions.
However it's quite convenient to log SSL information or to include some
information that is used in stick tables.
It is worth noting that this has been done in the generic log format
handler, which means that the same information may be used to build the
unique-id header and to pass the information to a backend server.
The log-format parser reached a limit making it hard to add new features.
It also suffers from a weak handling of certain incorrect corner cases,
for example "%{foo}" is emitted as a litteral while syntactically it's an
argument to no variable. Also the argument parser had to redo some of the
job with some cases causing minor memory leaks (eg: ignored args).
This work aims at improving the situation so that slightly better reporting
is possible and that it becomes possible to extend the log format. The code
has a few more states but looks significantly simpler. The parser is now
capable of reporting ignored arguments and truncated lines.
The <type> argument was checked against LOG_FMT_* but it was passed as
LF_* which are two independent enums. It happens that the first 3 entries
in these enums do match, but this broke some experimental changes which
required another state, so let's fix this now.
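An illustrative reduction of the problem (names made up) :

  /* Two independent enums whose first three entries happen to share
   * values : passing one where the other is expected goes unnoticed
   * until a fourth state is added. */
  enum log_fmt  { LOG_FMT_TEXT, LOG_FMT_SEP, LOG_FMT_VAR };
  enum lf_state { LF_TEXT,      LF_SEP,      LF_VAR, LF_NEWSTATE };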
Some log tokens have evolved in a way that is not completely logical.
For example, frontend tokens sometimes begin with an 'f' and sometimes
with an 'F'. Same for backend and server.
So let's change a few cases without disrupting compatibility with existing
setups :
Bi => bi
Bp => bp
Ci => ci
Cp => cp
Fi => fi
Fp => fp
Si => si
Sp => sp
cc => CC
cs => CS
st => ST
The old ones are still supported but deprecated and will be unsupported by
the 1.5 release. However, a warning message is emitted when they're
encountered, indicating what token should be used to replace them.
When log format arguments are specified within braces with no '+' nor '-'
prefix, a NULL string is compared with known keywords, causing a crash.
This only happens during parsing so it does not affect runtime processing.