This was made to support epoll on patched 2.4 kernels, and on early 2.6
using alternative libcs thanks to the arch-specific syscall definitions.
All the features we support have been around since 2.6.2 and present in
glibc since 2.3.2, neither of which is found in the field anymore. Let's
simply drop this and use epoll normally.
The accept4() syscall has been present for a while now, so there is no
longer any reason to maintain our own arch-specific syscall implementation
for systems lacking it in libc but having it in the kernel.
This was introduced 10 years ago to squeeze a few CPU cycles per syscall
on 32-bit x86 machines and was already quite old by then, requiring support
for it to be explicitly enabled in the kernel. We don't even know if it
still builds, let alone if it works at all on recent kernels! Let's
completely drop this now.
I'm quite fed up with having to rebuild everything from scratch after each
and every "make reg-tests", especially during bisects. The only reason for
this is that no build options are passed to make for reg-tests, so the
.build_opts file gets modified again, resulting in a change upon the next
build.
Let's just keep this file out of the dependency check for make reg-tests.
Statically building for i386/x86_64 on linux+glibc 2.18 fails in rt with
undefined references to pthread_attr_init and a few others. Let's just swap
the two libs in order to fix this.
When a panic() occurs due to a stuck thread, we'll try to dump a
backtrace of this thread if the config directive USE_BACKTRACE is
set (which is the case on linux+glibc). For this we use the
backtrace() call provided by glibc and iterate the pointers through
resolve_sym_name(). In order to minimize the output (which is limited
to one buffer), we only do this for stuck threads, and we start the
dump above ha_panic()/ha_thread_dump_all_to_trash(), and stop when
meeting known points such as main/run_tasks_from_list/run_poll_loop.
If it is enabled without USE_DL, the dump will still be complete, but with
no symbol details other than all pointers being given relative to main,
which is still better than nothing.
The new USE_BACKTRACE config option is enabled by default on glibc since
it has been present for ages. When it is set, the export-dynamic linker
option is enabled so that all non-static symbols are properly resolved.
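Below is a minimal standalone sketch of the mechanism described above
(illustrative only, not the actual HAProxy code): glibc's backtrace()
captures the return addresses and dladdr() resolves them, which is what
requires linking with -ldl and passing --export-dynamic so that non-static
symbols remain visible.

    #define _GNU_SOURCE
    #include <execinfo.h>
    #include <dlfcn.h>
    #include <stdio.h>

    static void dump_backtrace(void)
    {
        void *callers[64];
        int depth = backtrace(callers, 64);

        for (int i = 0; i < depth; i++) {
            Dl_info dli;

            /* resolve each return address to <symbol+offset> when possible */
            if (dladdr(callers[i], &dli) && dli.dli_sname && dli.dli_saddr)
                fprintf(stderr, "  %p [%s+%#lx]\n", callers[i], dli.dli_sname,
                        (unsigned long)((char *)callers[i] - (char *)dli.dli_saddr));
            else
                fprintf(stderr, "  %p\n", callers[i]);
        }
    }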
For a very long time we used to build without strict aliasing, mostly
because of a few places in the stick-tables code that we initially didn't
know how to deal with. The problem with doing this is that it encourages
writing possibly incorrect code, such as the few SSL sample fetch functions
that were recently fixed.
All places causing aliasing errors on x86_64, i586, armv8, armv7 and
mips were fixed, so it's about time to re-enable the warning hoping to
catch such errors early in the development cycle. As a bonus, this
removed about 5kB of code.
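The pattern involved is typically the one below (an illustrative example,
not taken from the HAProxy sources): type-punning through a cast violates
the aliasing rules, while memcpy() expresses the same intent legally and
compiles to the same code on modern compilers.

    #include <stdint.h>
    #include <string.h>

    /* BAD: *(const uint32_t *)buf reads a char buffer through an
     * incompatible pointer type and may be miscompiled once
     * optimizations and -fstrict-aliasing are enabled.
     *
     * OK: copy the bytes instead; the compiler turns this into a
     * plain 32-bit load.
     */
    static uint32_t read_u32(const char *buf)
    {
        uint32_t v;
        memcpy(&v, buf, sizeof(v));
        return v;
    }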
This used to be a minor optimization on ix86 where registers are scarce
and the calling convention not very efficient, but this platform is not
relevant enough anymore to warrant all this dirt in the code for the sake
of saving 1 or 2% of performance. Modern platforms don't use this at all
since their calling convention already defaults to using several registers,
so better get rid of it once and for all.
As haproxy won't build on AIX 7.2 using the old "aix52" TARGET, a new
TARGET was introduced which adds two special CFLAGS to prevent the
loading of AIX's xmem.h and var.h. This is done by defining the
corresponding include guards _H_XMEM and _H_VAR. Without excluding
those header files the build fails because of redefinition errors:
1)
CC src/mux_fcgi.o
In file included from /usr/include/sys/uio.h:90,
from /opt/freeware/lib/gcc/powerpc-ibm-aix7.1.0.0/8.3.0/include-fixed/sys/socket.h:104,
from include/common/compat.h:32,
from include/common/cfgparse.h:25,
from src/mux_fcgi.c:13:
src/mux_fcgi.c:204:13: error: expected ':', ',', ';', '}' or '__attribute__' before '.' token
struct ist rem_addr;
^~~~~~~~
2)
CC src/cfgparse-listen.o
In file included from include/types/arg.h:31,
from include/types/acl.h:29,
from include/types/proxy.h:41,
from include/proto/log.h:34,
from include/common/cfgparse.h:30,
from src/mux_h2.c:13:
include/types/vars.h:30:8: error: redefinition of 'struct var'
struct var {
^~~
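The trick works because AIX's headers are wrapped in classic include
guards, roughly as sketched below, so pre-defining the guard macro on the
command line (-D_H_XMEM -D_H_VAR) makes the preprocessor skip the whole
header body and its conflicting declarations:

    /* schematic view of /usr/include/sys/xmem.h */
    #ifndef _H_XMEM
    #define _H_XMEM
    /* ... declarations and macros that clash with identifiers used
     * by HAProxy ...
     */
    #endif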
Furthermore, to enable multithreading via USE_THREAD, the atomic
library was added to the LDFLAGS. Finally, two new CPUs were added
to simplify the usage of power8 and power9 optimizations.
This TARGET was only tested on GCC 8.3 and may or may not work on
IBM's native C-compiler (XLC).
Should be backported to 2.1.
After a number of reorganizations, the addition of fcgi and the removal of
the legacy mode, some late files ended up being slow to build and were
slowing down the parallel build. Let's reorder them based on the build
time. A full build went down from 8.3-9.2s to 6.8s.
There were very few entries to fix and this warning, while often
wrong, can actually spot future issues. If it can help developers
adjust their code in the future to make it more robust, it's not
necessarily that bad. Let's re-enable it and see how it goes.
According to issue #294, some gcc versions suspect that developers are
having trouble dealing with string offsets and now emit another new
childish warning when mapping indexes to characters. Instead of annoying
developers each time it happens and asking them to modify their valid code,
let's just get rid of this absurd warning.
This multiplexer is only available on the backend side. It may handle
multiplexed connections if the FCGI application supports it. An FCGI
application must be configured on the backend for the mux to be used. Unless
they are redefined during request processing by the FCGI filter, this mux
handles all mandatory parameters.
There is a limitation in the way requests are processed: the parameters
must be encoded into a single PARAMS record. This means that, once encoded,
all HTTP headers and FCGI parameters must be small enough to be stored in a
buffer. Otherwise, an internal processing error is returned.
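For reference, this is roughly how a single name/value pair is laid out in
the body of a PARAMS record per the FastCGI specification (an illustrative
sketch, not the mux's own encoder; the helper names are made up): lengths
below 128 take one byte, larger ones take four bytes with the high bit set,
and every pair encoded this way must fit in the single record.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* encode a FastCGI variable-length length field, return bytes used */
    static size_t fcgi_encode_len(uint8_t *out, size_t len)
    {
        if (len < 0x80) {
            out[0] = (uint8_t)len;
            return 1;
        }
        out[0] = (uint8_t)((len >> 24) | 0x80);
        out[1] = (uint8_t)(len >> 16);
        out[2] = (uint8_t)(len >> 8);
        out[3] = (uint8_t)len;
        return 4;
    }

    /* append "<nlen><vlen><name><value>" to <out>, return bytes written */
    static size_t fcgi_encode_param(uint8_t *out, const char *name, const char *value)
    {
        size_t nlen = strlen(name), vlen = strlen(value), pos = 0;

        pos += fcgi_encode_len(out + pos, nlen);
        pos += fcgi_encode_len(out + pos, vlen);
        memcpy(out + pos, name, nlen);  pos += nlen;
        memcpy(out + pos, value, vlen); pos += vlen;
        return pos;
    }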
The FCGI application handles all the configuration parameters used to format
requests sent to an application. The configuration of an application is grouped
in a dedicated section (fcgi-app <name>) and referenced in a backend to be used
(use-fcgi-app <name>). To be valid, an FCGI application must at least define a
document root. But it is also possible to set the default index, a regex to
split the script name and the path-info from the request URI, parameters to set
or unset... In addition, this patch also adds an FCGI filter, responsible for
all processing on a stream.
To avoid code duplication in the future FCGI mux, functions parsing H1
messages and converting them into HTX have been moved to the file h1_htx.c.
Some specific parts remain in the H1 mux, but most of the parsing is now
generic.
Our circular buffers are well suited for being used as ring buffers for
not-so-structured data. The mechanism here consists of making room in a
buffer before inserting a new record which is prefixed by its size, and
looking up the next record based on the previous one's offset and size. We
can have up to 255 consumers watching for data (dump in progress, tail),
which guarantees that entries are not recycled while they're being dumped.
The complete representation is described in the header file. For now only
ring_new(), ring_resize() and ring_free() are created.
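A stripped-down model of the insertion logic is sketched below (a
simplified illustration of the principle, not the actual struct ring or its
API): each record is stored as a one-byte length followed by the payload,
and the writer first recycles whole records from the head until the new
one fits.

    #include <stddef.h>
    #include <stdint.h>

    struct msg_ring {
        char  *area;   /* storage */
        size_t size;   /* total storage size */
        size_t head;   /* offset of the oldest record */
        size_t data;   /* number of bytes currently used */
    };

    /* append <len> bytes (len <= 255), recycling old records as needed */
    static void ring_put(struct msg_ring *r, const void *msg, uint8_t len)
    {
        size_t needed = 1 + (size_t)len;
        size_t tail;

        if (needed > r->size)
            return; /* cannot fit at all */

        /* make room: advance head past whole records until it fits */
        while (r->size - r->data < needed) {
            uint8_t old = (uint8_t)r->area[r->head];
            r->head = (r->head + 1 + old) % r->size;
            r->data -= 1 + (size_t)old;
        }

        /* write the length prefix then the payload, wrapping as needed */
        tail = (r->head + r->data) % r->size;
        r->area[tail] = (char)len;
        tail = (tail + 1) % r->size;
        for (uint8_t i = 0; i < len; i++) {
            r->area[tail] = ((const char *)msg)[i];
            tail = (tail + 1) % r->size;
        }
        r->data += needed;
    }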
The principle of this subsystem will be to support taking live traces
at various places in the code with conditional triggers, filters, and
ability to lock on some elements. The traces will support typed events
and will be sent into sinks made of ring buffers, file descriptors or
remote servers.
The principle will be to be able to dispatch events to various destinations
called "sinks". This is already done in part in logs where log servers can
be either a UDP socket or a file descriptor. This will be needed with the
new trace subsystem where we may also want to add ring buffers. And it turns
out that all such destinations make sense at all places. Logs may need to be
sent to a TCP server via a ring buffer, or consulted from the CLI. Trace
events may need to be sent to stdout/stderr as well as to remote log servers.
This patch creates a new structure "sink" aiming at addressing these similar
needs. The goal is to merge together what is common to all of them, such as
the output format, the dropped events count, etc, and also keep separately
the target identification (network address, file descriptor). Provisions
were made to have a "waiter" on the sink. For a TCP log server it will be
the task to wake up after writing to the log buffer. For a ring buffer, it
could be the list of watchers on the CLI running a "tail" operation and
waiting for new events. A lock was also placed in the struct since many
operations will require some locking, including the FD ones. The output
formats cover those in use by logs and two extra ones prepending the ISO
time in front of the message (convenient for stdio/buffer).
For now only the generic infrastructure is present, no type-specific
output is implemented. There's the sink_write() function which prepares
and formats a message to be sent, trying hard to avoid copies and only
using pointer manipulation, where the type-specific code just has to be
added. Dropped messages are already counted (for now 100% drop). The
message is put into an iovec array as it will be trivial to use with
file descriptors and sockets.
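For an fd-based sink, the delivery step could then look like the hedged
sketch below (illustrative names, not the actual sink API): the prefix, the
message and the trailing LF stay in separate iovec entries so nothing has
to be copied into a contiguous buffer before writev().

    #include <string.h>
    #include <sys/uio.h>

    /* emit "<pfx><msg>\n" to <fd> in a single writev() call */
    static ssize_t fd_sink_emit(int fd, const char *pfx, const char *msg)
    {
        struct iovec iov[3];

        iov[0].iov_base = (void *)pfx;  iov[0].iov_len = strlen(pfx);
        iov[1].iov_base = (void *)msg;  iov[1].iov_len = strlen(msg);
        iov[2].iov_base = "\n";         iov[2].iov_len = 1;

        return writev(fd, iov, 3);
    }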
The function call tracing code is quite old and was never ported to
support threads. It's not even sure whether it still works well, but
at least its presence creates confusion for future work so let's rename
it to calltrace.c and add a comment about its lack of thread-safety.
The old module proto_http does not exist anymore. All code dedicated to
HTTP analysis is now grouped in the file proto_htx.c. So, to finish the
polishing after removing the legacy HTTP code, the proto_htx.{c,h} files
have been moved to http_ana.{c,h}.
In addition, all HTX analyzers and related functions prefixed with "htx_" have
been renamed to start with "http_" instead.
First of all, all legacy HTTP analyzers and all functions exclusively used by
them were removed. So most of the functions in proto_http.{c,h} were
removed. Only functions to deal with the HTTP transaction have been kept. Then,
http_msg and hdr_idx modules were entirely removed. And finally the structure
http_msg was stripped of all the information only useful to the legacy
HTTP code. The structure hdr_ctx was also removed because it is now unused,
just like the unused states in the enum h1_state. Note that the memory pool
"hdr_idx" was removed and
"http_txn" is now smaller.
Solaris's default shell doesn't support substitutions at the beginning or
end of variables, which are still used to determine the version based on
git. Since we added --abbrev=0 we don't need the last one. And using cut
it's trivial to replace the first one, actually simplifying the whole
expression.
This may be backported to all stable branches.
Solaris's sed doesn't take the 'p' argument on the 's' command so
nothing is printed. Just passing ';p' fixes this without affecting
other implementations. Additionally, optional characters cannot be
matched using a question mark, which is always searched verbatim, so
the leading '#' wasn't stripped. Using \{0,1\} works fine everywhere
so let's use this instead.
The 'tr' command on Solaris doesn't conform to POSIX and requires
brackets around ranges. So the sequence '0-9' is understood as the
3 characters '0', '-', and '9'. This causes tagged versions (those
with no commits after the last tag) to be numbered as an empty
string, resulting in an error being reported while computing the
version number.
All implementations support '[:space:]' to delete leading spaces,
so let's use this instead.
This may be backported to all stable versions.
getaddrinfo() has been available since glibc 2.3.3 or so and is generally
enabled by distro packagers. The main reason for not enabling it on Linux
in the past is that it was known broken on some libc alternatives. It's
the right moment to enable it by default with glibc.
TCP Fast Open is supported on all supported Linux kernels and on all
kernels shipped in supported distros, except the older 2.6.32 that
comes with RHEL6. However, the option is harmless, will not prevent
building, and smoothly falls back even if forcefully enabled, so
it makes sense to enable it by default. It's still possible to pass
"USE_TFO=" to force it disabled if really desired.
The oldest kernel found on a supported Linux distro (2.6.32 + backports on
RHEL6) supports network namespaces, so we have no reason not to enable
them by default on the linux-glibc target.
We've just removed old linux targets "linux22", "linux24", "linux24e",
"linux26" and "linux2628" and it's likely that many build scripts and
packages will still reference these. So let's have the makefile detect
them and reject the build with instructions instead of silently building
with incorrect options.
The linux targets have become more than confusing over time. We used to
have "linux2628" to match the features available in kernels 2.6.28 and
above, without consideration for the libc, and due to many new features
appearing later in kernels, some other options were added that are not
enabled by default in linux2628, so this target doesn't make any sense
anymore. The older ones (linux 2.2, linux 2.4, ...) do not make sense
either since these versions are not supported anymore. Let's clean things
up by creating a new "linux-glibc" target that matches what is available
by default on Linux kernels and glibc present on supported distros at the
time of release. Other libc implementations may use a custom or generic
target or be added later if needed.
All the older linux targets were removed.
The list of enabled and disabled build options now appears separately
at the end of "make help". This is convenient to know what is enabled
by default on a given target. For example :
$ make help TARGET=linux2628
Enabled features for TARGET 'linux2628' (disable with 'USE_xxx=') :
EPOLL NETFILTER POLL THREAD TPROXY LINUX_TPROXY LINUX_SPLICE LIBCRYPT
CRYPT_H FUTEX ACCEPT4 CPU_AFFINITY DL RT PRCTL THREAD_DUMP
Disabled features for TARGET 'linux2628' (enable with 'USE_xxx=1') :
KQUEUE MY_EPOLL MY_SPLICE PCRE PCRE_JIT PCRE2 PCRE2_JIT PRIVATE_CACHE
PTHREAD_PSHARED REGPARM STATIC_PCRE STATIC_PCRE2 VSYSCALL GETADDRINFO
OPENSSL LUA MY_ACCEPT4 ZLIB SLZ TFO NS DEVICEATLAS 51DEGREES WURFL
SYSTEMD OBSOLETE_LINKER EVPORTS
Add a new XPRT that is used for non-SSL handshakes, such as the proxy
protocol or Netscaler, instead of taking care of them in conn_fd_handler().
This XPRT is installed when any of those is used, and it removes itself once
the handshake is done.
This should allow us to remove the distinction between CO_FL_SOCK* and
CO_FL_XPRT*.
This patch adds minimalistic definitions to implement a new dictionary data
structure, made of an ebtree of ebpt_node structs with strings as keys. Note
that this has nothing to do with a real dictionary data structure (a map of
keys associated with values).
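A hedged sketch of what such an entry could look like when built on the
ebtree indirect-string nodes (the names below are illustrative assumptions,
not necessarily the final dict API):

    /* entry of the dictionary: the ebpt_node's key points to the string */
    struct dict_entry {
        struct ebpt_node node;
        /* payload fields (refcount, length, ...) would come here */
    };

    struct dict {
        const char *name;
        struct eb_root values;  /* tree of dict_entry, indexed by string */
    };

Insertion and lookup would then rely on the indirect-string helpers from
ebistree.h (ebis_insert()/ebis_lookup()), assuming those are the ones used.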
We've been dealing with a workaround for a bug in splice that used to
affect kernel versions 2.6.25 to 2.6.27.12 and which was fixed 10 years ago
in kernel versions which are not supported anymore. Given that people
who would use a kernel in such a range would face much more serious
stability and security issues, it's about time to get rid of this
workaround and of the ASSUME_SPLICE_WORKS build option used to disable
it.
We still have quite a number of build macros which are mapped 1:1 to a
USE_something setting in the makefile but which have a different name.
This patch cleans this up by renaming them to use the USE_something
one, allowing us to clean up the makefile and making it more obvious when
reading the code what build option needs to be added.
The following renames were done :
ENABLE_POLL -> USE_POLL
ENABLE_EPOLL -> USE_EPOLL
ENABLE_KQUEUE -> USE_KQUEUE
ENABLE_EVPORTS -> USE_EVPORTS
TPROXY -> USE_TPROXY
NETFILTER -> USE_NETFILTER
NEED_CRYPT_H -> USE_CRYPT_H
CONFIG_HAP_CRYPT -> USE_LIBCRYPT
CONFIG_HAP_NS -> USE_NS
CONFIG_HAP_LINUX_SPLICE -> USE_LINUX_SPLICE
CONFIG_HAP_LINUX_TPROXY -> USE_LINUX_TPROXY
CONFIG_HAP_LINUX_VSYSCALL -> USE_LINUX_VSYSCALL
Since threads were introduced, we've naturally had a number of bugs
related to locking issues. In addition we've also got some issues
with corrupted lists in certain rare cases not necessarily involving
threads. Not only do these events cause a lot of trouble in production,
as it is very hard to detect that the process is stuck in a loop and no
longer delivers the service, but it's often difficult (or too late) to
collect more debugging information.
The patch presented here implements a lockup detection mechanism, also
known as "watchdog". The principle is that (on systems supporting it),
each thread will have its own CPU timer which progresses as the thread
consumes CPU cycles, and when a deadline is met, a signal is delivered
(SIGALRM here since it doesn't interrupt gdb by default).
The thread handling this signal (which is not necessarily the one which
triggered the timer) figures out the thread ID from the signal arguments and
checks if it's really stuck by looking at the time spent since the last exit
from poll() and by checking that the thread's scheduler is still alive
(so that even when dealing with configuration issues resulting in an insane
amount of tasks being called in turn, it is not possible to accidentally
trigger it). Checking the scheduler's activity will usually result in a
second chance, thus doubling the detection time.
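The timer setup roughly follows the pattern below (a simplified sketch of
the idea, with error handling omitted and names made up, not the actual
watchdog code): the timer is armed on the thread's CPU-time clock, so it
only fires once that much CPU has really been consumed, and the signal
carries a value identifying the thread that armed it.

    #include <pthread.h>
    #include <signal.h>
    #include <time.h>

    static timer_t wd_timer;

    static void wd_setup(int thread_id)
    {
        clockid_t clk;
        struct sigevent sev = { 0 };
        struct itimerspec its = { 0 };

        /* CPU-time clock of the calling thread */
        pthread_getcpuclockid(pthread_self(), &clk);

        /* deliver SIGALRM carrying the thread id when the timer expires */
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo  = SIGALRM;
        sev.sigev_value.sival_int = thread_id;
        timer_create(clk, &sev, &wd_timer);

        /* one second of CPU time consumed by this thread */
        its.it_value.tv_sec = 1;
        timer_settime(wd_timer, 0, &its, NULL);
    }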
In order not to incorrectly flag a thread as being the cause of the
lockup, the thread_harmless_mask is checked : a thread could very well
be spinning on itself waiting for all other threads to join (typically
what happens when issuing "show sess"). In this case, once all threads
but one (or two) have joined, all the innocent ones are marked harmless
and will not trigger the timer. Only the ones not reacting will.
The deadline is set to one second, which already appears impossible to
reach, especially since it's 1 second of CPU usage, not elapsed time
with the CPU being preempted by other threads/processes/hypervisor. In
practice due to the scheduler's health verification it takes up to two
seconds to decide to panic.
Once all conditions are met, the goal is to crash from the offending
thread. So if it's the current one, we call ha_panic() otherwise the
signal is bounced to the offending thread which deals with it. This
will result in all threads being woken up in turn to dump their context,
the whole state is emitted on stderr in hope that it can be logged, and
the process aborts, leaving a chance for a core to be dumped and for a
service manager to restart it.
An alternative mechanism could be implemented for systems unable to
wake up a thread once its CPU clock reaches a deadline (e.g. FreeBSD).
Instead of waking the timer each and every deadline, it is possible to
use a standard timer which is reset each time we leave poll(). Since
the signal handler rechecks the CPU consumption this will also work.
However a totally idle process may trigger it from time to time which
may or may not confuse some debugging sessions. The same is true for
alarm() which could be another option for systems not having such a
broad choice of timers (but it seems that in this case they will not
have per-thread CPU measurements available either).
The feature is currently implemented only when threads are enabled in
order to keep the code clean, since the main purpose is to detect and
address inter-thread deadlocks. But if it proves useful for other
situations this condition might be relaxed.
Event ports are the Solaris equivalent of the kqueue/epoll polling classes.
Code is based
on https://github.com/joyent/haproxy-1.8/tree/joyent/dev-v1.8.8.
Event ports are available only on SunOS systems derived from
Solaris 10 and later (including illumos systems).
-fomit-frame-pointer is commonly avoided because tools like dtrace
need the frame pointer. Remove it from the Makefile and let the builder's
environment do the job.
This patch could be backported to 1.9.
When haproxy is built with DEBUG_DEV, the following commands are added
to the CLI :
debug dev close <fd> : close this file descriptor
debug dev delay [ms] : sleep this long
debug dev exec [cmd] ... : show this command's output
debug dev exit [code] : immediately exit the process
debug dev hex <addr> [len]: dump a memory area
debug dev log [msg] ... : send this msg to global logs
debug dev loop [ms] : loop this long
debug dev panic : immediately trigger a panic
debug dev tkill [thr] [sig] : send signal to thread
These are essentially aimed at helping developers trigger certain
conditions and are expected to be complemented over time.