addresses and verify that the remote nodes have/keep a consistent view of
assigned addresses.
If a remote node has an inconsistent view of addresses vis-à-vis the recovery
master, this will trigger a full IP reallocation.
(This used to be ctdb commit f3bf2ab61f8dbbc806ec23a68a87aaedd458e712)
(Based on earlier version from Ronnie which modified tdb; this one
is standalone).
When storing records in a tdb that has "automatic seqnum updates", also
check whether the actual data for the record has changed.
If it has not changed at all, except possibly for the header,
this is most likely just a dmaster migration operation, in which case
we want to write the record to the tdb but we do not want the tdb
sequence number to be increased.
This resolves the problem of notify.tdb being thrashed under load:
the heuristic in smbd of only rereading it when the sequence number
increases (which should happen rarely) breaks down.
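A minimal standalone sketch of that check (illustrative only, not the actual
ctdb code; the header size and function names are assumptions):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RECORD_HEADER_SIZE 24   /* assumed size of the per-record header */

static bool payload_changed(const uint8_t *old, size_t old_len,
                            const uint8_t *new, size_t new_len)
{
	/* Records shorter than a header, or of different length, count as changed. */
	if (old_len < RECORD_HEADER_SIZE || new_len < RECORD_HEADER_SIZE)
		return true;
	if (old_len != new_len)
		return true;
	/* Compare only the bytes after the header. */
	return memcmp(old + RECORD_HEADER_SIZE,
	              new + RECORD_HEADER_SIZE,
	              new_len - RECORD_HEADER_SIZE) != 0;
}

int main(void)
{
	uint8_t old_rec[32] = {0}, new_rec[32] = {0};
	uint64_t seqnum = 0;

	new_rec[4] = 0xff;               /* only the header differs ... */
	if (payload_changed(old_rec, sizeof(old_rec), new_rec, sizeof(new_rec)))
		seqnum++;                /* ... so seqnum stays at 0 */

	new_rec[RECORD_HEADER_SIZE] = 1; /* payload differs: bump seqnum */
	if (payload_changed(old_rec, sizeof(old_rec), new_rec, sizeof(new_rec)))
		seqnum++;

	printf("seqnum = %llu\n", (unsigned long long)seqnum);  /* prints 1 */
	return 0;
}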
Before, running nbench --num-progs=512 across 4 nodes, we saw numbers like:
512 1496 118.33 MB/sec execute 60 sec latency 0.00 msec
And turning on latency tracking, this was typical in the logs:
ctdbd: High latency 9380914.000000s for operation lockwait on database notify.tdb
After this commit:
512 2451 143.85 MB/sec execute 60 sec latency 0.00 msec
And no more latency messages...
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 9ed2f8b2fcb7e3f0d795eef22cfa317066490709)
packets, to prevent the queue from growing excessively if smbd has blocked.
This could cause traverse packets to be discarded when the main smbd
daemon traverses a database while a recovery is in progress
(the recovery sends a "reconfigured" message to smbd, causing an avalanche
of unlock messages to be sent across the cluster).
This avalanche of messages could also cause the traverse message to be
discarded, leaving the main smbd process hanging indefinitely, waiting
for a traverse message that will never arrive.
Bump the maximum queue length before starting to discard messages from
1000 to 1000000 and at the same time rework the queueing slightly so we
can append messages cheaply to the queue instead of walking the list
from head to tail every time.
(This used to be ctdb commit 59ba5d7f80e0465e5076533374fb9ee862ed7bb6)
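A rough sketch of the cheap-append rework, assuming a plain singly linked
packet list with a remembered tail (illustrative, not the actual ctdb_queue
code):

#include <stdio.h>
#include <stdlib.h>

struct pkt {
	struct pkt *next;
	size_t length;
	/* payload would follow */
};

struct queue {
	struct pkt *head;
	struct pkt *tail;   /* remembers the last element */
};

static void queue_append(struct queue *q, struct pkt *p)
{
	p->next = NULL;
	if (q->tail)
		q->tail->next = p;   /* O(1): no walk from head to tail */
	else
		q->head = p;
	q->tail = p;
}

int main(void)
{
	struct queue q = { NULL, NULL };
	for (int i = 0; i < 3; i++) {
		struct pkt *p = calloc(1, sizeof(*p));
		p->length = (size_t)i;
		queue_append(&q, p);
	}
	printf("last queued packet length: %zu\n", q.tail->length);
	return 0;
}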
This is needed because the "startup" event runs after the initial recovery,
but we need to do some actions before the initial recovery.
metze
(This used to be ctdb commit e953808449c102258abb6cba6f4abf486dda3b82)
configurable using --log-ringbuf-size=<num-entries>.
Add an entry in the sysconfig file to set this persistently.
(This used to be ctdb commit c79c2da69bc352f509e7fca4b9172a4b7f23c0f8)
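For illustration, a minimal sketch of a fixed-size log ring buffer like the
one the option above sizes; the entry size and struct names are made-up
assumptions, not the ctdb implementation:

#include <stdio.h>
#include <string.h>

#define MAX_ENTRY_LEN 128

struct log_ring {
	char (*entries)[MAX_ENTRY_LEN];
	unsigned int size;     /* set from --log-ringbuf-size=<num-entries> */
	unsigned int next;     /* slot the next message overwrites */
	unsigned int used;
};

static void ring_add(struct log_ring *r, const char *msg)
{
	snprintf(r->entries[r->next], MAX_ENTRY_LEN, "%s", msg);
	r->next = (r->next + 1) % r->size;
	if (r->used < r->size)
		r->used++;
}

int main(void)
{
	static char buf[4][MAX_ENTRY_LEN];
	struct log_ring r = { buf, 4, 0, 0 };

	for (int i = 0; i < 6; i++) {
		char line[MAX_ENTRY_LEN];
		snprintf(line, sizeof(line), "log entry %d", i);
		ring_add(&r, line);   /* oldest entries get overwritten */
	}
	printf("kept %u of 6 entries\n", r.used);
	return 0;
}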
We don't want ctdb stalling due to paging; this can be far worse than
scheduling delays. But if we simply do mlockall(MCL_FUTURE), it
increases the risk that mmap (ie. tdb open) or malloc will fail,
causing us to abort.
This patch is a compromise: we mlock all current pages (including
10k of future stack for expansion) and then relock when a client
asks us to open a TDB. We warn, but don't exit, if it fails.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 82f778e85440bc713d3f87c08ddc955d3cfce926)
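A rough sketch of the compromise, using the standard mlockall() call; the
error handling and the relock-on-TDB-open step are simplified assumptions
rather than the actual ctdb code:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

static void lock_current_memory(void)
{
	/* MCL_CURRENT only: do not force every future mmap/malloc page in,
	 * which is what makes MCL_FUTURE risky for tdb open and malloc. */
	if (mlockall(MCL_CURRENT) != 0) {
		fprintf(stderr, "Warning: could not lock memory: %s\n",
		        strerror(errno));
		/* carry on; a paging stall is better than aborting */
	}
}

int main(void)
{
	lock_current_memory();
	/* ... when a client asks us to open a TDB, the mapping grows,
	 * so the same call would be repeated to relock current pages ... */
	return 0;
}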
1) It's buggy. Code needs to be carefully written (ie. no busy
loops) to handle running with it, and we fork and run scripts.[1]
2) It makes debugging harder. If ctdbd loops (as has happened recently)
it can be extremely hard to get in and see what's happening. We've already
seen the valgrind hacks.
3) We have seen recent scheduler problems. Perhaps they are unrelated,
but removing this very unusual setup is unlikely to hurt.
4) It doesn't make anything faster. Under all but the most perverse of
circumstances, 99% of the cpu gives the same performance as 100%, and
we will always preempt normal processes anyway.
[1] I made this worse in 0fafdcb8d353 "eventscript: fork() a child for
each script" by removing the switch_from_server_to_client() which
restored it, but even that was only for monitor scripts. Others were
run with RT priority.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 482c302d46e2162d0cf552f8456bc49573ae729d)
We're going to need this so ctdb can query non-monitor status.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 53bc5ca23ca55a3ac63a440051f16716944a2a51)
in memory instead of dynamically allocated ones so that we reduce the pressure
on malloc/free.
(This used to be ctdb commit c5cbb95512f034abeec515579983bf7ac55eadd9)
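The fragment above describes moving to fixed, already-in-memory buffers. As a
purely illustrative sketch of that general pattern (not the ctdb code),
buffers can come from a small static pool, with malloc() only as a fallback:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define POOL_COUNT   16
#define POOL_BUFSIZE 1024

static char pool[POOL_COUNT][POOL_BUFSIZE];
static bool pool_used[POOL_COUNT];

static void *buf_get(size_t len)
{
	if (len <= POOL_BUFSIZE) {
		for (int i = 0; i < POOL_COUNT; i++) {
			if (!pool_used[i]) {
				pool_used[i] = true;
				return pool[i];
			}
		}
	}
	return malloc(len);          /* pool exhausted or request too big */
}

static void buf_put(void *p)
{
	for (int i = 0; i < POOL_COUNT; i++) {
		if (p == pool[i]) {
			pool_used[i] = false;
			return;
		}
	}
	free(p);                     /* was a malloc fallback */
}

int main(void)
{
	char *a = buf_get(100);      /* comes from the pool: no malloc */
	memset(a, 0, 100);
	buf_put(a);
	return 0;
}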
Wolfgang saw a talloc complaint about using freed memory in ctdb_tcp_read_cb.
His fix was to remove the talloc_free() in that function, which causes
loops when a socket is closed (as it does not get removed from the event
system), eg:
netcat 192.168.1.2 4379 < /dev/null
The real bug is that when we have more than one pending packet in the
queue, we loop calling the callback without any safeguards should that
callback free the queue (as it tends to do on invalid packets). This
can be reproduced by sending more than one bogus packet at once:
# Length word at start: 4 == empty packet (assumed little endian)
/usr/bin/printf \\4\\0\\0\\0\\4\\0\\0\\0 > /tmp/pkt
netcat 192.168.1.2 4379 < /tmp/pkt
Using a destructor we can check if the callback frees us, and exit
immediately. Elsewhere, we return after the callback anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 4d0523dd94fb07e860b3e8118691f93d1ef8d0fa)
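A sketch of the safeguard in plain C (the actual fix uses a talloc
destructor): leave a marker before each callback so the processing loop can
tell when the callback destroyed the queue and stop touching it immediately:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct queue {
	bool *destroyed;                   /* points into the caller's stack */
	void (*callback)(struct queue *q); /* may free the queue */
	int pending;                       /* packets still to process */
};

static void queue_free(struct queue *q)
{
	if (q->destroyed)
		*q->destroyed = true;      /* tell the read loop we are gone */
	free(q);
}

/* callback that frees the queue on a bogus packet, as described above */
static void bogus_packet_cb(struct queue *q)
{
	queue_free(q);
}

static void process_pending(struct queue *q)
{
	while (q->pending-- > 0) {
		bool destroyed = false;

		q->destroyed = &destroyed;
		q->callback(q);
		if (destroyed)
			return;            /* do not touch freed memory */
		q->destroyed = NULL;
	}
}

int main(void)
{
	struct queue *q = calloc(1, sizeof(*q));
	q->callback = bogus_packet_cb;
	q->pending = 2;                    /* two bogus packets queued */
	process_pending(q);                /* exits safely after the first */
	printf("survived\n");
	return 0;
}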
make ctdb_queue_length() cheaper by using a counter variable instead of counting the number of packets each time.
(This used to be ctdb commit 331c6e3afd96d8b5e191153a631efdbdabb6ea33)
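A tiny sketch of the difference (illustrative, not the actual ctdb_queue
code): a counter maintained on add/remove makes the length query O(1)
instead of a walk over the packet list.

#include <stdio.h>
#include <stddef.h>

struct pkt {
	struct pkt *next;
};

struct queue {
	struct pkt *head;
	size_t out_queue_length;   /* bumped on add, dropped on remove */
};

/* before: walk the whole list on every length query */
static size_t queue_length_slow(const struct queue *q)
{
	size_t n = 0;
	for (const struct pkt *p = q->head; p != NULL; p = p->next)
		n++;
	return n;
}

/* after: just return the maintained counter */
static size_t queue_length_fast(const struct queue *q)
{
	return q->out_queue_length;
}

int main(void)
{
	struct pkt a = { NULL }, b = { &a };
	struct queue q = { &b, 2 };
	printf("slow=%zu fast=%zu\n",
	       queue_length_slow(&q), queue_length_fast(&q));
	return 0;
}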
Add a new tunable to control the maximum queue size we allow to a blocked client before we start discarding REQ_MESSAGES instead of queueing them for delivery.
This avoids queueing up a very large number of MESSAGEs that the Samba daemons
send to each other, destined for nodes that are blocked/banned/stopped for
extended periods.
(This used to be ctdb commit f76d6fed8f9630450263b9fa4b5fdf3493fb1e11)
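A rough sketch of the discard rule such a tunable would control; the tunable
name and the values used here are placeholders, not necessarily what ctdb
uses:

#include <stdbool.h>
#include <stdio.h>

#define REQ_MESSAGE 1
#define REQ_OTHER   2

struct client_queue {
	unsigned int length;                    /* packets currently queued */
	unsigned int max_queue_depth_drop_msg;  /* the new tunable (placeholder name) */
};

/* Returns true if the packet should be queued, false if it is dropped. */
static bool may_queue(const struct client_queue *q, int operation)
{
	if (operation == REQ_MESSAGE &&
	    q->length >= q->max_queue_depth_drop_msg) {
		return false;   /* blocked client: drop messages, not other packets */
	}
	return true;
}

int main(void)
{
	struct client_queue q = { 1000000, 1000000 };
	printf("queue REQ_MESSAGE: %d\n", may_queue(&q, REQ_MESSAGE)); /* 0: dropped */
	printf("queue other pkt:   %d\n", may_queue(&q, REQ_OTHER));   /* 1: kept    */
	return 0;
}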
so we can spot if there are leaks.
Plug two file descriptor leaks that occur when sending ARPs fails,
and one leak that occurs when we cannot parse the local address during TCP connection establishment.
(This used to be ctdb commit ddd089810a14efe4be6e1ff3eccaa604e4913c9e)
validate the input values used and refuse to set the debug level to an unknown value
(This used to be ctdb commit daec49cea1790bcc64599959faf2159dec2c5929)
Log this in "ctdb statistics".
Also add a variable "RecLockLatencyMs" that will cause an error to be logged every time it takes longer than this to access the reclock file.
(This used to be ctdb commit 042377ed803bb8f7ca9d6ea1a387427b7b8ba45a)
Set sin_port or sin6_port to 0, depending on sa_family.
Michael
Signed-off-by: Michael Adam <obnox@samba.org>
(This used to be ctdb commit e0c70110e241b065c42c1c07f32c3657bac5d98b)
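A small illustration of the change, zeroing the port field based on
sa_family:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static void zero_port(struct sockaddr *sa)
{
	switch (sa->sa_family) {
	case AF_INET:
		((struct sockaddr_in *)sa)->sin_port = 0;
		break;
	case AF_INET6:
		((struct sockaddr_in6 *)sa)->sin6_port = 0;
		break;
	}
}

int main(void)
{
	struct sockaddr_in sin;
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(4379);
	zero_port((struct sockaddr *)&sin);
	printf("port = %u\n", ntohs(sin.sin_port));   /* prints 0 */
	return 0;
}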
log the type of operation and the database name for all latencies higher
than a threshold.
(This used to be ctdb commit 1d581dcd507e8e13d7ae085ff4d6a9f3e2aaeba5)
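A minimal sketch of threshold-based latency logging: time the operation and
only log when the configured threshold is exceeded. The variable names and
threshold value are illustrative assumptions, with the log text modelled on
the message shown earlier:

#include <stdio.h>
#include <sys/time.h>

static double latency_threshold = 0.010;   /* seconds, e.g. 10 ms */

static double timeval_elapsed(const struct timeval *start)
{
	struct timeval now;
	gettimeofday(&now, NULL);
	return (now.tv_sec - start->tv_sec) +
	       (now.tv_usec - start->tv_usec) * 1.0e-6;
}

static void check_latency(const char *operation, const char *db_name,
                          const struct timeval *start)
{
	double latency = timeval_elapsed(start);
	if (latency > latency_threshold) {
		fprintf(stderr,
		        "High latency %.6fs for operation %s on database %s\n",
		        latency, operation, db_name);
	}
}

int main(void)
{
	struct timeval start;
	gettimeofday(&start, NULL);
	/* ... perform the database operation here ... */
	check_latency("lockwait", "notify.tdb", &start);
	return 0;
}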
older IPv4-only version of these controls.
We need this so that we are backward compatible with old versions of ctdb
and so that we can interoperate with an IPv4-only recmaster during a
rolling upgrade.
(This used to be ctdb commit 6b76c520f97127099bd9fbaa0fa7af1c61947fb7)
add support for sending the IPv6 equivalent of "gratuitous ARP", aka neighbor solicitation packets, from ctdb
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
(This used to be ctdb commit 0a38ea11af9237501f2951fee698a59b46f8750d)