It isn't currently possible to distinguish these 2 cases.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
When the number of fast path vacuuming runs is 0, a full vacuuming
run is done. This means the first run is a full run, which is almost
certainly not what is intended.
Combine the 2 conditionals to only flag a full vacuuming run when the
count exceeds the configured limit. This means that the
full_vacuum_run flag is set in both parent and child, but this is
harmless... and is better than getting it wrong.
Also tweak the comparison to be less-than-or-equal, since the zeroth
run needs to be counted.
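A minimal sketch of the combined check, using illustrative names
(fast_path_count for the per-database counter, fast_path_max for the
configured limit; the real identifiers are not shown above):

    /* Sketch only: names are illustrative. */
    bool full_vacuum_run = false;

    if (fast_path_max <= fast_path_count) {
            /* <= so the zeroth run is counted: the first run is
             * no longer a full run when the limit is non-zero. */
            full_vacuum_run = true;
            fast_path_count = 0;
    } else {
            fast_path_count++;
    }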
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
The lock may have been lost due to a failure in the underlying locking
mechanism. This could be due to quorum loss or similar. It is best
to call an election to confirm that this node should still be master.
At worst, the node will reelect itself, fail to take the lock and then
ban itself. This is a suitable outcome for a node that has been
partitioned from others in the cluster.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This provides finer resolution while still maintaining a reasonable
maximum. In this case the top bucket contains any hop counts
>= 16384 (2^14), compared to the current situation where the top
bucket contains hop counts >= 268435456 (2^28).
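For illustration, bucketing with doubling boundaries looks something
like this (a sketch, not the actual statistics code):

    #define MAX_COUNT_BUCKETS 16

    static unsigned int hop_count_bucket(unsigned int hop_count)
    {
            unsigned int bucket = 0;

            /* Doubling boundaries: the top bucket (15) collects
             * everything >= 2^14 = 16384.  A power-of-four version
             * (hop_count >>= 2) only saturates at 4^14 = 268435456. */
            while (hop_count != 0 && bucket < MAX_COUNT_BUCKETS - 1) {
                    hop_count >>= 1;
                    bucket++;
            }

            return bucket;
    }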
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Since 4.9.0, the log messages can be confusing if a required database
directory does not exist. Explicitly check for database directories,
logging a clear error and exiting if one is missing.
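A minimal sketch of such a startup check (the helper name and message
are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    /* Fail early and clearly instead of letting later code
     * produce confusing log messages. */
    static void check_db_dir_exists(const char *dir)
    {
            struct stat st;

            if (stat(dir, &st) != 0 || !S_ISDIR(st.st_mode)) {
                    fprintf(stderr,
                            "Database directory %s is missing\n",
                            dir);
                    exit(EXIT_FAILURE);
            }
    }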
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13696
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Mon Dec 3 06:56:41 CET 2018 on sn-devel-144
Signed-off-by: Andreas Schneider <asn@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Thu Nov 8 11:03:11 CET 2018 on sn-devel-144
Explicitly background ctdbd instead.
This has the advantage of leaving stdin open. ctdbd can then be
enhanced to exit when stdin closes, allowing better cleanup in a test
environment.
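A sketch of the planned enhancement using tevent (names are
illustrative, not the actual ctdbd code):

    #include <unistd.h>
    #include <tevent.h>

    static void stdin_handler(struct tevent_context *ev,
                              struct tevent_fd *fde,
                              uint16_t flags,
                              void *private_data)
    {
            char c;

            if (read(STDIN_FILENO, &c, 1) <= 0) {
                    /* EOF: whoever started us (e.g. a test
                     * harness) has gone away, so exit. */
                    exit(0);
            }
    }

    /* During daemon setup: watch stdin for readability/EOF. */
    tevent_add_fd(ev, mem_ctx, STDIN_FILENO, TEVENT_FD_READ,
                  stdin_handler, NULL);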
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Tue Nov 6 10:30:14 CET 2018 on sn-devel-144
popt uses an int in place of a bool, so declare an extra int and make
the conversion explicit.
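The pattern looks something like this (the option name is
illustrative):

    #include <stdbool.h>
    #include <popt.h>

    /* POPT_ARG_NONE writes an int, not a bool, so parse into an
     * int and convert explicitly afterwards. */
    int interactive_int = 0;
    bool interactive;

    struct poptOption options[] = {
            POPT_AUTOHELP
            { "interactive", 'i', POPT_ARG_NONE, &interactive_int,
              0, "don't fork", NULL },
            POPT_TABLEEND
    };

    /* ... after poptGetNextOpt() has processed the arguments ... */
    interactive = (interactive_int != 0);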
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
ctdbd logs to stderr in interactive mode, not stdout. This way stdout
is always closed.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Logging the logging location to syslog can be useful on production
systems when the configuration goes unexpectedly missing. However, in
test mode this just adds noise to the logs on the test system.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Drop the use of ctdb_set_sockname() because it complicates the memory
allocation and this is the only place it is used. Just assign to the
relevant pointer.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
... instead of applying banning credits.
There have been a couple of cases where recovery repeatedly takes just
over 2 minutes to fail. Therefore, banning credits expire between
failures and a continuously problematic node is never banned,
resulting in endless recoveries. This is because it takes 2
applications of banning credits before a node is banned, which
generally involves 2 recovery failures.
The recovery helper makes up to 3 attempts to recover each database
during a single run. If a node causes 3 failures then this is really
equivalent to 3 recovery failures in the model that existed before the
recovery helper added retries. In that case the node would have been
banned after 2 failures.
So, instead of applying banning credits to the "most failing" node,
simply ban it directly from the recovery helper.
If multiple nodes are causing recovery failures then this can cause a
node to be banned more quickly than it might otherwise have been, even
pre-recovery-helper. However, 90 seconds (i.e. 3 failures) is a long
time to be in recovery, so banning earlier seems like the best
approach.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13670
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Mon Nov 5 06:52:33 CET 2018 on sn-devel-144
==25741== Syscall param write(buf) points to uninitialised byte(s)
==25741== at 0x4939291: write (write.c:27)
==25741== by 0x4868285: sys_write (sys_rw.c:68)
==25741== by 0x13915D: sock_queue_trigger (sock_io.c:316)
==25741== by 0x4DE6478: tevent_common_invoke_immediate_handler (in /usr/lib/x86_64-linux-gnu/libtevent.so.0.9.37)
==25741== by 0x4DE64A2: tevent_common_loop_immediate (in /usr/lib/x86_64-linux-gnu/libtevent.so.0.9.37)
==25741== by 0x4DEBE5A: ??? (in /usr/lib/x86_64-linux-gnu/libtevent.so.0.9.37)
==25741== by 0x4DEA2D6: ??? (in /usr/lib/x86_64-linux-gnu/libtevent.so.0.9.37)
==25741== by 0x4DE57E3: _tevent_loop_once (in /usr/lib/x86_64-linux-gnu/libtevent.so.0.9.37)
==25741== by 0x15D1BA: ctdb_event_script_args (eventscript.c:821)
==25741== by 0x13B437: ctdb_start_daemon (ctdb_daemon.c:1315)
==25741== by 0x110642: main (ctdbd.c:393)
==25741== Address 0x57888a4 is 100 bytes inside a block of size 144 alloc'd
==25741== at 0x48357BF: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==25741== by 0x4B9B7C0: talloc_named_const (in /usr/lib/x86_64-linux-gnu/libtalloc.so.2.1.14)
==25741== by 0x15CCC6: eventd_client_write (eventscript.c:430)
==25741== by 0x15CCC6: eventd_client_run (eventscript.c:556)
==25741== by 0x15CCC6: ctdb_event_script_run (eventscript.c:649)
==25741== by 0x15D198: ctdb_event_script_args (eventscript.c:812)
==25741== by 0x13B437: ctdb_start_daemon (ctdb_daemon.c:1315)
==25741== by 0x110642: main (ctdbd.c:393)
==25741==
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13659
Pair-programmed-with: Amitay Isaacs <amitay@gmail.com>
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Mon Oct 22 09:27:15 CEST 2018 on sn-devel-144
ctdbd enters a broken state if eventd goes away. A clean shutdown is
not possible because that involves running events. Restarting eventd
is possible but this might mask a serious problem and it is possible
that eventd might keep on disappearing. Just exit.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13659
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Since no records are deleted from the RB tree during step 1, there is
no need for the check. Run step 2 unconditionally.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13641
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
If a node fails to delete a record in the TRY_DELETE_RECORDS control
during vacuuming, then it is possible that other nodes may also fail
to delete that record. So, instead of deleting the record from the
RB tree on the first failure, keep track of the remote failures.
Update delete_list.remote_error and delete_list.left statistics only
once per record during the delete_record_traverse.
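A sketch of the idea, with illustrative structure and field names:

    /* Sketch only: names are illustrative. */
    struct delete_record_data {
            TDB_DATA key;
            uint32_t remote_fail_count; /* nodes that failed */
            /* ... */
    };

    /* Per-node reply handling: count the failure, keep the record. */
    if (!deleted_on_remote_node) {
            dd->remote_fail_count++;
    }

    /* In delete_record_traverse(): update statistics once per
     * record, based on the accumulated failures. */
    if (dd->remote_fail_count > 0) {
            vdata->count.delete_list.remote_error++;
            vdata->count.delete_list.left++;
            return 0; /* record stays for a later vacuuming run */
    }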
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13641
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
The 3-phase deletion of vacuumed records was introduced to overcome
the problem of record resurrection during recovery. This problem is
now handled by excluding records from recently INACTIVE nodes during
the recovery process.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13641
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
This avoids unnecessary work during recovery to pull records from nodes
that were INACTIVE just before the recovery.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13641
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
This allows the attempt to be cancelled if an election is lost and an
unlock is done before the attempt is completed.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Martin Schwenke <martins@samba.org>
Autobuild-Date(master): Tue Sep 18 02:18:30 CEST 2018 on sn-devel-144
If the recovery lock is in the process of being taken then free the
cluster mutex handle but leave the recovery lock handle in place.
This allows ctdb_recovery_lock() to fail.
Note that this isn't yet live because rec->recovery_lock_handle is
still only set at the completion of the attempt to take the lock.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This makes upcoming changes simpler.
Update to the modern debug macro while touching the relevant line.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
... not just the cluster mutex handle.
This makes the recovery lock handle long-lived and will allow the
releasing code to cancel an in-progress attempt to take the recovery
lock.
The cluster mutex handle is now allocated off the recovery lock
handle.
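A sketch of the talloc ownership described above, with illustrative
type names; freeing the recovery lock handle then tears down the
cluster mutex handle too:

    struct recovery_lock_handle *rl;
    struct cluster_mutex_handle *cm;

    /* Long-lived handle, owned by the recovery daemon context. */
    rl = talloc_zero(rec, struct recovery_lock_handle);
    if (rl == NULL) {
            return false;
    }

    /* Cluster mutex handle allocated off the recovery lock
     * handle, so talloc_free(rl) also cancels an in-progress
     * attempt to take the mutex. */
    cm = talloc_zero(rl, struct cluster_mutex_handle);
    if (cm == NULL) {
            talloc_free(rl);
            return false;
    }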
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
At the moment this is still local and is freed after the mutex is
successfully taken.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
If the master changed while trying to take the lock then fail gracefully.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
If SIGTERM is received and the tevent signal handler setup in the
recovery daemon is still enabled then the signal is handled and a
corresponding event is queued. The child never runs an event loop so
the signal is effectively ignored.
Resetting the SIGTERM handler isn't enough. A signal can arrive
before that.
Block SIGTERM before forking and then immediately unblock it in the
parent.
In the child, unblock SIGTERM after the signal handler is reset. An
explicit unblock is needed because according to sigprocmask(2) "the
signal mask is preserved across execve(2)".
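The sequence, sketched with error checking omitted:

    #include <signal.h>
    #include <unistd.h>

    sigset_t set;
    pid_t pid;

    sigemptyset(&set);
    sigaddset(&set, SIGTERM);

    /* Block SIGTERM so it cannot arrive between fork() and the
     * child resetting its handler. */
    sigprocmask(SIG_BLOCK, &set, NULL);

    pid = fork();
    if (pid != 0) {
            /* Parent: immediately unblock. */
            sigprocmask(SIG_UNBLOCK, &set, NULL);
    } else {
            /* Child: reset the handler first, then unblock
             * explicitly, since the mask would otherwise be
             * preserved across execve(2). */
            signal(SIGTERM, SIG_DFL);
            sigprocmask(SIG_UNBLOCK, &set, NULL);
            /* ... execv() the helper ... */
    }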
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
If SIGTERM is received and the tevent signal handler setup in the
recovery daemon is still enabled then the signal is handled and a
corresponding event is queued. The child never runs an event loop so
the signal is effectively ignored.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13617
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
The message is incorrect because the actual failure was loading the
config file. Instead of fixing the message, drop it because
ctdb_config_load() already logs the failure.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13589
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
There are numerous places in the code where errno can be lost causing
the wrong error to be printed by a caller. Change ctdb_sys_send_arp()
to always return a useful errno on error instead of returning -1 and
sometimes having errno set correctly.
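A sketch of the convention; the function body is illustrative, not
the real ARP code:

    #include <errno.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Return 0 on success or a meaningful errno value on failure,
     * rather than -1 with errno only sometimes set. */
    static int send_gratuitous_arp(void)
    {
            int s;

            s = socket(AF_PACKET, SOCK_RAW, 0);
            if (s == -1) {
                    return errno; /* pass the real cause up */
            }

            /* ... construct and send the ARP packet ... */

            close(s);
            return 0;
    }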
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Confirmation is now received from eventd that it is accepting
connections, so this is no longer needed.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13592
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
The current method of retrying the connection to eventd means that
messages get logged for each failure.
Instead, pass a pipe file descriptor to eventd and wait for it to
write 0 to the pipe to indicate that it is ready to accept client
connections.
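A sketch of that handshake, with error checking trimmed and the
mechanism for passing the descriptor to eventd omitted:

    #include <unistd.h>

    int fd[2];
    int status;

    pipe(fd);

    if (fork() == 0) {
            /* Child (eventd): signal readiness once the socket
             * is listening, by writing 0 to the pipe. */
            close(fd[0]);
            status = 0;
            write(fd[1], &status, sizeof(status));
            /* ... accept client connections ... */
            _exit(0);
    }

    /* Parent (ctdbd): block until eventd reports readiness,
     * instead of retrying the connection and logging each
     * failure. */
    close(fd[1]);
    if (read(fd[0], &status, sizeof(status)) != sizeof(status) ||
        status != 0) {
            /* eventd died or failed to start */
    }
    close(fd[0]);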
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13592
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
The pipe will soon be needed earlier, so initialise it earlier.
Ensure the file descriptors are closed on error.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13592
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Other errors free argv, so do it here too.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13592
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Use the "failover:disabled" option instead.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13589
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Negative options can be confusing, so switch to a positive option.
This was supposed to be done months ago but was forgotten.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13589
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This is no longer needed because inactive/disabled nodes no longer
report any available public IP addresses.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This can be done now that NoIPHostOnAllDisabled is gone and will allow
the public IP address failover logic to be simplified.
In the test code, still filter available IP addresses by node state.
This code can't currently read information about available IP
addresses, but that will change in the future.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>