Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
Autobuild-User(master): Martin Schwenke <martins@samba.org>
Autobuild-Date(master): Thu Nov 14 12:03:46 UTC 2019 on sn-devel-184
If any of the option parsing or command parsing fails, generate a usage
message.
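
A minimal sketch of that behaviour, using a hypothetical command-line layout rather than the actual ctdb tool code: any failure in option parsing or command parsing falls through to a single usage() routine.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical command-line handling, not the actual ctdb tool code. */
    static void usage(const char *prog)
    {
            fprintf(stderr, "Usage: %s [-d LEVEL] <command> ...\n", prog);
            exit(1);
    }

    int main(int argc, char **argv)
    {
            int i = 1;

            /* Option parsing: anything unrecognised falls through to usage(). */
            while (i < argc && argv[i][0] == '-') {
                    if (strcmp(argv[i], "-d") == 0 && i + 1 < argc) {
                            i += 2;                 /* debug level and its value */
                    } else {
                            usage(argv[0]);         /* unknown option */
                    }
            }

            /* Command parsing: a missing or unknown command also gives usage(). */
            if (i == argc || strcmp(argv[i], "help") != 0) {
                    usage(argv[0]);
            }

            printf("help requested\n");
            return 0;
    }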
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
The type-checking is superfluous and gets in the way of readability.
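
For context, one common form such a cleanup takes in talloc-based code is replacing a checked talloc cast with a plain cast; the sketch below is illustrative only and may not be the exact construct this commit removed. The struct and handler names are invented for the example.

    #include <talloc.h>
    #include <stdio.h>

    struct state {
            int counter;
    };

    /* Before: a checked talloc cast on the callback data. */
    static void handler_checked(void *private_data)
    {
            struct state *s = talloc_get_type_abort(private_data, struct state);
            s->counter++;
    }

    /* After: a plain cast; the caller already guarantees the type,
     * so the extra check only adds noise. */
    static void handler_plain(void *private_data)
    {
            struct state *s = (struct state *)private_data;
            s->counter++;
    }

    int main(void)
    {
            TALLOC_CTX *mem = talloc_new(NULL);
            struct state *s = talloc_zero(mem, struct state);

            handler_checked(s);
            handler_plain(s);
            printf("counter=%d\n", s->counter);
            talloc_free(mem);
            return 0;
    }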
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Thu Nov 14 03:45:44 UTC 2019 on sn-devel-184
Commit c68b6f96f26 changed the talloc hierarchy such that outgoing TCP sockets
sitting in the async connect() syscall are no longer freed via
ctdb_tcp_shutdown(); they now hang off a longer-running structure.
Free this structure as well.
If an outgoing TCP socket leaks into a long-running child process (possibly the
recovery daemon), the destination node will never see this connection being
closed. Because recent changes mean incoming connections are not accepted while
an existing incoming connection is alive, a socket leaked into the recovery
daemon means we can never again successfully connect to the node affected by
the leak. Further connection attempts will be discarded by the destination for
as long as the recovery daemon keeps this socket alive.
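
A minimal sketch of the idea, assuming a hypothetical per-node layout rather than the real ctdb_tcp structures: the shutdown path frees the longer-running structure that the in-flight connect() is parked on, so its socket is closed and cannot leak into child processes forked afterwards.

    #include <talloc.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>

    /* Hypothetical layout, not the real ctdb_tcp structures: the fd of an
     * in-flight async connect() hangs off a longer-running per-node struct. */
    struct tcp_connect_state {
            int fd;
    };

    struct tcp_node_private {
            struct tcp_connect_state *conn;     /* may be NULL */
    };

    static int connect_state_destructor(struct tcp_connect_state *c)
    {
            if (c->fd != -1) {
                    close(c->fd);   /* closing here is what stops the leak */
                    c->fd = -1;
            }
            return 0;
    }

    /* Shutdown frees the longer-running structure as well, so the socket
     * parked in connect() is closed rather than lingering. */
    static void tcp_shutdown(struct tcp_node_private *priv)
    {
            TALLOC_FREE(priv->conn);
    }

    int main(void)
    {
            struct tcp_node_private *priv =
                    talloc_zero(NULL, struct tcp_node_private);

            priv->conn = talloc_zero(priv, struct tcp_connect_state);
            priv->conn->fd = open("/dev/null", O_RDWR);
            talloc_set_destructor(priv->conn, connect_state_destructor);

            tcp_shutdown(priv);
            printf("conn=%p\n", (void *)priv->conn);   /* NULL after shutdown */
            talloc_free(priv);
            return 0;
    }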
BUG: https://bugzilla.samba.org/show_bug.cgi?id=14175
RN: Avoid communication breakdown on node reconnect
Signed-off-by: Martin Schwenke <martin@meltin.net>
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
Autobuild-User(master): Martin Schwenke <martins@samba.org>
Autobuild-Date(master): Wed Nov 13 13:31:10 UTC 2019 on sn-devel-184
We have a macro for NULLing out the pointer
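
The macro referred to here is presumably talloc's TALLOC_FREE(), which frees a pointer (if non-NULL) and sets it to NULL in one step; a small sketch of the difference, with an illustrative queue struct:

    #include <talloc.h>
    #include <stdio.h>

    struct queue {
            int fd;
    };

    struct state {
            struct queue *queue;
    };

    /* Open-coded: free and NULL the pointer by hand. */
    static void teardown_open_coded(struct state *s)
    {
            talloc_free(s->queue);
            s->queue = NULL;
    }

    /* With the macro: TALLOC_FREE() from talloc.h frees (if non-NULL)
     * and NULLs the pointer in one step. */
    static void teardown_with_macro(struct state *s)
    {
            TALLOC_FREE(s->queue);
    }

    int main(void)
    {
            struct state s = { .queue = talloc_zero(NULL, struct queue) };

            teardown_with_macro(&s);
            teardown_open_coded(&s);    /* safe: talloc_free(NULL) is a no-op */
            printf("queue=%p\n", (void *)s.queue);
            return 0;
    }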
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Fri Nov 8 01:35:11 UTC 2019 on sn-devel-184
This can fail as follows:
--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--
Running test ./tests/UNIT/tool/ctdb.process-exists.003.sh (02:26:30)
--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--
ctdb.process-exists.003 - ctdbd process with multiple connections on node 0
Setting up fake ctdbd
<10||0|
OK
<10|PID 26107 exists
|0|
OK
==================================================
Running "ctdb -d NOTICE process-exists 26107 0x1234567812345678"
PASSED
==================================================
Running "ctdb -d NOTICE process-exists 26107 0xaebbccdd12345678"
Registered SRVID 0xaebbccdd12345678
--------------------------------------------------
Output (Exit status: 1):
--------------------------------------------------
PID 26107 with SRVID 0xaebbccdd12345678 does not exist
--------------------------------------------------
Required output (Exit status: 0):
--------------------------------------------------
PID 26107 with SRVID 0xaebbccdd12345678 exists
FAILED
connection to daemon closed, exiting
==========================================================================
TEST FAILED: ./tests/UNIT/tool/ctdb.process-exists.003.sh (status 1) (duration: 0s)
==========================================================================
This happens when dummy_client has not registered the SRVID (for its
10th connection) before the 2nd simple_test.
Change the initial wait to ensure that the SRVID is registered.
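
The underlying pattern is a wait-until-condition loop before issuing the next command. A generic C sketch of that idea follows; the srvid_is_registered() check is a stub standing in for whatever observable state the test actually polls, not a ctdb API.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Stub: stands in for whatever state the test waits on. */
    static bool srvid_is_registered(void)
    {
            static int polls;
            return ++polls >= 3;    /* pretend it becomes true on the 3rd poll */
    }

    /* Poll until the condition holds or the timeout expires. */
    static bool wait_until_registered(int timeout_secs)
    {
            const struct timespec delay = { .tv_sec = 0, .tv_nsec = 100000000 };
            int i;

            for (i = 0; i < timeout_secs * 10; i++) {
                    if (srvid_is_registered()) {
                            return true;
                    }
                    nanosleep(&delay, NULL);
            }
            return false;
    }

    int main(void)
    {
            if (!wait_until_registered(10)) {
                    fprintf(stderr, "timed out waiting for SRVID registration\n");
                    return 1;
            }
            printf("SRVID registered, safe to run the next test command\n");
            return 0;
    }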
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Wed Nov 6 02:46:24 UTC 2019 on sn-devel-184
Improve quoting and indentation. Print a clear error if the cluster
goes back into recovery and doesn't come back out.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Retrying like this hides bugs. The cluster should come up first time,
every time.
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This file descriptor is owned by the incoming queue. It will be
closed when the queue is torn down.
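
A minimal sketch of the ownership rule being relied on here, using a talloc destructor; the struct below is illustrative, not ctdb's actual queue type. Once the fd is handed to the queue, tearing down the queue is what closes it, so nothing else should close or track that fd.

    #include <talloc.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Illustrative queue type: the real ctdb queue is more involved. */
    struct in_queue {
            int fd;
    };

    static int in_queue_destructor(struct in_queue *q)
    {
            if (q->fd != -1) {
                    close(q->fd);   /* the queue owns the fd and closes it */
                    q->fd = -1;
            }
            return 0;
    }

    static struct in_queue *in_queue_setup(TALLOC_CTX *mem_ctx, int fd)
    {
            struct in_queue *q = talloc_zero(mem_ctx, struct in_queue);

            if (q == NULL) {
                    return NULL;
            }
            q->fd = fd;     /* ownership of fd transfers to the queue here */
            talloc_set_destructor(q, in_queue_destructor);
            return q;
    }

    int main(void)
    {
            int fd = open("/dev/null", O_RDONLY);
            struct in_queue *q = in_queue_setup(NULL, fd);

            /* The caller no longer closes fd itself ... */
            talloc_free(q);         /* ... teardown of the queue does. */
            return 0;
    }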
BUG: https://bugzilla.samba.org/show_bug.cgi?id=14175
RN: Avoid communication breakdown on node reconnect
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
CTDB's incoming queue handling does not check whether a queue already
exists, so it can overwrite the pointer to the queue. This used to
be harmless until commit c68b6f96f26664459187ab2fbd56767fb31767e0
changed the read callback to use a parent structure as the callback
data. Instead of cleaning up an orphaned queue on disconnect, as
before, this will now free the new queue.
At first glance it doesn't seem possible that 2 incoming connections
from the same node could be processed before the intervening
disconnect. However, the incoming connections and disconnect occur on
different file descriptors. The queue can become orphaned on node A
when the following sequence occurs:
1. Node A comes up
2. Node A accepts an incoming connection from node B
3. Node B processes a timeout before noticing that the outgoing queue is writable
4. Node B tears down the outgoing connection to node A
5. Node B initiates a new connection to node A
6. Node A accepts an incoming connection from node B
Node A then processes the disconnect of the old incoming connection
from (2) but tears down the new incoming connection from (6). This
keeps happening until the originally affected node is restarted.
However, due to the number of outgoing connection attempts and
associated teardowns, this induces the same behaviour on the
corresponding incoming queue on all nodes that node A attempts to
connect to. Therefore, other nodes become affected and need to be
restarted too.
As a result, the whole cluster probably needs to be restarted to
recover from this situation.
The problem can occur any time CTDB is started on a node.
The fix is to avoid accepting new incoming connections when a queue
for incoming connections is already present. The connecting node will
simply retry establishing its outgoing connection.
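
A hedged sketch of what the fix amounts to in the accept path; the names are illustrative, not the exact ctdb_tcp code. If the per-node incoming queue already exists, the newly accepted socket is simply closed and the peer is left to retry its outgoing connection.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Illustrative per-node state; the real structure lives in ctdb_tcp. */
    struct node_in_state {
            void *in_queue;         /* NULL while no incoming queue exists */
    };

    /* Decide what to do with a freshly accepted socket.  Returns true if
     * the connection was taken over, false if it was rejected and closed. */
    static bool take_incoming(struct node_in_state *node, int fd)
    {
            if (node->in_queue != NULL) {
                    /* A queue for incoming connections is already present:
                     * do not overwrite it.  Drop this connection and let
                     * the peer retry its outgoing connection. */
                    close(fd);
                    return false;
            }
            node->in_queue = malloc(1);     /* stand-in for real queue setup */
            return true;
    }

    int main(void)
    {
            struct node_in_state node = { .in_queue = NULL };
            int sv1[2], sv2[2];

            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv1) != 0 ||
                socketpair(AF_UNIX, SOCK_STREAM, 0, sv2) != 0) {
                    return 1;
            }
            printf("first accept taken over: %d\n", take_incoming(&node, sv1[0]));
            printf("second accept taken over: %d\n", take_incoming(&node, sv2[0]));
            return 0;
    }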
BUG: https://bugzilla.samba.org/show_bug.cgi?id=14175
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This makes it consistent with the reverse case. Also, in_fd will soon
be removed.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=14175
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This changes the behaviour for some failures from exiting to simply
attempting to schedule the next run.
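
A tiny sketch of the behaviour change described; the function names are hypothetical, not the actual ctdb code. On these failures the code no longer exits but falls through to scheduling the next run.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the real operations. */
    static bool do_periodic_work(void) { return false; /* pretend it failed */ }
    static void schedule_next_run(void) { printf("next run scheduled\n"); }

    static void run_once(void)
    {
            if (!do_periodic_work()) {
                    /* Previously: log and exit.  Now: just log and fall
                     * through, so the next run is still scheduled. */
                    fprintf(stderr, "periodic work failed, will retry\n");
            }
            schedule_next_run();
    }

    int main(void)
    {
            run_once();
            return 0;
    }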
Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
All of the vacuum operations, where required, run an event loop to ensure
completion of pending operations. Once all the steps are complete,
there is no reason to process any more packets.
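
A minimal sketch of that pattern using tevent, with a hypothetical pending-operations counter; this is not the actual vacuuming code. The loop drives tevent_loop_once() only until nothing is pending, after which no further packets need processing.

    #include <tevent.h>
    #include <stdio.h>

    /* Hypothetical state: a counter of operations still in flight. */
    struct vacuum_state {
            unsigned int pending;
    };

    static void op_done(struct tevent_context *ev, struct tevent_timer *te,
                        struct timeval now, void *private_data)
    {
            struct vacuum_state *state = private_data;
            state->pending--;
    }

    int main(void)
    {
            struct tevent_context *ev = tevent_context_init(NULL);
            struct vacuum_state state = { .pending = 1 };

            /* Stand-in for a real pending operation completing later. */
            tevent_add_timer(ev, ev, tevent_timeval_current_ofs(0, 1000),
                             op_done, &state);

            /* Drive the event loop only until pending operations complete;
             * afterwards there is no reason to process any more packets. */
            while (state.pending > 0) {
                    tevent_loop_once(ev);
            }

            printf("vacuuming steps complete\n");
            talloc_free(ev);
            return 0;
    }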
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
The only reason for recoverd to attach to databases was to migrate
records to the local node as part of vacuuming. The recovery daemon no
longer takes part in database vacuuming.
The actual database recovery is handled via the recovery_helper, so the
recovery daemon should not need to attach to the databases any more.
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>
This is now implemented in the ctdb daemon using the VACUUM_FETCH control.
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
Reviewed-by: Martin Schwenke <martin@meltin.net>