update the recovery test script to start all ctdb daemons with a
recovery daemon
(This used to be ctdb commit 47794e16df285cacefc30208d892d931a6e46b96)
free it, and also we won't accidentally return from the function without
killing the event first
(This used to be ctdb commit e3d72d024ef7342a808e5c488fd646a39e5fac78)
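
A minimal sketch of the pattern described here, written against the modern
talloc/tevent APIs (ctdb at the time used its bundled events library with
equivalent calls); everything except the library calls is an illustrative
name. The timeout event is made a talloc child of the call state, so
freeing the state on every exit path also kills the timer before it can
fire later:

    #include <stdio.h>
    #include <stdbool.h>
    #include <talloc.h>
    #include <tevent.h>

    struct call_state {
        bool done;
        bool timed_out;
    };

    static void call_timeout(struct tevent_context *ev, struct tevent_timer *te,
                             struct timeval now, void *private_data)
    {
        struct call_state *state = talloc_get_type(private_data, struct call_state);
        state->timed_out = true;
        state->done = true;
    }

    static void call_reply(struct tevent_context *ev, struct tevent_timer *te,
                           struct timeval now, void *private_data)
    {
        /* stands in for the real reply handler of the control */
        struct call_state *state = talloc_get_type(private_data, struct call_state);
        state->done = true;
    }

    int main(void)
    {
        TALLOC_CTX *mem_ctx = talloc_new(NULL);
        struct tevent_context *ev = tevent_context_init(mem_ctx);
        struct call_state *state = talloc_zero(mem_ctx, struct call_state);

        /* both events are talloc children of state */
        tevent_add_timer(ev, state, tevent_timeval_current_ofs(5, 0),
                         call_timeout, state);
        /* simulate the call completing after one second, before the timeout */
        tevent_add_timer(ev, state, tevent_timeval_current_ofs(1, 0),
                         call_reply, state);

        while (!state->done) {
            tevent_loop_once(ev);
        }
        printf("timed out: %s\n", state->timed_out ? "yes" : "no");

        /* freeing the state also removes the still-pending timeout event,
         * so it can never fire after we have returned */
        talloc_free(state);
        talloc_free(mem_ctx);
        return 0;
    }

Compile with -ltalloc -ltevent; the point is simply that the event's
lifetime is tied to the state it writes to.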
trigger
this must have been a side effect of a different bug in the recoverd.c
code that has now been fixed
(This used to be ctdb commit 676446fd1083c371ad0ff72dd8c636ec8e6d1423)
the election is primitive: it elects the lowest vnn as the recovery
master (see the sketch after this entry)
two new controls to get/set the recovery master for a node
to use the recovery daemon, start one
./bin/recoverd --socket=ctdb.socket*
for each ctdb daemon
it has been briefly tested by deleting and adding nodes in a 4-node
cluster, but it needs more testing
(This used to be ctdb commit 541d1cc49d46d44042a31a8404d521412ef2fdb3)
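
A self-contained sketch of that primitive election; the struct and
function names are hypothetical, not ctdb's real ones:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct node {
        uint32_t vnn;   /* virtual node number */
        bool active;
    };

    /* the active node with the lowest vnn becomes recovery master */
    static uint32_t elect_recovery_master(const struct node *nodes, int num_nodes)
    {
        uint32_t recmaster = (uint32_t)-1;
        for (int i = 0; i < num_nodes; i++) {
            if (nodes[i].active && nodes[i].vnn < recmaster) {
                recmaster = nodes[i].vnn;
            }
        }
        return recmaster;
    }

    int main(void)
    {
        struct node cluster[] = {
            { .vnn = 0, .active = false },   /* node 0 is down */
            { .vnn = 1, .active = true },
            { .vnn = 2, .active = true },
            { .vnn = 3, .active = true },
        };
        printf("recovery master: vnn %u\n",
               elect_recovery_master(cluster, 4));   /* prints vnn 1 */
        return 0;
    }

Lowest-vnn election needs no negotiation beyond the node list every node
already has, which is presumably what makes it a reasonable first cut
despite being primitive.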
recovery or not that all active nodes are in normal mode.
If we discover that some node is still in recovery mode, it may indicate
that a previous recovery ended prematurely, and thus we should start a
new recovery
(This used to be ctdb commit c15517872e6c98c8c425a8d47d2b348ecb0620b0)
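
A compact sketch of that check with hypothetical structures; the real
daemon fetches each node's recovery mode via a control, and the
CTDB_RECOVERY_* values here mirror ctdb's constants:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CTDB_RECOVERY_NORMAL 0
    #define CTDB_RECOVERY_ACTIVE 1

    struct node_status {
        bool active;
        uint32_t recmode;
    };

    /* any active node still in recovery mode suggests a previous recovery
     * ended prematurely, so a new one should be started */
    static bool recovery_needed(const struct node_status *nodes, int n)
    {
        for (int i = 0; i < n; i++) {
            if (nodes[i].active && nodes[i].recmode != CTDB_RECOVERY_NORMAL) {
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        struct node_status nodes[] = {
            { .active = true,  .recmode = CTDB_RECOVERY_NORMAL },
            { .active = false, .recmode = CTDB_RECOVERY_ACTIVE },  /* inactive: ignored */
            { .active = true,  .recmode = CTDB_RECOVERY_ACTIVE },  /* triggers a new recovery */
        };
        printf("recovery needed: %s\n", recovery_needed(nodes, 3) ? "yes" : "no");
        return 0;
    }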
we really should kill the event in case the call completed before the
timeout, so that we can also make timed_out non-static
(This used to be ctdb commit f297eed589b1d4e188f77f195683365cf91d0e62)
since if the command times out and we return from ctdb_control, we may
have events that trigger later and overwrite data that is no longer in
our stack frame
(This used to be ctdb commit 93942543092be618c0bd8ef68b470b0789bad7ad)
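
A sketch of the fix this entry and the previous one describe, again with
illustrative names on top of talloc/tevent: the data the pending events
write to is talloc'ed instead of living in the caller's stack frame, and
timed_out sits in that state rather than being static:

    #include <stdbool.h>
    #include <stdint.h>
    #include <talloc.h>
    #include <tevent.h>

    struct control_state {
        bool done;
        bool timed_out;
        int32_t status;     /* filled in by the reply or timeout handler */
    };

    static void control_timeout(struct tevent_context *ev, struct tevent_timer *te,
                                struct timeval now, void *private_data)
    {
        struct control_state *state =
            talloc_get_type(private_data, struct control_state);
        /* safe even if the original caller has long since returned:
         * the state is heap-allocated, not a dead stack frame */
        state->timed_out = true;
        state->done = true;
    }

    /* the state outlives the caller's frame because it hangs off mem_ctx */
    struct control_state *control_send(TALLOC_CTX *mem_ctx,
                                       struct tevent_context *ev)
    {
        struct control_state *state = talloc_zero(mem_ctx, struct control_state);
        tevent_add_timer(ev, state, tevent_timeval_current_ofs(5, 0),
                         control_timeout, state);
        return state;
    }

Because every handler's target is owned by a talloc context rather than
a stack frame, a late-firing event scribbles only on memory that is
still valid.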
when ctdb_ctrl_ calls are timed out due to nodes arriving or leaving the
cluster, it crashes the recovery daemon afterwards with a SEGV but no
useful stack backtrace
(This used to be ctdb commit cd3abc7349e86555ccd87cd47a1dcc2adad2f46c)
start the daemons with explicit socket names and an explicit IP
address/port
remove all --socket= options from the ctdb_control calls since they are
not needed anymore
(This used to be ctdb commit 593a959d428f5b4a913117a9b5c8fe65a3eb950e)
of traversing the full cluster.
this makes it easier to debug recovery
update the recovery test script to reflect the new signatures of
ctdb_control
the catdb control does still segfault, however, when there are missing
nodes in the cluster, as there are toward the end of the recovery test
(This used to be ctdb commit 8de2a97c14a444f817ceb36461314f10c9601ecc)
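
For illustration, dumping only the local database amounts to a plain tdb
traverse. A standalone sketch using the real tdb API; the file name is
an example, and this is not the actual control implementation:

    #include <stdio.h>
    #include <fcntl.h>
    #include <tdb.h>

    static int print_record(struct tdb_context *tdb, TDB_DATA key, TDB_DATA data,
                            void *private_data)
    {
        int *count = private_data;
        printf("key %zu bytes, data %zu bytes\n", key.dsize, data.dsize);
        (*count)++;
        return 0;   /* non-zero would abort the traverse */
    }

    int main(void)
    {
        int count = 0;
        /* read-only traverse of this node's own tdb, no cluster traffic */
        struct tdb_context *tdb = tdb_open("test.tdb", 0, TDB_DEFAULT,
                                           O_RDONLY, 0);
        if (tdb == NULL) {
            perror("tdb_open");
            return 1;
        }
        tdb_traverse_read(tdb, print_record, &count);
        printf("dumped %d records\n", count);
        tdb_close(tdb);
        return 0;
    }

A local dump cannot hang on unreachable nodes, which is what makes it
the easier tool while a recovery is being debugged.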