use this control from the recovery daemon to ensure that the recmaster
always has a higher rsn than any other node for the records after
recovery completes
(This used to be ctdb commit 6fb6a8b981a804bfcc460c4481c51c7c647230f6)
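
A minimal sketch of the rule above, with simplified stand-in types
rather than the real ctdb structures: the recovery master picks, for
each record, an rsn strictly greater than the highest rsn any node
reported, so its copy wins any later comparison.

#include <stdint.h>
#include <stddef.h>

struct record_copy {
    uint64_t rsn;                /* record sequence number from one node */
};

/* pick an rsn strictly greater than every node's copy, so the
 * recovery master's record always wins */
uint64_t recmaster_rsn(const struct record_copy *copies, size_t ncopies)
{
    uint64_t max = 0;
    size_t i;
    for (i = 0; i < ncopies; i++) {
        if (copies[i].rsn > max)
            max = copies[i].rsn;
    }
    return max + 1;
}
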
update the recovery test script to start all ctdb daemons with a
recovery daemon
(This used to be ctdb commit 47794e16df285cacefc30208d892d931a6e46b96)
the election is primitive: it elects the lowest vnn as the recovery
master (sketched below)
add two new controls to get/set the recovery master for a node
to use the recovery daemon, start one
./bin/recoverd --socket=ctdb.socket
for each ctdb daemon
it has been briefly tested by deleting and adding nodes to a 4-node
cluster, but needs more testing
(This used to be ctdb commit 541d1cc49d46d44042a31a8404d521412ef2fdb3)
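
A minimal sketch of the primitive election described above, with
illustrative types rather than the real ctdb ones: the active node with
the lowest vnn wins.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

struct node {
    uint32_t vnn;                /* virtual node number */
    bool active;
};

/* returns the vnn of the elected recovery master, or UINT32_MAX
 * if no node is active */
uint32_t elect_recmaster(const struct node *nodes, size_t nnodes)
{
    uint32_t winner = UINT32_MAX;
    size_t i;
    for (i = 0; i < nnodes; i++) {
        if (nodes[i].active && nodes[i].vnn < winner)
            winner = nodes[i].vnn;
    }
    return winner;
}
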
this program is a client of the local ctdb daemon
every second it pulls the vnnmap and nodemap from every available node
and checks whether a recovery is required (sketched below)
a recovery is required if:
* not all nodes have an identical vnnmap and generation
* not all nodes have an identical nodemap
* there are active nodes that are NOT in the vnnmap
* there are nodes in the vnnmap that are NOT active
During recovery, the recovery tool will also make sure that all nodes
know about and have created all databases.
(This used to be ctdb commit 2f2650467bac7e8954de7c17cb34f46b0bdbcd26)
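
A minimal sketch of the per-second check, assuming each node's vnnmap
and nodemap have already been pulled into simplified local structures;
the membership checks in the last two bullets are omitted for brevity.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_NODES 64

struct vnnmap {
    uint32_t generation;
    uint32_t size;
    uint32_t map[MAX_NODES];
};

struct nodemap {
    uint32_t num;
    uint32_t active[MAX_NODES];  /* vnns of active nodes */
};

static bool vnnmaps_agree(const struct vnnmap *a, const struct vnnmap *b)
{
    return a->generation == b->generation &&
           a->size == b->size &&
           memcmp(a->map, b->map, a->size * sizeof(uint32_t)) == 0;
}

/* recovery is required if any node disagrees with node 0 */
bool recovery_required(const struct vnnmap *maps,
                       const struct nodemap *nodemaps, size_t nnodes)
{
    size_t i;
    for (i = 1; i < nnodes; i++) {
        if (!vnnmaps_agree(&maps[0], &maps[i]))
            return true;
        if (nodemaps[i].num != nodemaps[0].num ||
            memcmp(nodemaps[i].active, nodemaps[0].active,
                   nodemaps[0].num * sizeof(uint32_t)) != 0)
            return true;
    }
    return false;
}
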
don't explicitly free the vnnmap pointer in the getvnnmap control; it
is freed by the mem_ctx instead (see the talloc sketch below)
add code to the recoverd to detect when/if recovery is required
verify that the number of active nodes, the nodemap and the vnnmap are
consistent across the entire cluster, and if not, trigger a recovery
(which right now just prints "we need to do recovery" to the screen)
(This used to be ctdb commit 2b0a207a3748bdb3394dc9fd0d1c344ee1bb0bb5)
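
The sketch below illustrates the talloc ownership rule behind the
getvnnmap fix: memory allocated under a context is released when that
context is freed, so the vnnmap needs no explicit free of its own. The
talloc calls are the real Samba talloc API; the struct layout is a
simplified assumption.

#include <stdint.h>
#include <talloc.h>

struct ctdb_vnn_map {
    uint32_t generation;
    uint32_t size;
    uint32_t *map;
};

void getvnnmap_example(void)
{
    TALLOC_CTX *mem_ctx = talloc_new(NULL);

    struct ctdb_vnn_map *vnnmap = talloc(mem_ctx, struct ctdb_vnn_map);
    vnnmap->generation = 1;
    vnnmap->size = 4;
    vnnmap->map = talloc_array(vnnmap, uint32_t, vnnmap->size);

    /* ... use vnnmap ... */

    /* frees vnnmap and vnnmap->map with it; no explicit free of
     * vnnmap is needed */
    talloc_free(mem_ctx);
}
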
for the time being, remove all the [de]marshalling and just pass a
structure around instead
(This used to be ctdb commit b1169555ab7015976c0135ff51121cc238f5887c)
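
An illustrative sketch of the simplification, with hypothetical type
names rather than the real ctdb ones: the consumer reads fields
straight out of the structure, with no pack/unpack step in between.

#include <stdint.h>

struct recovery_info {
    uint32_t generation;
    uint32_t num_nodes;
};

/* consumer reads fields directly from the structure; the previous
 * pack-to-buffer / unpack-from-buffer round trip is gone */
uint32_t recovery_generation(const struct recovery_info *info)
{
    return info->generation;
}
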
this is not optimized at all and copies/merges all records between
databases instead of only those records for which a certain node is
lmaster. (step 7 should later be enhanced to: a) delete the database,
b) push only those records for which the node is lmaster)
(This used to be ctdb commit 509d2c71169e96a8610f9db91293dc7a73c2cc10)
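
A sketch of the suggested enhancement, using hypothetical helpers:
lmaster_of() derives the lmaster from the key hash and the vnnmap (the
real ctdb computation may differ), and only matching records are
pushed to the target node.

#include <stdint.h>
#include <stddef.h>

struct record {
    uint32_t key_hash;
    /* ... key and data ... */
};

/* assumed derivation: lmaster comes from the key hash and the vnnmap */
static uint32_t lmaster_of(uint32_t key_hash,
                           const uint32_t *vnnmap, uint32_t vnnmap_size)
{
    return vnnmap[key_hash % vnnmap_size];
}

/* push to target_vnn only the records it is lmaster for */
void push_lmaster_records(const struct record *records, size_t nrecords,
                          const uint32_t *vnnmap, uint32_t vnnmap_size,
                          uint32_t target_vnn,
                          void (*push)(uint32_t vnn, const struct record *r))
{
    size_t i;
    for (i = 0; i < nrecords; i++) {
        if (lmaster_of(records[i].key_hash, vnnmap, vnnmap_size) ==
            target_vnn)
            push(target_vnn, &records[i]);
    }
}
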
attach to databases after the protocol has started. The daemon
broadcasts information on new databases to the other daemons.
This also eliminates the need for the client to know about the hash
from db name to db_id.
(This used to be ctdb commit 3bad91a9d987d4c09fe3322eac23c2733660ad08)
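
An illustrative sketch of the name-to-id mapping the client no longer
needs to perform; FNV-1a is an assumed stand-in here, not necessarily
the hash ctdb actually uses.

#include <stdint.h>

/* FNV-1a, a common 32-bit string hash (illustrative choice) */
static uint32_t db_id_from_name(const char *name)
{
    uint32_t h = 2166136261u;
    while (*name) {
        h ^= (uint8_t)*name++;
        h *= 16777619u;
    }
    return h;
}
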
for 2 days.
The main bug was in smbd, but there was a secondary (and more subtle)
bug in ctdb that the bug in smbd exposed. When we are sent a dmaster
reply, we have to correctly update the dmaster in the recipient even
if the original request has timed out, otherwise ctdbd can get into a
loop fighting over who will handle a key (see the sketch below).
This patch also cleans up the packet allocation, and makes ctdbd
become a real daemon.
(This used to be ctdb commit 59405e59ef522b97d8e20e4b14310a217141ac7c)
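
A minimal sketch of the fix, with simplified stand-in structures: the
dmaster update happens unconditionally, whether or not the original
call state still exists.

#include <stdint.h>
#include <stddef.h>

struct record_header {
    uint64_t rsn;
    uint32_t dmaster;            /* vnn of the current data master */
};

struct call_state;               /* pending call; NULL if it timed out */

void handle_dmaster_reply(struct record_header *header,
                          uint32_t new_dmaster,
                          struct call_state *pending)
{
    /* the key point of the fix: update ownership unconditionally,
     * not only while the original request is still pending */
    header->dmaster = new_dmaster;

    if (pending == NULL) {
        /* the request timed out; there is no call to complete, but
         * the ownership update above must still have happened */
        return;
    }
    /* ... complete the pending call ... */
}
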
while recovery is in progress the daemon will discard all CTDB_REQ_CALL
packets and rely on clients retransmitting them
add new controls to get/set the recovery mode
(This used to be ctdb commit 41458a61577885ac49150f830e92e93e634c5411)
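
A minimal sketch of the dispatch check, with illustrative enum and
function names rather than the real ctdb ones.

#include <stdbool.h>

enum recovery_mode { RECOVERY_NORMAL, RECOVERY_ACTIVE };
enum packet_op { OP_REQ_CALL, OP_REQ_CONTROL /* , ... */ };

struct packet {
    enum packet_op op;
    /* ... */
};

/* returns true if the packet should be processed, false if dropped */
bool accept_packet(enum recovery_mode mode, const struct packet *pkt)
{
    if (mode == RECOVERY_ACTIVE && pkt->op == OP_REQ_CALL) {
        /* dropped: the client is expected to retransmit the call
         * once recovery has completed */
        return false;
    }
    return true;
}
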
it does not yet work, since ctdb_control can currently only be called
from client context, and the pull is implemented as the target ctdb
node itself using a get_keys to pull the keys from the source node;
thus the ctdb daemon needs to be able to send a ctdb_control to a
remote node
(This used to be ctdb commit a55c7c64b4ff87f54b90649c9f469b1ff36dc9ea)
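
A sketch of the pull under the description above; get_keys and store
are hypothetical callbacks standing in for the real transfer, which,
as the commit notes, needs daemon-to-daemon ctdb_control support to
actually work.

#include <stdint.h>
#include <stddef.h>

struct key_list {
    size_t count;
    /* ... keys and data ... */
};

/* the target node pulls every key of a database from a source node */
void pull_db_from_node(uint32_t src_vnn, uint32_t db_id,
                       struct key_list *(*get_keys)(uint32_t vnn,
                                                    uint32_t db_id),
                       void (*store)(uint32_t db_id,
                                     const struct key_list *keys,
                                     size_t idx))
{
    struct key_list *keys = get_keys(src_vnn, db_id);
    size_t i;
    for (i = 0; i < keys->count; i++)
        store(db_id, keys, i);
}
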