Fix a race where the ctdb tool and the recovery daemon both try to push flag changes across the cluster at once.
(This used to be ctdb commit a9a1156ea4e10483a4bf4265b8e9203f0af033aa)
Fix a slow memory leak in the recovery daemon if a recovery is triggered during the public ip reassignment process.
(This used to be ctdb commit 0aca4daf908b76d6013ff3dfad41beb9114fc1a3)
We currently only monitor that the daemons are running by kill(pid, 0)
and by verifying that the domain socket between them is ok.
This is not sufficient, since we can have a situation where the recovery
daemon is hung.
This new code monitors that the recovery daemon is operating:
if the recovery daemon hangs, we log this and shut down the main daemon.
(This used to be ctdb commit cd69d292292eaab3aac0e9d9fc57cb621597c63c)
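A minimal sketch of this kind of liveness check, assuming a hypothetical heartbeat timestamp that the recovery daemon refreshes; this is not ctdb's actual code:

    /* kill(pid, 0) only proves the process exists; a heartbeat
     * timestamp is also needed to detect a hung daemon. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define HEARTBEAT_TIMEOUT 60 /* seconds without activity => hung */

    static time_t last_heartbeat; /* refreshed by recovery daemon replies */

    static void check_recovery_daemon(pid_t recoverd_pid)
    {
        /* Sends no signal; only checks that the process exists. */
        if (kill(recoverd_pid, 0) != 0) {
            fprintf(stderr, "recovery daemon is gone, shutting down\n");
            exit(1);
        }
        /* The process exists but may be hung: demand recent activity. */
        if (time(NULL) - last_heartbeat > HEARTBEAT_TIMEOUT) {
            fprintf(stderr, "recovery daemon is hung, shutting down\n");
            exit(1);
        }
    }

    int main(void)
    {
        last_heartbeat = time(NULL);
        check_recovery_daemon(getpid()); /* our own pid: passes */
        return 0;
    }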
This file creates additional locking stress on the backend filesystem and we may not need it anyway.
(This used to be ctdb commit 84236e03e40bcf46fa634d106903277c149a734f)
change one element from private to private_data
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
(This used to be ctdb commit 0de79352c9b36c118e36905f08ebbe38ecbb957e)
but the rest of the nodes are not frozen. At this stage an election is called by the new node.
Since the nodes are not frozen in this case, we cannot modify the recmaster
on those nodes, so it is expected that this control would fail.
Add a boolean to send_election_request() so that it does not
try to set the recmaster locally when we are in an election phase
while not frozen.
(This used to be ctdb commit c5035657606283d2e35bea40992505e84ca8e7be)
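A hedged sketch of the shape of that change; the real function takes ctdb-specific types, and the parameter name here is hypothetical:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static void broadcast_election_request(uint32_t pnn)
    {
        printf("broadcasting election request from node %u\n", pnn);
    }

    static void set_recmaster_locally(uint32_t pnn)
    {
        printf("recmaster is now node %u\n", pnn);
    }

    /* update_recmaster is false during an election phase with unfrozen
     * nodes, where setting the recmaster is expected to fail. */
    static void send_election_request(uint32_t pnn, bool update_recmaster)
    {
        broadcast_election_request(pnn);
        if (update_recmaster) {
            set_recmaster_locally(pnn);
        }
    }

    int main(void)
    {
        send_election_request(0, false); /* election while not frozen */
        return 0;
    }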
This reverts commit bfba5c7249eff8a10a43b53c1b89dd44b625fd10.
Revert the waitpid changes. We need to waitpid() for some children, so the
approach should be refactored completely.
(This used to be ctdb commit 702ced6c2fe569c01fe96c60d0f35a7e61506a96)
so we should not call it from the main daemon.
1. Set SIGCHLD to SIG_DFL to make sure we ignore this signal.
2. Get rid of all waitpid() calls.
3. Change reporting of the event script status code from _exit()/waitpid() to writing/reading one byte across the pipe (sketched below).
(This used to be ctdb commit bfba5c7249eff8a10a43b53c1b89dd44b625fd10)
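A self-contained sketch of item 3, assuming the simplest possible one-byte pipe protocol; the real ctdb code is more involved:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char status;

        signal(SIGCHLD, SIG_DFL); /* default action discards SIGCHLD */

        if (pipe(fd) != 0) {
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {            /* child: runs the event script */
            close(fd[0]);
            status = 0;            /* 0 == success */
            (void)write(fd[1], &status, 1);
            _exit(0);              /* exit code is no longer inspected */
        }

        close(fd[1]);              /* parent: read status, no waitpid() */
        if (read(fd[0], &status, 1) == 1) {
            printf("event script status: %d\n", status);
        }
        close(fd[0]);
        return 0;
    }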
since this is already done implicitly when we change the recovery mode
back to normal
(This used to be ctdb commit af1f6cf7561fe9cb5c97f940d4458c83bdd8e2a0)
Make ctdb uptime print how long the recovery took.
In the recovery daemon, when we check that the public ip address
allocation on the local node is correct (we have the ips we should have
and we don't have any we shouldn't have), use ctdb uptime and check the
recovery start/stop times, so that we don't check for ip allocation
inconsistencies during a recovery, while the ip address allocation is in flux.
(This used to be ctdb commit f86551580349b7f662f9a07e4eb0c1189e38e429)
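A sketch of that guard, assuming hypothetical field names for the recovery start/stop times that ctdb uptime reports:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    struct uptime_info {
        time_t recovery_started;
        time_t recovery_finished;
    };

    static bool ip_allocation_in_flux(const struct uptime_info *u, time_t now)
    {
        /* A recovery that started but has not finished yet. */
        if (u->recovery_started > u->recovery_finished) {
            return true;
        }
        /* Recently finished: allocations may still be settling. */
        if (now - u->recovery_finished < 10) {
            return true;
        }
        return false;
    }

    int main(void)
    {
        struct uptime_info u = { .recovery_started = time(NULL),
                                 .recovery_finished = 0 };
        if (ip_allocation_in_flux(&u, time(NULL))) {
            printf("recovery in flux, skipping ip allocation check\n");
        }
        return 0;
    }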
This callback is called for every node where the control failed (or timed out).
When we issue the start recovery control from the recovery master,
set any node that fails as a culprit, so it will eventually be banned.
(This used to be ctdb commit 72f89bac13cbe8c3ca3e7a942469cd2ff25abba2)
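A minimal sketch of such a failure callback; the counters and the ban threshold are hypothetical:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_CULPRIT_COUNT 10

    static unsigned culprit_count[64]; /* per-node failure counters */

    /* Called once for every node where the control failed or timed out. */
    static void start_recovery_fail_callback(uint32_t pnn)
    {
        culprit_count[pnn]++;
        if (culprit_count[pnn] > MAX_CULPRIT_COUNT) {
            printf("node %u repeatedly failed, banning it\n", pnn);
        }
    }

    int main(void)
    {
        for (int i = 0; i < 11; i++) {
            start_recovery_fail_callback(3); /* node 3 keeps failing */
        }
        return 0;
    }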
Change ctdb_attach() so that we can pass TDB_NOSYNC when we attach to
a persistent database and want fast unsafe writes instead of
slow but safe tdb_transaction writes.
Enhance the ctdb_persistent test suite to test both safe and unsafe writes.
(This used to be ctdb commit 4948574f5a290434f3edd0c052cf13f3645deec4)
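The trade-off, shown with the plain tdb API rather than ctdb_attach() itself; TDB_NOSYNC is a real tdb flag that skips syncing to disk on commit:

    #include <fcntl.h>
    #include <tdb.h>

    int main(void)
    {
        /* Fast, unsafe: no fsync when transactions commit, so a crash
         * can lose recent writes. */
        struct tdb_context *fast = tdb_open("persistent.tdb", 0,
                                            TDB_NOSYNC,
                                            O_RDWR | O_CREAT, 0600);
        if (fast == NULL) {
            return 1;
        }
        tdb_close(fast);
        return 0;
    }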
Since this event won't run unless the recovery mode is normal, and we
cannot know what the recovery mode will be in the future on a remote node,
it is pointless to try to check whether these commands (which will execute
at some future time on some other node) worked or not.
In particular, "failure to successfully run the eventscript" would then trigger a full new recovery, which is disruptive and expensive.
(This used to be ctdb commit 2c292039a0139dcf5bb2bd964eb6f8902d094c50)
This attempts to fix the problem of ctdb event scripts blocking due to
attempted access to the ctdb databases during recovery. The changes are:
- now only the 'shutdown' and 'startrecovery' events can be called
with the databases locked in recovery (see the sketch after this commit
message). The event scripts must ensure that for these two events no
database access is attempted
- the recovered, takeip and releaseip events could previously be called
inside a recovery. The code now ensures that this doesn't happen, delaying
the events until after recovery has finished
- the 50.samba event script now avoids using testparm unless it is really
needed
This needs extensive testing.
(This used to be ctdb commit e3cdb8f2be6a44ec877efcd75c7297edb008a80b)
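A sketch of the event gating described above, with hypothetical names and a simplified deferral (a real implementation would queue the event for later):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool in_recovery = true;

    static bool event_allowed_during_recovery(const char *event)
    {
        return strcmp(event, "shutdown") == 0 ||
               strcmp(event, "startrecovery") == 0;
    }

    static void run_event(const char *event)
    {
        if (in_recovery && !event_allowed_during_recovery(event)) {
            printf("deferring '%s' until recovery has finished\n", event);
            return;
        }
        printf("running event script '%s'\n", event);
    }

    int main(void)
    {
        run_event("takeip");        /* deferred */
        run_event("startrecovery"); /* allowed  */
        return 0;
    }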
If we shut down the transport and CTDB later decides to send a command out
for queueing, the call to ctdb->methods->allocate_pkt() will SEGV.
This could trigger, for example, when we are in the process of shutting down
CTDBD and have already shut down the transport but are still waiting for the
"shutdown" event scripts to finish.
If the event scripts take much longer to execute for some reason, this
race condition becomes much more probable.
Decorate all dereferencing of ctdb->methods-> with a check that ctdb->methods is non-NULL.
(This used to be ctdb commit c4c2c53918da6fb566d6e9cbd6b02e61ae2921e7)
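The pattern, as a sketch with stand-in types for the real ctdb structures:

    #include <stddef.h>
    #include <stdio.h>

    struct transport_methods {
        void *(*allocate_pkt)(size_t len);
    };

    struct ctdb_context_sketch {
        struct transport_methods *methods; /* NULL after transport shutdown */
    };

    static void *queue_packet(struct ctdb_context_sketch *ctdb, size_t len)
    {
        /* Guard every dereference: the transport may already be gone. */
        if (ctdb->methods == NULL) {
            fprintf(stderr, "transport is down, dropping packet\n");
            return NULL;
        }
        return ctdb->methods->allocate_pkt(len);
    }

    int main(void)
    {
        struct ctdb_context_sketch ctdb = { .methods = NULL };
        queue_packet(&ctdb, 128); /* safe: no SEGV after shutdown */
        return 0;
    }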
Remove a bogus check inside the recovery daemon that ONLY redistributed public addresses IF the local node had/served public addresses.
This was a valid optimization long ago, when we enforced that all nodes must use the same public addresses file, but it is invalid today, when we can have different public addresses configs on different nodes and even have some nodes that do not use public addresses at all.
(This used to be ctdb commit 5833e6b99d9afaf35dc8354df8676b9115418b23)
We should always use type-safe talloc functions when possible. In this case we were allocating bytes instead of uint32_t.
(This used to be ctdb commit cb14ee57dd0a589242da1ac2830bb7939df460a5)
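The difference in question, using the real talloc API; talloc_size() allocates raw bytes, while talloc_array() sizes the allocation from the element type:

    #include <stdint.h>
    #include <talloc.h>

    int main(void)
    {
        TALLOC_CTX *ctx = talloc_new(NULL);

        /* Bug pattern: 32 bytes, not 32 uint32_t elements. */
        uint32_t *wrong = talloc_size(ctx, 32);

        /* Type-safe: room for 32 uint32_t, i.e. 128 bytes. */
        uint32_t *right = talloc_array(ctx, uint32_t, 32);

        (void)wrong;
        (void)right;
        talloc_free(ctx);
        return 0;
    }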
This allows us to use the async framework also for controls that return
outdata.
Add a "capabilities" field to the ctdb_node structure. This field is
only initialized and kept valid inside the recovery daemon context, not
inside the main ctdb daemon.
Change the GET_CAPABILITIES control to return the capabilities in outdata instead of in the res return variable.
When performing a recovery inside the recovery daemon, read the capabilities from all connected nodes and update the ctdb->nodes list of nodes.
When building the new vnnmap after the database rebuild in recovery, do not include in the new vnnmap any nodes which lack the LMASTER capability,
unless there is no available connected node with the LMASTER capability at all, in which case we let the local node (recmaster) take on the lmaster role temporarily (i.e. become a member of the vnnmap list).
(This used to be ctdb commit 0f1883c69c689b28b0c04148774840b2c4081df6)
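A sketch of that vnnmap construction rule; the structures and the capability flag value are hypothetical stand-ins:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CAP_LMASTER 0x1

    struct node {
        uint32_t pnn;
        bool connected;
        uint32_t capabilities; /* valid only inside the recovery daemon */
    };

    static uint32_t build_vnnmap(const struct node *nodes, int n,
                                 uint32_t recmaster_pnn, uint32_t *map)
    {
        uint32_t count = 0;
        for (int i = 0; i < n; i++) {
            if (nodes[i].connected &&
                (nodes[i].capabilities & CAP_LMASTER)) {
                map[count++] = nodes[i].pnn;
            }
        }
        if (count == 0) {
            /* No lmaster-capable node: recmaster takes the role for now. */
            map[count++] = recmaster_pnn;
        }
        return count;
    }

    int main(void)
    {
        struct node nodes[] = {
            { 0, true,  CAP_LMASTER },
            { 1, true,  0           },  /* lacks LMASTER: excluded */
            { 2, false, CAP_LMASTER },  /* disconnected: excluded  */
        };
        uint32_t map[3];
        printf("vnnmap has %u node(s)\n", build_vnnmap(nodes, 3, 0, map));
        return 0;
    }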
Handle failure to get/hold the reclock pnn file better: just
treat it as a transient backend filesystem error and try again later,
instead of shutting down the recovery daemon.
When we have lost the pnn file and we are recmaster,
release the recmaster role so that someone else can become recmaster instead.
(This used to be ctdb commit e513277fb09b951427be8351d04c877e0a15359d)
and a ctdb command to pull the talloc memory map from a recovery daemon:
ctdb rddumpmemory
(This used to be ctdb commit d23950be7406cf288f48b660c0f57a9b8d7bdd05)
of connected nodes.
num_active only contains the number of active nodes and would thus not count
banned nodes.
(This used to be ctdb commit 06d3ce470766ef0b60d68ccd84de5437146cc147)
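A sketch of counting connected (rather than active) nodes; the flag values are hypothetical:

    #include <stdint.h>
    #include <stdio.h>

    #define NODE_FLAGS_DISCONNECTED 0x1
    #define NODE_FLAGS_BANNED       0x2

    static int count_connected(const uint32_t *flags, int n)
    {
        int count = 0;
        for (int i = 0; i < n; i++) {
            if (!(flags[i] & NODE_FLAGS_DISCONNECTED)) {
                count++; /* banned nodes still count as connected */
            }
        }
        return count;
    }

    int main(void)
    {
        uint32_t flags[] = { 0, NODE_FLAGS_BANNED, NODE_FLAGS_DISCONNECTED };
        printf("connected nodes: %d\n", count_connected(flags, 3));
        return 0;
    }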
Once every such interval:
* the recovery master on each node will update the "connected" count in the
reclock count file (ctdb getreclock)
* if the node thinks it is a recovery master but detects another node
that is DISCONNECTED yet still holds a lock on the reclock count file,
this may mean that we have a split cluster.
If that other node, which is DISCONNECTED but still holds the lock on the
reclock pnn count file, is MORE connected than the local node,
yield the recmaster role and let the other half of the cluster take over.
This adds a second, last-chance mechanism to detect split clusters.
If the cluster is split but GPFS is not yet split, this mechanism makes
the largest half of the cluster become the active half.
(This used to be ctdb commit 07af425f444531942cce8abff112c1524228d287)
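A sketch of that last-chance rule; the names and the bookkeeping of the two halves are hypothetical:

    #include <stdbool.h>
    #include <stdio.h>

    static void yield_recmaster(void)
    {
        printf("other half is larger, yielding recmaster role\n");
    }

    static void check_split_cluster(bool we_are_recmaster,
                                    bool other_is_disconnected,
                                    bool other_holds_reclock,
                                    int our_connected,
                                    int their_connected)
    {
        if (!we_are_recmaster) {
            return;
        }
        /* A disconnected node holding the lock suggests a split cluster:
         * defer to whichever half sees more connected nodes. */
        if (other_is_disconnected && other_holds_reclock &&
            their_connected > our_connected) {
            yield_recmaster();
        }
    }

    int main(void)
    {
        check_split_cluster(true, true, true, 2, 5);
        return 0;
    }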