The previous transaction code was fast as long as you didn't do too
many writes within the transaction. The new code is a bit slower for
very small numbers of writes, but scales linearly as the number of
writes increases. The old code scaled as O(N^2) with the number of
writes, making it unusable for large N.
After testing, this needs to be merged into the Samba version of tdb,
along with many of the other recent tdb changes in the ctdb tree.
(This used to be ctdb commit bef8fe3d3ba80c7c660972c5357407f5278f7e26)
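To make the scaling difference concrete, here is a minimal sketch with hypothetical structures (not the actual tdb implementation): the old approach scans a flat list of written blocks on every write, so N writes cost O(N^2) in total, while the new approach marks a per-block table in O(1) per write.

```c
/* Hypothetical structures for illustration; not the actual tdb code. */
#include <stdlib.h>

/* Old style: a flat list of written blocks.  Every new write scans the
 * whole list first, so N writes cost O(N^2) in total. */
struct write_list {
    size_t *blocks;
    size_t count;
};

static int old_record_write(struct write_list *wl, size_t blk)
{
    for (size_t i = 0; i < wl->count; i++) {    /* O(count) scan per write */
        if (wl->blocks[i] == blk) {
            return 0;   /* block already recorded */
        }
    }
    size_t *tmp = realloc(wl->blocks, (wl->count + 1) * sizeof(*tmp));
    if (tmp == NULL) {
        return -1;
    }
    tmp[wl->count++] = blk;
    wl->blocks = tmp;
    return 0;
}

/* New style: a direct-mapped dirty-block table.  Each write is O(1),
 * so N writes scale linearly. */
struct block_map {
    unsigned char *dirty;   /* one flag per block */
    size_t nblocks;
};

static int new_record_write(struct block_map *bm, size_t blk)
{
    if (blk >= bm->nblocks) {
        return -1;  /* the real code would grow the table */
    }
    bm->dirty[blk] = 1; /* constant time per write */
    return 0;
}
```

The bookkeeping for the table is the constant overhead that makes the new code a bit slower for very small transactions.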
ctdb_recoverd.c
Always handle banning/unbanning locally on the node that is being
banned/unbanned instead of on the recovery master.
This means that if a ban request comes in to the recovery master for a
remote node, we pass the request on to the remote node instead of
setting up the ban and ban timeouts locally.
ctdb.c
send ban/unban requests to the node being banned/unbanned instead of to
the recmaster
(This used to be ctdb commit 880dd9f5fd0b91e450da93e195cc5c62cb1dcd6e)
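A hedged sketch of the new routing rule; the types and helpers below (ban_info, send_to_node, apply_ban_locally) are illustrative stand-ins, not the real ctdb API:

```c
/* Illustrative types and helpers; not the real ctdb API. */
#include <stdio.h>
#include <stdint.h>

struct ban_info {
    uint32_t pnn;       /* node to ban/unban */
    uint32_t ban_time;  /* 0 means unban */
};

struct node_ctx {
    uint32_t pnn;       /* our own node number */
};

/* Stubs standing in for the real transport and ban machinery. */
static int send_to_node(uint32_t pnn, struct ban_info *ban)
{
    printf("forwarding ban request to node %u\n", (unsigned)pnn);
    return 0;
}

static int apply_ban_locally(struct node_ctx *node, struct ban_info *ban)
{
    printf("node %u: applying ban for %u seconds\n",
           (unsigned)node->pnn, (unsigned)ban->ban_time);
    return 0;
}

/* The new routing rule: the node named in the request handles it. */
static int handle_ban_request(struct node_ctx *node, struct ban_info *ban)
{
    if (ban->pnn != node->pnn) {
        /* We are not the target (e.g. we are the recmaster):
         * pass the request on to the node being banned. */
        return send_to_node(ban->pnn, ban);
    }
    /* We are the target: set up the ban and its timeout locally. */
    return apply_ban_locally(node, ban);
}
```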
Keep the ban_state in the banned_nodes array and not in the rec structure, so that ban_state is destroyed when the banned_nodes array gets destroyed (and so that when this struct is destroyed, any pending ctdb_ban_timeout events are destroyed with it).
Otherwise we may end up with multiple ban_timeout timed events running in parallel, since we destroy/recreate the banned_nodes structure during an election but never destroy/recreate the rec structure.
(This used to be ctdb commit fbd663d56a2a4421a5c0e541962c87e2e9c7cd82)
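The lifetime trick relies on talloc's hierarchical destruction: freeing a parent context frees all of its children. A minimal standalone sketch, assuming libtalloc and simplified structures:

```c
/* Minimal standalone sketch; build with: cc demo.c -ltalloc */
#include <talloc.h>

struct ban_state {
    int pnn;
};

int main(void)
{
    void *rec = talloc_new(NULL);          /* long-lived rec structure */
    void *banned_nodes = talloc_new(rec);  /* rebuilt on each election */

    /* Parent ban_state (and, in ctdb, its ctdb_ban_timeout event) to
     * the banned_nodes array, NOT to rec. */
    struct ban_state *ban = talloc_zero(banned_nodes, struct ban_state);
    ban->pnn = 3;

    /* Election: freeing banned_nodes also frees ban, so no stale
     * ban_timeout event can keep running in parallel with a new one. */
    talloc_free(banned_nodes);

    talloc_free(rec);
    return 0;
}
```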
Add functions to enable/disable the monitoring flag, and change the calling of the recovered/takeip/releaseip event scripts to use these enable/disable functions instead of stopping/starting monitoring.
When we disable monitoring we want all events to keep running, in particular the events that monitor for dead nodes; we only want to suppress running the "monitor" event scripts.
(This used to be ctdb commit a006dcc4f75aba950dd701ad7d1a84e89df285e8)
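A minimal sketch of what such enable/disable functions could look like; struct ctx and the function names here are hypothetical. The point is that disabling only suppresses the "monitor" event scripts while other events keep running:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical context; in ctdb the flag lives in the ctdb structure. */
struct ctx {
    bool monitor_scripts_enabled;
};

static void enable_monitor_scripts(struct ctx *c)
{
    c->monitor_scripts_enabled = true;
}

static void disable_monitor_scripts(struct ctx *c)
{
    c->monitor_scripts_enabled = false;
}

/* Timed events such as the dead-node check keep running regardless;
 * only the "monitor" event scripts consult the flag. */
static void monitor_event(struct ctx *c)
{
    if (!c->monitor_scripts_enabled) {
        printf("suppressing monitor event scripts\n");
        return;
    }
    printf("running monitor event scripts\n");
}
```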
monitoring should always be enabled
(though a node may want to temporarily disable running the "monitor"
event scripts, it can do so internally without the need for this
control)
(This used to be ctdb commit e3a33618026823e6af845fd8513cddb08e6b5584)
Check whether monitoring is enabled or not before creating new events,
and log why the event is not set up otherwise.
(This used to be ctdb commit 2f352b2606c04a65ce461fc2e99e6d6251ac4f20)
Instead of using the control, call ctdb_start/stop_monitoring().

ctdb_stop_monitoring(): don't allocate a new monitoring context; leave it NULL. Also set the monitoring_mode in this function so that ctdb_stop/start_monitoring() and ->monitoring_mode are kept in sync. Add a debug message to log that we have stopped monitoring.

ctdb_start_monitoring(): check whether monitoring is already active and make the function idempotent. Create the monitoring context when monitoring is started. Update ->monitoring_mode once monitoring has been started. Add a debug message to log that we have started monitoring.

When we temporarily stop monitoring while running an event script, restart monitoring after the event script wrapper returns instead of in the event script callback.

Let monitoring_mode start out as DISABLED and let it be enabled once we call ctdb_start_monitoring().

Don't check for MONITORING_DISABLED in check_fore_dead_nodes(); if monitoring is disabled, this event handler will not be called.
(This used to be ctdb commit 3a93ae8bdcffb1adbd6243844f3058fc742f76aa)
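Putting the above together, a hedged sketch of the start/stop pairing; the names are illustrative, not the exact ctdb code:

```c
#include <stdio.h>
#include <stdlib.h>

enum monitor_mode {
    MONITORING_DISABLED,    /* the initial state */
    MONITORING_ACTIVE,
};

struct ctx {
    void *monitor_context;              /* NULL while stopped */
    enum monitor_mode monitoring_mode;
};

static void stop_monitoring(struct ctx *c)
{
    free(c->monitor_context);
    c->monitor_context = NULL;  /* don't allocate a new context here */
    /* set the mode here too, so mode and context stay in sync */
    c->monitoring_mode = MONITORING_DISABLED;
    printf("monitoring stopped\n");     /* debug message */
}

static void start_monitoring(struct ctx *c)
{
    if (c->monitoring_mode == MONITORING_ACTIVE) {
        return;                 /* already active: idempotent */
    }
    c->monitor_context = malloc(1);     /* created only on start */
    c->monitoring_mode = MONITORING_ACTIVE;
    printf("monitoring started\n");     /* debug message */
}
```

Keeping both the mode transition and the context lifetime inside these two functions is what prevents ->monitoring_mode and the monitoring context from drifting out of sync.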
When we are the recmaster and we update the local flags for all the nodes, if one of the nodes fails to respond and give us its flags, mark that node as a "culprit".

As one of the first things to do in the monitor_cluster loop, check whether the current culprit has caused too many (20) failures and, if so, ban that node.

This is for the situation where a remote node may still be CONNECTED but fails to respond to the getnodemap control, causing the recovery master to loop in monitor_cluster, aborting the monitoring when the node fails to respond but before anything triggers a call to do_recovery(). If one or more of the databases or nodes are frozen at this stage, smbd could be blocked for a long time.
(This used to be ctdb commit 83b0261f2cb453195b86f547d360400103a8b795)
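A minimal sketch of the culprit bookkeeping described above, with hypothetical names; MAX_CULPRIT_FAILURES mirrors the 20-failure threshold:

```c
#include <stdio.h>
#include <stdint.h>

#define MAX_CULPRIT_FAILURES 20 /* mirrors the threshold above */

struct recoverd {
    uint32_t culprit_pnn;
    unsigned int culprit_count;
};

/* Called when a node fails to return its flags. */
static void set_culprit(struct recoverd *rec, uint32_t pnn)
{
    if (rec->culprit_pnn != pnn) {
        rec->culprit_pnn = pnn;     /* new culprit: restart the count */
        rec->culprit_count = 0;
    }
    rec->culprit_count++;
}

/* Called early in each monitor_cluster iteration. */
static void check_culprit(struct recoverd *rec)
{
    if (rec->culprit_count >= MAX_CULPRIT_FAILURES) {
        printf("banning node %u after %u failures\n",
               (unsigned)rec->culprit_pnn, rec->culprit_count);
        rec->culprit_count = 0;
    }
}
```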