This ensures that when shutting down CTDB, all the timed events
associated with monitoring recoverd are destroyed and recoverd
is not restarted.
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
(This used to be ctdb commit 7393e2b290f9879ff72d5c5a9ce933034129f0e8)
When CTDB is shutting down, the recovery daemon is stopped, but the
event that checks if the recovery daemon is still alive is not destroyed.
So the recovery master is restarted during shutdown if the CTDB daemon
takes a long time to shut down.
There are two events that check whether the recovery daemon is working:
1. ctdb_check_recd() - checks every 30 seconds whether the recovery
daemon process exists.
2. ctdb_recd_ping_timeout() - triggered when the recovery daemon
fails to ping the CTDB daemon.
Both events are periodic and need to be destroyed when shutting down.
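The pattern behind the fix can be illustrated with a minimal tevent sketch
(hypothetical names, not the actual CTDB code): each check is a
self-re-arming timed event, and destroying the timer on shutdown guarantees
it can never fire again.

    /* Minimal sketch, not the actual CTDB code: a periodic check implemented
     * as a tevent timer that re-arms itself.  Freeing the timer during
     * shutdown destroys the event so the check cannot fire again and the
     * recovery daemon is not restarted. */
    #include <stdio.h>
    #include <talloc.h>
    #include <tevent.h>

    static void check_recd_sketch(struct tevent_context *ev,
                                  struct tevent_timer *te,
                                  struct timeval now, void *private_data)
    {
        struct tevent_timer **slot = private_data;
        (void)te;
        (void)now;

        printf("checking that the recovery daemon is still alive\n");

        /* re-arm the periodic check (30 seconds, as described above) */
        *slot = tevent_add_timer(ev, ev, tevent_timeval_current_ofs(30, 0),
                                 check_recd_sketch, slot);
    }

    int main(void)
    {
        struct tevent_context *ev = tevent_context_init(NULL);
        struct tevent_timer *check_timer;

        check_timer = tevent_add_timer(ev, ev,
                                       tevent_timeval_current_ofs(30, 0),
                                       check_recd_sketch, &check_timer);

        /* ... tevent_loop_wait(ev) would run here ... */

        /* On shutdown: destroy the timed event so the check (and the
         * ping-timeout event, handled the same way) cannot restart the
         * recovery daemon while the main daemon is tearing down. */
        TALLOC_FREE(check_timer);
        talloc_free(ev);
        return 0;
    }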
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
(This used to be ctdb commit 746168df2e691058e601016110fae818c6a265c3)
The record-by-record mode of recovery deletes empty records.
For persistent databases, this can lead to data corruption
by deleting records that should be there:
- Assume the cluster has been running for a while.
- A record R in a persistent database has been created and
deleted a couple of times, the last operation being deletion,
leaving an empty record with a high RSN, say 10.
- Now a node N is turned off.
- This leaves the local copy of the database on N with the empty
copy of R at RSN 10. On all other nodes, the recovery has deleted
the copy of record R.
- Now the record is created again while node N is turned off.
This creates R with RSN = 1 on all nodes except for N.
- Now node N is turned on again. The following recovery will choose
the older, empty copy of R because RSN 10 > RSN 1.
==> Hence the record is gone after the recovery.
On databases like Samba's registry, this can damage the higher-level
data structures built from the various tdb-level records.
This patch fixes that problem by not deleting empty records in recoveries
for persistent databases.
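A simplified sketch of the logic (illustrative types, not the CTDB sources):
the copy with the highest RSN wins the recovery blend, so deleting empty
records throws away their high RSN and lets a stale empty copy win later;
keeping empty records in persistent databases preserves the RSN sequence.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct rec_copy {
        uint64_t rsn;    /* record sequence number */
        size_t   dsize;  /* 0 == empty (deleted) record */
    };

    /* Recovery blends records from all nodes: the copy with the highest RSN
     * wins.  In the scenario above, pick_winner({rsn=10, dsize=0},
     * {rsn=1, dsize=42}) returns the empty copy and the record is lost. */
    static struct rec_copy pick_winner(struct rec_copy a, struct rec_copy b)
    {
        return (a.rsn >= b.rsn) ? a : b;
    }

    /* The fix: during recovery of a persistent database, never delete a
     * record just because it is empty.  Keeping the empty record preserves
     * its high RSN, so a later re-creation gets an even higher RSN. */
    static bool may_delete_in_recovery(bool persistent, struct rec_copy winner)
    {
        if (persistent) {
            return false;             /* keep empty records */
        }
        return winner.dsize == 0;     /* volatile DBs may drop empty records */
    }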
Signed-off-by: Michael Adam <obnox@samba.org>
(This used to be ctdb commit 6860c79aea416f56cfd7a6af790bbdf495dbc54e)
If any node fails the takeover run (either due to a timeout or a failure
to complete within the takeover_timeout interval) from the main loop, the
recovery master will give up on the takeover run with the following message:
"Unable to setup public takeover addresses. Try again later"
As a side effect, monitoring is disabled on all nodes. Before
ctdb_takeover_run() is called from the main loop, monitoring is disabled via the
startrecovery event. Since ctdb_takeover_run() fails, it never runs the
recovered event and monitoring does not get re-enabled.
In main_loop, ctdb_takeover_run() is called with a takeover_fail_callback.
This callback is invoked if any node fails to handle the
takeip/releaseip/ipreallocated events in ctdb_takeover_run().
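A sketch of the callback pattern involved (hypothetical signatures, not
CTDB's actual API): the takeover run reports a failing node through the
callback instead of silently giving up, so the caller can react, for
example by flagging another recovery so the recovered event eventually
re-enables monitoring.

    #include <stdint.h>
    #include <stdio.h>

    typedef void (*takeover_fail_cb)(uint32_t pnn, void *private_data);

    /* Hypothetical shape of a takeover run that takes a failure callback. */
    static int takeover_run_sketch(const uint32_t *nodes, int n,
                                   takeover_fail_cb fail_cb, void *private_data)
    {
        for (int i = 0; i < n; i++) {
            int ok = 1;  /* pretend node i handled takeip/releaseip/ipreallocated */
            if (!ok) {
                if (fail_cb != NULL) {
                    fail_cb(nodes[i], private_data);
                }
                return -1;
            }
        }
        return 0;
    }

    /* Example callback: note the failing node so the cluster does not get
     * stuck with monitoring disabled. */
    static void takeover_fail_callback_sketch(uint32_t pnn, void *private_data)
    {
        (void)private_data;
        fprintf(stderr, "node %u failed the takeover run\n", pnn);
        /* e.g. flag another recovery so the recovered event runs again */
    }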
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
(This used to be ctdb commit a5c6bb1fffb8dc3960af113957a1fd080cc7c245)
These support getting and clearing logs from the ring-buffer in the
recovery daemon.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit cbca233d1e03b2410e0bb63b936328d4a8b3c7b4)
Currently it checks for unhosted IPs among the known IPs rather than
available IPs. This means that a takeover run can be flagged even
when that takeover run will be unable to assign a known, unhosted IP.
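A simplified sketch of the corrected check (illustrative types): only an
unhosted IP that is actually available, i.e. hostable by some active node,
should flag a takeover run.

    #include <stdbool.h>
    #include <stdint.h>

    struct public_ip {
        int32_t pnn;        /* node currently hosting the IP, -1 if unhosted */
        bool    available;  /* at least one active node is able to host it */
    };

    /* Flag a takeover run only when it could actually assign something. */
    static bool takeover_run_needed(const struct public_ip *ips, int n)
    {
        for (int i = 0; i < n; i++) {
            if (ips[i].pnn == -1 && ips[i].available) {
                return true;   /* an assignable IP is unhosted */
            }
        }
        return false;          /* known but unavailable IPs are ignored */
    }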
Pair-programmed-with: Amitay Isaacs <amitay@gmail.com>
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit 3cc878bc97fdac764a60ed805f64d649eaab06e8)
Pair-programmed-with: Amitay Isaacs <amitay@gmail.com>
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit 9550c497e6d6ef5ee44826c4bd9ed5ad65174263)
Disable for TakeoverTimeout seconds.
Otherwise the recovery daemon can get overzealous and start trying
to add/delete addresses that it thinks are missing but where the
eventscript just hasn't finished. This didn't use to matter so much
but it is more important now that concurrent takeip/releaseip/updateip
operations generate errors - we want to avoid spamming the log.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit 56fcee3c7730cb12fa666072d5400949af6e5f7c)
Not just stopped nodes. In reality, this means that banned nodes will
also yield, since nodes in the other inactive states won't be running
a daemon.
This seems sensible since if another node notices that an inactive
node is the recovery master then it will force an election anyway.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit fc18188b7b63eb0dafbc47e3abf80e306e1dfc31)
An inactive node can't become the recovery master. So if an inactive
node notices that the recovery master is inactive, it shouldn't force
an election for recovery master and nominate itself as a candidate.
This can cause the recovery master to flip-flop between nodes when all
nodes are inactive.
If there is actually an active node then it will trigger the election.
This is fairly cosmetic but is a step along the way towards ironing
out weirdness when all nodes are stopped.
Also, fix a related comment.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit e7dc10da3ced54ea9d719ad167ee42dcca8dce75)
Doing these checks is pointless and potentially causes unnecessary log
messages.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit a0c30c820fd47d4f8620dc060c825be10754f5d1)
If CTDB starts in the STOPPED state then it thinks it is in the middle of
a recovery. rec->ifaces is also NULL and an early exit further down
(that checks whether a recovery is in progress) means that it stays
that way.
However, each time this function is entered the need for a takeover
run is re-flagged. The takeover run never happens due to the
early exit, causing a couple of unneeded messages to be logged each
time.
This is avoided by moving the code that sets rec->ifaces so that it is
executed earlier and, in this case, even in the middle of a recovery.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit f586e8a2911fc6e7f6698f516653145d8fd45dad)
Change this to instead preallocate the data buffer in 10 MByte chunks by default.
This significantly reduces the number of reallocate-and-move operations that may be required.
Create a tunable to override/change how much preallocation should be used.
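A sketch of the chunked preallocation (hypothetical names, with a
hard-coded chunk size standing in for the tunable): grow the buffer one
large chunk at a time instead of reallocating for every record.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define PREALLOC_CHUNK (10 * 1024 * 1024)  /* 10 MByte default, tunable */

    struct grow_buf {
        uint8_t *data;
        size_t   used;
        size_t   allocated;
    };

    /* Append len bytes, enlarging the buffer a whole chunk at a time so that
     * most appends never trigger a reallocate-and-move. */
    static int grow_buf_append(struct grow_buf *b, const void *blob, size_t len)
    {
        if (b->used + len > b->allocated) {
            size_t new_size = b->allocated;
            while (new_size < b->used + len) {
                new_size += PREALLOC_CHUNK;
            }
            uint8_t *p = realloc(b->data, new_size);
            if (p == NULL) {
                return -1;
            }
            b->data = p;
            b->allocated = new_size;
        }
        memcpy(b->data + b->used, blob, len);
        b->used += len;
        return 0;
    }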
(This used to be ctdb commit 1f262deaad0818f159f9c68330f7fec121679023)
Wrap all creation of child processes inside ctdb_fork(), which is used to track all processes we have spawned.
Capture SIGCHLD to also track which child processes have terminated.
Wrap kill() inside ctdb_kill() and make sure that we never send a non-zero signal to a child process pid that has already terminated (and might have been replaced with a different process).
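A simplified sketch of the ctdb_kill() idea (not the real implementation):
only pids created by our own fork wrapper and not yet reaped may receive a
real signal, since a reaped pid may already belong to an unrelated process.

    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>

    #define MAX_TRACKED 1024
    /* Filled in by the fork wrapper; a SIGCHLD handler clears the slot when
     * the child terminates, so a reaped pid is no longer tracked. */
    static pid_t tracked_pids[MAX_TRACKED];

    static bool pid_is_tracked(pid_t pid)
    {
        for (int i = 0; i < MAX_TRACKED; i++) {
            if (tracked_pids[i] == pid) {
                return true;
            }
        }
        return false;
    }

    static int ctdb_kill_sketch(pid_t pid, int sig)
    {
        if (sig != 0 && !pid_is_tracked(pid)) {
            /* Child already terminated; the pid may have been reused by an
             * unrelated process, so do not deliver the signal. */
            return 0;
        }
        return kill(pid, sig);
    }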
(This used to be ctdb commit f73a4b1495830bcdd094a93732a89dd53b3c2f78)
Also add a method to use the recovery master/daemon to reload the public ips on all nodes in the cluster.
Reloading the public ips on all nodes in the cluster is only supported if all nodes in the cluster are available and healthy.
(This used to be ctdb commit 05603e914f8c12618d7e06943c0f7df207f645b0)
The implementation of DisableIPFailover got intermingled with
--nopublicipcheck. This just looks wrong - Ronnie must have been
having a bad day. :-)
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit 5083b266dd68b292c4275505f3d1b878dbf12f11)
Add a new tunable that changes how persistent databases are recovered.
RecoveryPDBBySeqNum
When set to 1, persistent databases will be recovered in whole from the node which
has the highest "__db_sequence_number__" record.
This record is managed by Samba for those databases where we do persistent writes and have
inter-record relations.
For these databases we do not want the usual "blend records from all nodes based
on individual record RSN" but instead a mode where we pick one instance of the persistent database.
If no node is found with a "__db_sequence_number__" record at all, we fall back to the original "recover records independently based on record RSN".
Some persistent databases do not contain record interrelations and as such do not
contain this special record at all.
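A sketch of the selection step (illustrative types, not the CTDB code):
pick the node with the highest "__db_sequence_number__" as the single
source of the persistent database, or fall back to the per-record RSN mode
if no node has that record.

    #include <stdbool.h>
    #include <stdint.h>

    struct node_pdb_info {
        uint32_t pnn;          /* node number */
        bool     has_seqnum;   /* node has a __db_sequence_number__ record */
        uint64_t seqnum;       /* its value, if present */
    };

    /* Returns the node to copy the whole database from, or -1 to fall back
     * to recovering records independently based on record RSN. */
    static int32_t pick_recovery_source(const struct node_pdb_info *nodes, int n)
    {
        int32_t best = -1;
        uint64_t best_seq = 0;

        for (int i = 0; i < n; i++) {
            if (!nodes[i].has_seqnum) {
                continue;
            }
            if (best == -1 || nodes[i].seqnum > best_seq) {
                best = (int32_t)nodes[i].pnn;
                best_seq = nodes[i].seqnum;
            }
        }
        return best;
    }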
(This used to be ctdb commit 502150c764298a9fa8c4d8aa445bf7d85d4ee9dc)
metze
(cherry picked from commit 6ba8af28f8a8f79db65120a97d7157dcc5c7e083)
Signed-off-by: Michael Adam <obnox@samba.org>
(This used to be ctdb commit ccd67cf7f26713e695000d89d9ce8cfa78bfe00f)
compared to old 1.0 branches
This must have been mistakenly applied to master when you intended to push
to a different branch, I guess.
Revert "recoverd: try to become the recovery master if we have the capability, but the current master doesn't"
This reverts commit a97d417aba85e901540147a4dff4794249442939.
(This used to be ctdb commit c19cb751077b78cf4b6e28a1e3746d4ffedbfd68)
the persistent flag.
This is the same size as the original boolean but allows us to add additional flags for the database.
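The idea can be sketched as follows (illustrative names and width; the real
structure member and flag names may differ): a flags field of the same size
as the old boolean, with the persistent bit as its first flag.

    #include <stdint.h>

    #define DB_FLAG_PERSISTENT 0x01   /* further flags can use 0x02, 0x04, ... */

    struct db_info_sketch {
        uint8_t flags;   /* stands in for the old boolean "persistent" field */
    };

    static inline int db_is_persistent(const struct db_info_sketch *db)
    {
        return (db->flags & DB_FLAG_PERSISTENT) != 0;
    }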
(This used to be ctdb commit 7462761638d25880ad46024ad4ef21667eb99a98)
Reduce an informational message about not performing ip reallocation
from NOTICE (the default) to INFO.
These messages are normal during startup or when stopped/banned, when
we will be in recovery mode for a while.
Remove a message in the loop waiting for initial startup to complete about
the generation being invalid. It is always invalid at this stage, before we have
finished the initial recovery.
Rate-limit the informational messages for CTDB_WAIT_UNTIL_RECOVERED
so that we only print them once per second for the first 60 seconds and after that only once per 10 minutes.
These messages are normal during startup, but we should not be logging them every second in cases where we will remain in recovery mode for an extended period of time,
such as when suspended or permabanned.
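The rate-limiting rule can be sketched roughly as follows (illustrative
only):

    #include <stdbool.h>
    #include <time.h>

    /* Log once per second for the first 60 seconds of waiting, then only
     * once every 10 minutes. */
    static bool should_log_wait_message(time_t start, time_t now,
                                        time_t last_logged)
    {
        time_t waited = now - start;
        time_t since_last = now - last_logged;

        if (waited <= 60) {
            return since_last >= 1;
        }
        return since_last >= 10 * 60;
    }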
CQ S1023302
(This used to be ctdb commit 3a0af8780dc595acbed880f288fcbc4f62c862fb)
This way, the records coming in via this handler can be treated appropriately.
Namely, they can be deleted instead of being stored when they meet the fast-path
vacuuming criteria (empty, never migrated with data...).
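A sketch of the decision (simplified record header, hypothetical flag name):
a record arriving through this handler can be dropped immediately when it
is empty and has never been migrated with data, instead of being stored and
vacuumed later.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define REC_FLAG_MIGRATED_WITH_DATA 0x01   /* illustrative flag */

    struct incoming_record {
        uint16_t flags;
        size_t   dsize;    /* 0 == empty record */
    };

    /* Fast-path vacuuming criteria: empty and never migrated with data. */
    static bool can_delete_on_arrival(const struct incoming_record *r)
    {
        bool empty = (r->dsize == 0);
        bool migrated_with_data = (r->flags & REC_FLAG_MIGRATED_WITH_DATA) != 0;

        return empty && !migrated_with_data;
    }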
(This used to be ctdb commit fb5d832104970320359b3e474eb291ca3d629380)
Those records that are kept after recovery are non-empty and
stored identically on all nodes. So this is as if they had been
migrated with data.
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
(This used to be ctdb commit 101be642e492a3a54231e2e3e6553a59380fe702)
While it does not address the reason for the recovery daemon shutting down, it reduces the impact of such issues and makes the system more robust.
(This used to be ctdb commit 0566ef3d6cef809bda204877c493c80ff9eb2c40)
has failed.
We don't need to rebuild the databases in this situation, we just
need to try again to sort out the ip address allocations.
(This used to be ctdb commit 044c398ffea23d36ee033c8ddf07d11028197346)
scheduler for the child.
Use ctdb_fork() from callers where we don't want the child to be running
at real-time privilege.
(This used to be ctdb commit 58795a4c9e0624e20fa3e0023b65127053edd103)
but thinks it is still unassigned (-1).
Add code to the recovery daemon to detect this case and trigger a reallocation
so that the ip gets covered,
and change the takeip code to allow for this condition, taking over an ip address that is
already hosted.
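Both halves of the change can be sketched as follows (hypothetical types
and helpers, not the CTDB sources): the recovery daemon flags a
reallocation when an address it believes is unassigned is in fact hosted on
a local interface, and takeip treats taking over an already hosted address
as success.

    #include <stdbool.h>
    #include <stdint.h>

    struct ip_assignment {
        int32_t pnn;              /* believed owner, -1 == unassigned */
        bool    hosted_locally;   /* actually present on a local interface */
    };

    /* Recovery daemon side: trigger a reallocation on the inconsistency. */
    static bool need_ip_reallocation(const struct ip_assignment *ips, int n)
    {
        for (int i = 0; i < n; i++) {
            if (ips[i].pnn == -1 && ips[i].hosted_locally) {
                return true;   /* hosted but thought to be unassigned */
            }
        }
        return false;
    }

    /* takeip side: taking an address we already host is not an error. */
    static int takeip_sketch(struct ip_assignment *ip, int32_t my_pnn)
    {
        if (ip->hosted_locally) {
            ip->pnn = my_pnn;  /* just record ownership, nothing to add */
            return 0;
        }
        /* ...otherwise actually add the address to the interface... */
        ip->pnn = my_pnn;
        return 0;
    }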
cq s1021073
(This used to be ctdb commit 9020baf27cab7821c9094cda185206fb7af0fee7)