Inactive can also mean stopped. To add information, just print the
flags instead.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit a8605f7e06076e7edf84e0cc160fd3d9ab5c4b64)
Signed-off-by: Michael Adam <obnox@samba.org>
Reviewed-By: Amitay Isaacs <amitay@gmail.com>
(This used to be ctdb commit 61f17e53576197def46bc61fdf0cdb5282333a3e)
Problem:
Recovery can under certain circumstances lead to old record copies
being resurrected: recovery selects the newest record copy purely by
RSN. At the end of the recovery, the recovery master is the dmaster for
all records in all (non-persistent) databases, and the other nodes
locally hold the complete copy of the databases. The bug is that the
recovery process does not increment the RSN on the recovery master at
the end of the recovery. Clients acting directly on the recovery master
will change a record's content on the recmaster without migration and
hence without an RSN bump. So a subsequent recovery cannot tell that
the recmaster's copy is newer than the copies on the other nodes, since
their RSN is the same. Hence, if the recmaster is not node 0 (or, more
precisely, not the active node with the lowest node number), the
recovery will choose copies from nodes with lower numbers and stick to
these.
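For illustration, here is a minimal sketch of the selection rule
described above (simplified stand-in types and names, not the actual
recovery code): with equal RSNs, the copy pulled first, i.e. from the
lower-numbered node, is kept, which is exactly how the stale copy
survives.

    /* Simplified stand-ins, for illustration only. */
    #include <stdbool.h>
    #include <stdint.h>

    struct record_copy {
            uint64_t rsn;        /* record sequence number */
            uint32_t from_node;  /* node the copy was pulled from */
    };

    /* A candidate only replaces the copy collected so far if its RSN
     * is strictly higher, so copies with equal RSNs keep whichever
     * was pulled first (from the lowest-numbered active node). */
    static bool copy_is_newer(const struct record_copy *candidate,
                              const struct record_copy *best_so_far)
    {
            return candidate->rsn > best_so_far->rsn;
    }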
Here is how to reproduce:
- assume we have a cluster with at least 2 nodes
- ensure that the recmaster is not node 0
(maybe ensure with "onnode 0 ctdb setrecmasterrole off")
say recmaster is node 1
- choose a new database name, say "test1.tdb"
(make sure it is not yet attached as persistent)
- choose a key name, say "key1"
- all cluster nodes should be OK and no recovery should be running
- now do the following on node 1:
1. dbwrap_tool test1.tdb store key1 uint32 1
2. dbwrap_tool test1.tdb fetch key1 uint32
==> 1
3. ctdb recover
4. dbwrap_tool test1.tdb store key1 uint32 2
5. dbwrap_tool test1.tdb fetch key1 uint32
==> 2
6. ctdb recover
7. dbwrap_tool test1.tdb fetch key1 uint32
==> 1
==> BUG
This is a very severe bug: when it hits Samba's locking.tdb
database, SMB clients on clustered Samba can lock themselves out
of previously opened files or, even worse, suffer data corruption:
Case 1: locking out
- client on recmaster opens file
- recovery propagates open file handle (entry in locking.tdb) to
other nodes
- client closes file
- client opens the same file
- recovery resurrects old copy of open file record in locking.tdb
from lower node
- client closes file but fails to delete entry in locking.tdb
- client tries to open the same file again but fails, since
  the old record locks it out (it is not discarded as stale,
  because the client is still connected)
Case 2: data corruption
- client1 on recmaster opens file
- recovery propagates open file info to other nodes
- client1 closes the file and disconnects
- client2 opens the same file
- recovery resurrects old copy of locking.tdb record,
where client2 has no entry, but client1 has.
- but client2 believes it still has a handle
- client3 opens the file and succeeds without
conflicting with client2
(the detached entry for client1 is discarded because
the server does not exist any more).
=> both client2 and client3 believe they have exclusive
   access to the file, and their writes create data corruption
Fix:
When storing a record on the dmaster, bump its RSN.
The ctdb_ltdb_store_server() is the central function for storing
a record to a local tdb from the ctdbd server context.
So this is also the place where the RSN of the record to be stored
should be incremented, when storing on the dmaster.
For the case of record migration, this is currently done in
ctdb_become_dmaster() in ctdb_call.c, but there are other places,
such as recovery, where we should bump the RSN but currently don't.
So moving the RSN increment into ctdb_ltdb_store_server() fixes
the recovery-record-resurrection bug.
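As a rough sketch of that idea (simplified stand-in types, not the
actual ctdb_ltdb_store_server() code): when the storing node is the
record's dmaster, the RSN is bumped before the record is written to
the local tdb.

    #include <stddef.h>
    #include <stdint.h>

    struct record_header {       /* simplified stand-in for the ltdb header */
            uint64_t rsn;        /* record sequence number */
            uint32_t dmaster;    /* current data master node */
    };

    /* hypothetical server-side store helper */
    static int store_record_server(uint32_t this_node,
                                   struct record_header *hdr,
                                   const uint8_t *data, size_t dlen)
    {
            if (hdr->dmaster == this_node) {
                    /* We are the dmaster: bump the RSN so a later
                     * recovery can see that this copy is newer than
                     * the copies still held on other nodes. */
                    hdr->rsn++;
            }

            /* ... write hdr followed by data into the local tdb ... */
            (void)data;
            (void)dlen;
            return 0;
    }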
Signed-off-by: Michael Adam <obnox@samba.org>
Reviewed-By: Amitay Isaacs <amitay@gmail.com>
(This used to be ctdb commit feb1d40b21a160737aead22e398f3c34ff3be8de)
This can improve performance and stop clients from having to chase a rapidly migrating/bouncing record
(This used to be ctdb commit d0d98f7e45e5084b81335b004d50bddc80cdc219)
Move identical copies of ctdb_null_func(), ctdb_fetch_func(),
ctdb_fetch_with_header_func() from ctdb_client.c and
ctdb_ltdb_server.c to somewhere common.
This is in the context of wanting to run CCAN-style tests where most
of the ctdbd code is just included in the test program.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit 126cb0d369b2b1aed63801dc4ba0554399e8b7e4)
The if statement uses ret but means to use ret2.
Signed-off-by: Martin Schwenke <martin@meltin.net>
(This used to be ctdb commit f40101a615f8b9826a484e4697bfea6ee2b9ba88)
When multiple clients fetch the same record concurrently, send only a
single fetch across the network and defer all other fetches locally.
This improves performance for hot records and reduces CPU load on ctdb.
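A minimal sketch of the deferral idea (hypothetical names, not the
actual ctdb code): the first fetch for a key registers a queue and
goes out over the network; later fetches for the same key are parked
on that queue and replayed once the record has arrived.

    /* Illustrative only; error handling omitted for brevity. */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    struct deferred_call {
            struct deferred_call *next;
            void (*resume)(void *private_data); /* replays the client call */
            void *private_data;
    };

    struct fetch_queue {
            struct fetch_queue *next;
            char *key;                          /* key being fetched */
            struct deferred_call *waiters;      /* calls parked on it */
    };

    static struct fetch_queue *fetch_queues;

    /* Returns true if a fetch for this key is already in flight and the
     * call was deferred; false if the caller should send the fetch. */
    static bool fetch_or_defer(const char *key,
                               void (*resume)(void *), void *private_data)
    {
            struct fetch_queue *q;

            for (q = fetch_queues; q != NULL; q = q->next) {
                    if (strcmp(q->key, key) == 0) {
                            /* already in flight: park this caller */
                            struct deferred_call *c = calloc(1, sizeof(*c));
                            c->resume = resume;
                            c->private_data = private_data;
                            c->next = q->waiters;
                            q->waiters = c;
                            return true;
                    }
            }

            /* first fetch for this key: remember it and let the caller
             * send a single request across the network; when the record
             * arrives, walk q->waiters and call resume() on each entry */
            q = calloc(1, sizeof(*q));
            q->key = strdup(key);
            q->next = fetch_queues;
            fetch_queues = q;
            return false;
    }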
(This used to be ctdb commit 82d6946ad8b3348e8b9d3d971f24925ade02d1be)
When set to 0, clients will not be able to attach to databases
via the db_attach control. This can be useful for maintenance
where ctdb should be kept running but clients should not be able
to modify databases.
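A sketch of the check, under assumed names (this is not the actual
control handler, and the tunable parameter here is hypothetical):
with the tunable set to 0, requests coming from real clients are
refused while ctdbd itself can still attach internally.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical, simplified attach check */
    static int db_attach_check(uint32_t allow_client_db_attach,
                               bool from_client, const char *db_name)
    {
            if (from_client && allow_client_db_attach == 0) {
                    fprintf(stderr,
                            "DB Attach to database %s refused: "
                            "client attach disabled for maintenance\n",
                            db_name);
                    return -1;
            }
            /* ... proceed with the normal attach path ... */
            return 0;
    }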
(This used to be ctdb commit ddfeecda87955b4e46777599f678e6926d37f4c4)
Let all databases default to not supporting this until it is enabled
through this control.
(This used to be ctdb commit 908a07c42e5135a3ba30a625fc4f4e4916de197a)
Assume all databases will support readonly mode for now and set the flag for all databases. At a later stage we will add support to control on a per-database level whether delegations will be supported or not.
(This used to be ctdb commit 502f86f79944df4bac9094f716e54110c511dc24)
This function differs from the old FETCH in that it also fetches the record header and not just the record data.
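Roughly, the reply can be thought of as the ltdb record header
immediately followed by the record data. A hedged sketch (simplified
header layout, assumed for illustration) of how a caller would split
such a reply:

    #include <stdint.h>
    #include <string.h>

    struct record_header {       /* simplified stand-in for the ltdb header */
            uint64_t rsn;
            uint32_t dmaster;
    };

    /* Split a header+data reply blob into its two parts. */
    static int split_fetch_reply(const uint8_t *blob, size_t blob_len,
                                 struct record_header *hdr,
                                 const uint8_t **data, size_t *data_len)
    {
            if (blob_len < sizeof(*hdr)) {
                    return -1;   /* malformed reply */
            }
            memcpy(hdr, blob, sizeof(*hdr));
            *data = blob + sizeof(*hdr);
            *data_len = blob_len - sizeof(*hdr);
            return 0;
    }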
(This used to be ctdb commit c7196d16e8e03bb2a64be164d15a7502300eae0e)
Otherwise the da context can be timed out and talloc_free()d
but the event for this already freed object will still trigger,
causing a talloc error and shutdown.
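The general pattern behind such a fix (a sketch under assumed names,
not the actual change): parent the timeout event on the context it
refers to, so that freeing the context also cancels the pending event.

    #include <sys/time.h>
    #include <talloc.h>
    #include <tevent.h>

    struct da_ctx {                      /* hypothetical deferred context */
            struct tevent_timer *te;
    };

    static void da_timeout(struct tevent_context *ev,
                           struct tevent_timer *te,
                           struct timeval now, void *private_data)
    {
            struct da_ctx *da = talloc_get_type_abort(private_data,
                                                      struct da_ctx);
            /* ... handle the timeout ... */
            talloc_free(da);
    }

    static struct da_ctx *da_setup(TALLOC_CTX *mem_ctx,
                                   struct tevent_context *ev,
                                   struct timeval deadline)
    {
            struct da_ctx *da = talloc_zero(mem_ctx, struct da_ctx);
            if (da == NULL) {
                    return NULL;
            }
            /* The timer is a talloc child of 'da': talloc_free(da)
             * removes the event, so the handler can never fire against
             * an already freed object. */
            da->te = tevent_add_timer(ev, da, deadline, da_timeout, da);
            if (da->te == NULL) {
                    talloc_free(da);
                    return NULL;
            }
            return da;
    }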
CQ S1022515
(This used to be ctdb commit 2fd27bdedb1e0d6558c07e1b74fc8e70ddf593dc)
the initial recovery phase
CQ S1022412
Signed-off-by: Michael Adam <obnox@samba.org>
(This used to be ctdb commit e02bbd915b7151c615ff64f09ad9abc9720bef7d)
Do not delete empty records that carry this flag, but store
them and schedule them for deletion. Do not store the flag
in the ltdb though, since this is internal only and should not
be visible to the client.
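A sketch of that behaviour (the flag name, value and helper are
assumptions for illustration): the key is remembered for later
deletion, and the internal flag is masked out before the record hits
the local tdb.

    #include <stddef.h>
    #include <stdint.h>

    #define REC_FLAG_SCHEDULE_FOR_DELETION 0x0080  /* hypothetical value */

    struct record_header {       /* simplified stand-in */
            uint64_t rsn;
            uint32_t dmaster;
            uint32_t flags;
    };

    static int store_with_delete_queue(struct record_header *hdr,
                                       const uint8_t *key, size_t keylen)
    {
            if (hdr->flags & REC_FLAG_SCHEDULE_FOR_DELETION) {
                    /* remember the key for the delete queue, e.g.
                     * schedule_for_deletion(key, keylen); (hypothetical) */

                    /* internal-only flag: never persist it */
                    hdr->flags &= ~REC_FLAG_SCHEDULE_FOR_DELETION;
            }
            /* ... store hdr plus the (empty) data into the ltdb ... */
            (void)key;
            (void)keylen;
            return 0;
    }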
(This used to be ctdb commit f898ff21fa338358179e79381215b13a6bc77c53)
When a record has been obtained by the lmaster as part of the
vacuuming-fetch handler, is empty, and has never been migrated with
data, then it is deleted instead of being stored. Such records have
already been deleted automatically when leaving the former dmaster, so
they vanish for good when hitting the lmaster in this way. This reduces
the load on traditional vacuuming.
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
(This used to be ctdb commit c9b65f3602f51bcbf0e6d82c12076c31e4aebe38)
When storing a record that is being migrated off to another node
and has never been migrated with data, we can safely delete it
from the local tdb instead of storing the record with empty data.
Note: This record is not deleted if we are its lmaster or dmaster.
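A sketch of the decision (simplified parameters; the flag name and
value are assumptions):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define REC_FLAG_MIGRATED_WITH_DATA 0x0010   /* hypothetical value */

    /* Safe to drop the local copy instead of storing an empty record? */
    static bool can_delete_instead_of_store(uint32_t this_node,
                                            uint32_t lmaster,
                                            uint32_t dmaster,
                                            uint32_t record_flags,
                                            size_t data_len)
    {
            if (this_node == lmaster || this_node == dmaster) {
                    return false;  /* we stay responsible for the record */
            }
            if (data_len != 0) {
                    return false;  /* only empty records qualify */
            }
            if (record_flags & REC_FLAG_MIGRATED_WITH_DATA) {
                    return false;  /* it has carried data at some point */
            }
            return true;           /* delete locally instead of storing */
    }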
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
(This used to be ctdb commit 3cca0d4b48325d86de2cb0b44bb7811a30701352)
This is realized by adding a ctdb_ltdb_store_fn function pointer to the db
context and filling it in the attach procedure for non-persistent dbs.
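In rough outline (simplified types; the real hook is the
ctdb_ltdb_store_fn mentioned above), the store path dispatches through
the per-database function pointer when one has been set at attach time:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct db_context;

    typedef int (*ltdb_store_fn_t)(struct db_context *db,
                                   const uint8_t *key, size_t keylen,
                                   const uint8_t *data, size_t datalen);

    struct db_context {
            bool persistent;
            ltdb_store_fn_t store_fn;  /* set at attach time for
                                        * non-persistent databases,
                                        * NULL otherwise */
    };

    static int default_store(struct db_context *db,
                             const uint8_t *key, size_t keylen,
                             const uint8_t *data, size_t datalen)
    {
            /* ... plain store into the local tdb ... */
            (void)db; (void)key; (void)keylen; (void)data; (void)datalen;
            return 0;
    }

    static int ltdb_store(struct db_context *db,
                          const uint8_t *key, size_t keylen,
                          const uint8_t *data, size_t datalen)
    {
            if (db->store_fn != NULL) {
                    return db->store_fn(db, key, keylen, data, datalen);
            }
            return default_store(db, key, keylen, data, datalen);
    }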
(This used to be ctdb commit df49ec44de80affa5ccc637dec12a20a26e8706e)
This is supposed to contain logic for deleting records that are safe
to delete and scheduling records for deletion. It will be called in
server context for non-persistent databases instead of the standard
ctdb_ltdb_store() function.
(This used to be ctdb commit 23631ffc152486aed9ce5b69a391e52bc4947833)
refuse a db attach during recovery IF we can associate the request with
a genuine real client, instead of deciding this on whether client_id is
zero or not.
This will suppress/avoid messages like these:
DB Attach to database %s refused. Can not match clientid...
(This used to be ctdb commit b05ccf366df985e0a3365aacc75761ebd438deaf)
Just treat as a nop.
When the database is created later it will get its priority set properly.
(This used to be ctdb commit 05c934b10ad2690be9d75c9033a0b849bf16455d)
The SRVID of the control to attach to a database is used to pass
tdb flags from samba to ctdb when samba attaches to a database.
This has been used earlier for TDB_NOSYNC flag.
Add TDB_INCOMPATIBLE_HASH as a supported tdb flag to store in the
SRVID field when attaching to a database.
This allows samba to control whether ctdb should create databases
using the new Jenkins hash or the old hash.
This only affects new databases when they are initially created.
Existing databases remain using the old hash when attached to.
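A sketch of that convention (simplified; only the whitelisting idea is
shown, the helper name is an assumption): mask the srvid down to the
tdb open flags that are allowed to travel this way.

    #include <stdint.h>
    #include <tdb.h>

    /* Extract the tdb open flags that may be passed in the srvid field
     * of the attach control; everything else is ignored. */
    static int tdb_flags_from_srvid(uint64_t srvid)
    {
            int allowed = TDB_NOSYNC | TDB_INCOMPATIBLE_HASH;

            return (int)srvid & allowed;
    }

The resulting flags would then be passed to the tdb open call; as noted
above, TDB_INCOMPATIBLE_HASH only has an effect when the database file
is first created.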
(This used to be ctdb commit e0eda175ac979828b376e8a6779b4608af52eb32)
In Samba this is now called "tevent", and while we use the backwards
compatibility wrappers they don't offer EVENT_FD_AUTOCLOSE: that is now
a separate tevent_fd_set_auto_close() function.
This is based on Samba version 7f29f817fa.
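A small sketch of the replacement pattern (the wrapper name is
assumed): create the fd event with tevent_add_fd() and then request
auto-close explicitly.

    #include <talloc.h>
    #include <tevent.h>

    static struct tevent_fd *watch_fd(struct tevent_context *ev,
                                      TALLOC_CTX *mem_ctx, int fd,
                                      tevent_fd_handler_t handler,
                                      void *private_data)
    {
            struct tevent_fd *fde = tevent_add_fd(ev, mem_ctx, fd,
                                                  TEVENT_FD_READ,
                                                  handler, private_data);
            if (fde != NULL) {
                    /* was EVENT_FD_AUTOCLOSE: close fd when the event
                     * is freed */
                    tevent_fd_set_auto_close(fde);
            }
            return fde;
    }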
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 85e5e760cc91eb3157d3a88996ce474491646726)
We don't want ctdb stalling due to paging; this can be far worse than
scheduling delays. But if we simply do mlockall(MCL_FUTURE), it
increases the risk that mmap (ie. tdb open) or malloc will fail,
causing us to abort.
This patch is a compromise: we mlock all current pages (including
10k of future stack for expansion) and then relock when a client
asks us to open a TDB. We warn, but don't exit, if it fails.
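The core of that compromise, sketched with plain POSIX calls (not the
exact ctdb code): lock what is currently mapped, warn instead of
aborting on failure, and call this again whenever a new tdb is opened.

    #include <stdio.h>
    #include <sys/mman.h>

    /* Lock all currently mapped pages into RAM.  Called at startup and
     * again after each tdb open/mmap; failure is only a warning. */
    static void lock_current_pages(void)
    {
            if (mlockall(MCL_CURRENT) != 0) {
                    perror("WARNING: mlockall(MCL_CURRENT) failed");
            }
    }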
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 82f778e85440bc713d3f87c08ddc955d3cfce926)
The do_setsched flag was being tested to decide whether to mmap tdbs: let's make it
explicit. We can also happily move the kill-child eventscript hack under
this flag.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This used to be ctdb commit 2ee86cc1f311d7b7504c7b14d142b9c4f6f4b469)
Depending on --max-persistent-check-errors we allow ctdb
to start with unhealthy persistent databases.
The default is 0 which means to reject a startup with
unhealthy dbs.
The health of the persistent databases is checked after each
recovery. Node monitoring and the "startup" event are deferred
until all persistent databases are healthy.
Databases can become healthy automatically when a completely
HEALTHY node joins the cluster, or when an administrator runs
"ctdb backupdb/restoredb" or "ctdb wipedb".
metze
(This used to be ctdb commit 15f133d5150ed1badb4fef7d644f10cd08a25cb5)