This callback is called for every node where the control failed (or timed out). When we issue the start-recovery control from the recovery master, mark any node that fails as a culprit so that it will eventually be banned.
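A hedged sketch of what such a fail callback looks like in the async control framework (the function and struct names here are assumptions):

    /* Called once per node whose control failed or timed out; blame
     * the node so that repeated failures eventually get it banned. */
    static void startrecovery_fail_callback(struct ctdb_context *ctdb,
                                            uint32_t node_pnn, int32_t res,
                                            TDB_DATA outdata, void *private_data)
    {
            struct ctdb_recoverd *rec = talloc_get_type(private_data,
                                                        struct ctdb_recoverd);
            ctdb_set_culprit(rec, node_pnn);
    }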
(This used to be ctdb commit 72f89bac13cbe8c3ca3e7a942469cd2ff25abba2)
Note that we don't actually send the IPv6 "gratuitous ARP" on the wire just yet (since IPv6 doesn't use ARP), but all the infrastructure will be in place when we implement sending raw neighbor-discovery packets.
(This used to be ctdb commit b87fab857bc9b3537527be93b7f68484502d6b84)
Extend ctdb_attach() so that we can pass TDB_NOSYNC when we attach to a persistent database and want fast, unsafe writes instead of slow but safe tdb_transaction writes.
Enhance the ctdb_persistent test suite to test both safe and unsafe writes.
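A minimal sketch of such an attach from the client side, assuming a signature along the lines of ctdb_attach(ctdb, name, persistent, tdb_flags):

    /* Ask for fast, unsafe (non-transaction) writes on this
     * persistent database by passing TDB_NOSYNC. */
    struct ctdb_db_context *db;

    db = ctdb_attach(ctdb, "secrets.tdb", true /* persistent */, TDB_NOSYNC);
    if (db == NULL) {
            /* attach failed: fall back or abort */
    }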
(This used to be ctdb commit 4948574f5a290434f3edd0c052cf13f3645deec4)
Since these two controls (UPDATE_RECORD and PERSISTENT_STORE) respond to the control asynchronously, we can not allocate the state variable as a child of ctdb_req_control. Instead we must allocate state as a child of ctdb itself and steal ctdb_req_control so that it becomes a child of state.
Otherwise both ctdb_req_control and state would be released immediately after we have finished setting up the async reply and returned.
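A minimal sketch of that ownership shuffle with talloc (the state struct name is hypothetical):

    /* state must outlive this function, so parent it off ctdb itself. */
    struct persistent_state *state = talloc(ctdb, struct persistent_state);
    if (state == NULL) {
            return -1;
    }
    /* Take ownership of the request so it lives exactly as long as
     * state and can be answered from the async completion path. */
    state->c = talloc_steal(state, c);   /* c is the ctdb_req_control */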
(This used to be ctdb commit 6f6de0becd179be9eb9a6bf70562b090205ce196)
With these patches, ctdbd will enforce and (by default) always use tdb_transactions when updating/writing records to a persistent database. This may come with a small performance degradation, since transactions are slower than no transactions at all.
If a client such as Samba wants to use a persistent database but does NOT want to pay the performance penalty, it can specify TDB_NOSYNC as the srvid parameter in the ctdb_control() for CTDB_CONTROL_DB_ATTACH_PERSISTENT. In this case ctdbd will remember that "this database is not that important", so it can use unsafe (non-transaction) tdb_store calls to write the updates. This is faster than the default (always use transactions) but less crash-safe.
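A hedged sketch of that control from the client side; the exact ctdb_control() argument order varies between releases, so treat this as illustrative only:

    /* The tdb open flags ride in the srvid field of the control
     * header, since DB_ATTACH_PERSISTENT has no other use for it. */
    int ret;
    int32_t res;
    TDB_DATA outdata;
    TDB_DATA name = { .dptr  = (uint8_t *)"secrets.tdb",
                      .dsize = strlen("secrets.tdb") + 1 };

    ret = ctdb_control(ctdb, CTDB_CURRENT_NODE,
                       TDB_NOSYNC,                        /* srvid */
                       CTDB_CONTROL_DB_ATTACH_PERSISTENT,
                       0, name, ctdb, &outdata, &res, NULL, NULL);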
(This used to be ctdb commit 3d85d2cf669686f89cacdc481eaa97aef1ba62c0)
For stores into persistent databases, ALWAYS use a lockwait child to take out the lock for the record, and never the daemon itself.
(This used to be ctdb commit 7fb6cf549de1b5e9ac5a3e4483c7591850ea2464)
The address might be a public address on a different node, so there is no need to fill up the logs with those messages.
(This used to be ctdb commit c8181476748395fe6ec5284c49e9d37b882d15ea)
Just disable monitoring during the "startrecovery" event and enable it again once recovery has completed.
(This used to be ctdb commit 68029894f80804c9f31fc90ed0c1b58f75812c3d)
This event won't run unless the recovery mode is normal, but we can not know what the recovery mode will be in the future on a remote node. Since we issue these commands to execute at some point in the future on some other node, it is pointless to try to check whether they worked or not; in particular, a "failure to successfully run the eventscript" would then trigger a full new recovery, which is disruptive and expensive.
(This used to be ctdb commit 2c292039a0139dcf5bb2bd964eb6f8902d094c50)
This attempts to fix the problem of ctdb event scripts blocking due to
attempted access to the ctdb databases during recovery. The changes are:
- Now only the 'shutdown' and 'startrecovery' events can be called
  with the databases locked in recovery. The event scripts must ensure
  that for these two events no database access is attempted.
- The recovered, takeip and releaseip events could previously be called
  inside a recovery. The code now ensures that this doesn't happen,
  delaying the events until after recovery has finished.
- The 50.samba event script now avoids using testparm unless it is
  really needed.
This needs extensive testing.
(This used to be ctdb commit e3cdb8f2be6a44ec877efcd75c7297edb008a80b)
This enhances the framework for sending TCP tickles so that it can send IPv6 tickles as well.
Since we can not use one single raw socket to send both handcrafted IPv4 and IPv6 packets, instead of always opening TWO sockets (one IPv4 and one IPv6) we get rid of the helper ctdb_sys_open_sending_socket() and just open (and close) a raw socket of the appropriate type inside ctdb_sys_send_tcp().
We know which type of socket (v4 or v6) to use based on the sin_family of the destination address.
Since ctdb_sys_send_tcp() opens its own socket, we no longer need to pass a socket descriptor as a parameter. Get rid of this redundant parameter and fix up all callers.
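A minimal sketch of the family dispatch in plain POSIX (the packet crafting and sendto() plumbing are omitted, and the helper name is hypothetical):

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Open a raw socket matching the destination's address family;
     * the caller sends one tickle and closes the socket again. */
    static int open_raw_socket_for(const struct sockaddr *dest)
    {
            int family = dest->sa_family;

            if (family != AF_INET && family != AF_INET6) {
                    return -1;
            }
            return socket(family, SOCK_RAW, IPPROTO_RAW);
    }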
(This used to be ctdb commit 406a2a1e364cf71eb15e5aeec3b87c62f825da92)
If a transaction can be started, do a safe transaction store when updating the record inside the daemon.
If the transaction can not be started (maybe another Samba process holds a lock on the database?), then just do a normal store instead, rather than blocking the ctdb daemon.
The client can "signal" ctdb that updates to this database should, if possible, be done using safe transactions by specifying the TDB_NOSYNC flag when attaching to the database.
The TDB flags are passed to ctdb in the "srvid" field of the control header when attaching using CTDB_CONTROL_DB_ATTACH_PERSISTENT.
Currently, samba3.2 does not yet tell ctdbd to handle any persistent databases using safe transactions. If samba3.2 wants a particular persistent database to be handled using safe transactions inside the ctdbd daemon, it should pass TDB_NOSYNC as the flags in the call to attach to a persistent database; in ctdbd_db_attach() it currently specifies 0 as the srvid.
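A hedged sketch of the store-side fallback using the plain tdb API (the surrounding daemon plumbing is omitted):

    #include <tdb.h>

    /* Try a safe transactional store; if the transaction can not be
     * started (e.g. someone else holds a lock on the tdb), fall back
     * to a plain, unsafe store rather than blocking the daemon. */
    static int store_with_fallback(struct tdb_context *tdb,
                                   TDB_DATA key, TDB_DATA data)
    {
            if (tdb_transaction_start(tdb) != 0) {
                    return tdb_store(tdb, key, data, TDB_REPLACE);
            }
            if (tdb_store(tdb, key, data, TDB_REPLACE) != 0) {
                    tdb_transaction_cancel(tdb);
                    return -1;
            }
            return tdb_transaction_commit(tdb);
    }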
(This used to be ctdb commit 8d6ecf47318188448d934ab76e40da7e4cece67d)
If we shut down the transport and CTDB later decides to send a command out for queueing, the call to ctdb->methods->allocate_pkt() will SEGV.
This could trigger, for example, when we are in the process of shutting down ctdbd and have already shut down the transport but are still waiting for the "shutdown" eventscripts to finish.
If the event scripts take much longer to execute for some reason, this race condition becomes much more probable.
Decorate all dereferences of ctdb->methods with a check that ctdb->methods is non-NULL.
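A minimal sketch of the guard at one hypothetical call site (the DEBUG macro usage is an approximation of the ctdb logging style):

    /* The transport may already be torn down during shutdown, in
     * which case ctdb->methods is NULL and must not be dereferenced. */
    if (ctdb->methods == NULL) {
            DEBUG(DEBUG_ERR, (__location__ " transport is down, dropping packet\n"));
            return;
    }
    pkt = ctdb->methods->allocate_pkt(mem_ctx, length);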
(This used to be ctdb commit c4c2c53918da6fb566d6e9cbd6b02e61ae2921e7)
Remove a bogus check inside the recovery daemon that ONLY redistributed public addresses IFF the local node had/served public addresses.
This was a valid optimization long ago, when we enforced that all nodes must use the same public addresses file, but it is invalid today, where we can have different public address configurations on different nodes and even have some nodes that do NOT use public addresses at all.
(This used to be ctdb commit 5833e6b99d9afaf35dc8354df8676b9115418b23)
We should always use type-safe talloc functions when possible. In this case we were allocating bytes instead of uint32_t's.
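A before/after illustration with hypothetical variable names:

    /* Before: allocates num_nodes BYTES, not num_nodes uint32_t's. */
    uint32_t *bad  = talloc_size(mem_ctx, num_nodes);

    /* After: type-safe, allocates num_nodes * sizeof(uint32_t) and
     * tags the allocation with the type name for talloc debugging. */
    uint32_t *good = talloc_array(mem_ctx, uint32_t, num_nodes);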
(This used to be ctdb commit cb14ee57dd0a589242da1ac2830bb7939df460a5)
This allows us to use the async framework also for controls that return outdata.
Add a "capabilities" field to the ctdb_node structure. This field is only initialized and kept valid inside the recovery daemon context, not inside the main ctdb daemon.
Change the GET_CAPABILITIES control to return the capabilities in outdata instead of in the res return variable.
When performing a recovery inside the recovery daemon, read the capabilities from all connected nodes and update the ctdb->nodes list of nodes.
When building the new vnnmap after the database rebuild in recovery, do not include any nodes that lack the LMASTER capability in the new vnnmap, unless there is no available connected node that sports the LMASTER capability at all, in which case we let the local node (the recmaster) take on the lmaster role temporarily (i.e. become a member of the vnnmap list).
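A hedged sketch of the vnnmap filter; the struct layouts and flag names here are assumptions, not the exact code:

    /* Only nodes advertising the LMASTER capability join the new
     * vnnmap; if none do, fall back to the recmaster itself. */
    uint32_t i, num = 0;

    for (i = 0; i < nodemap->num; i++) {
            if (nodemap->nodes[i].flags & NODE_FLAGS_INACTIVE) {
                    continue;
            }
            if (ctdb->nodes[i]->capabilities & CTDB_CAP_LMASTER) {
                    vnnmap->map[num++] = nodemap->nodes[i].pnn;
            }
    }
    if (num == 0) {
            vnnmap->map[num++] = ctdb->pnn; /* recmaster takes lmaster */
    }
    vnnmap->size = num;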
(This used to be ctdb commit 0f1883c69c689b28b0c04148774840b2c4081df6)
Handle failure to get/hold the reclock pnn file better: just treat it as a transient backend filesystem error and try again later, instead of shutting down the recovery daemon.
When we have lost the pnn file and we are the recmaster, release the recmaster role so that someone else can become recmaster instead.
(This used to be ctdb commit e513277fb09b951427be8351d04c877e0a15359d)
Define two capabilities:
- can be recmaster
- can be lmaster
Default both capabilities to YES.
Update the ctdb tool to read capabilities off a node.
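A sketch of how such capabilities are typically carried as a bitmask (the exact macro names and values are assumptions):

    /* Capability bits, both enabled by default. */
    #define CTDB_CAP_RECMASTER   0x00000001
    #define CTDB_CAP_LMASTER     0x00000002

    uint32_t capabilities = CTDB_CAP_RECMASTER | CTDB_CAP_LMASTER;

The ctdb tool side would then fetch this word from a node and print which bits are set.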
(This used to be ctdb commit 50f1255ea9ed15bb8fa11cf838b29afa77e857fd)
Remove the transaction stuff and push, so that the git tree will work.
This reverts commit 539bbdd9b0d0346b42e66ef2fcfb16f39bbe098b.
(This used to be ctdb commit 876d3aca18c27c2239116c8feb6582b3a68c6571)