store the idr as the high 16 bits and use a rotating counter for the low
16 bits.
(This used to be ctdb commit 7c763b7b5e6ca54a6df4586893ddaf1b508b4c22)
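A minimal sketch of the described reqid layout, assuming a 32-bit request id (the names are illustrative, not the ctdb API):

    #include <stdint.h>

    /* illustrative sketch: pack the idr-allocated slot into the high
     * 16 bits and a rotating counter into the low 16 bits, so a stale
     * reply that reuses a freed idr slot will not match the full reqid */
    static uint16_t reqid_counter;

    static uint32_t reqid_new(uint16_t idr_id)
    {
            return ((uint32_t)idr_id << 16) | reqid_counter++;
    }

    /* on lookup, the high 16 bits recover the idr index and the full
     * 32-bit value is compared against the reqid stored in the entry */
    static uint16_t reqid_to_idr(uint32_t reqid)
    {
            return (uint16_t)(reqid >> 16);
    }

    int main(void)
    {
            uint32_t r = reqid_new(3);
            return reqid_to_idr(r) == 3 ? 0 : 1;
    }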
lmaster instead of what the nodes think is the dmaster (which might be
stale), to improve performance.
(This used to be ctdb commit f535f79e6a2a6c6d07141b96e0b957fa93c684f4)
dmaster request stage, and instead directly send a dmaster
reply. This avoids a race condition where a new call comes in for
the same record while processing the dmaster request
- don't keep any redirect records during a ctdb call. This prevents a
memory leak in case of a redirect storm
(This used to be ctdb commit 59889ca0fd606c7d2156839383a09dfc5a2e4853)
processed immediately, but the input routines indirectly assumed
they were being called as a new event (for example, a calling
routine might queue the packet, then afterwards modify the ltdb
routine might queue the packet, then afterwards modify the ltdb
record). The solution was to make self packets queue via a zero
timeout (sketched after this entry).
- fixed unlinking of the socket on exit in the lockwait code. Needed
  an _exit instead of exit so atexit() doesn't trigger
- print latency of lockwait delays
(This used to be ctdb commit 1b0684b4f6a976f4c5fe54394ac54d121810b298)
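The zero-timeout trick looks roughly like this when written against tevent (an assumption here; the ctdb tree of that era used its own events library with a similar interface). The point is that the handler runs from the event loop, so the sender's stack has fully unwound before the packet is processed:

    #include <stdio.h>
    #include <talloc.h>
    #include <tevent.h>

    struct packet {
            int op;               /* stand-in for a real ctdb packet */
    };

    static void process_packet(struct packet *pkt)
    {
            (void)pkt;
            puts("packet processed from the event loop");
    }

    static void self_packet_handler(struct tevent_context *ev,
                                    struct tevent_timer *te,
                                    struct timeval t, void *private_data)
    {
            process_packet((struct packet *)private_data);
    }

    /* instead of processing a packet addressed to ourselves inline,
     * queue it via a zero timeout so it arrives as a fresh event */
    static void queue_self_packet(struct tevent_context *ev,
                                  struct packet *pkt)
    {
            tevent_add_timer(ev, pkt, tevent_timeval_zero(),
                             self_packet_handler, pkt);
    }

    int main(void)
    {
            struct tevent_context *ev = tevent_context_init(NULL);
            struct packet *pkt = talloc_zero(ev, struct packet);
            queue_self_packet(ev, pkt);
            tevent_loop_once(ev);         /* delivers the self packet */
            talloc_free(ev);
            return 0;
    }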
ctdb_ltdb_lock_requeue() and a small wrapper
- use ctdb_ltdb_lock_requeue() to fix a possible hang in
ctdb_reply_dmaster(), where the ctdb_ltdb_store() could hang waiting
for a client. We now requeue the reply_dmaster packet until we have
the lock (the pattern is sketched after this entry)
(This used to be ctdb commit 97cd7aa09ce3abbb5e3e965c5c81668e0c0133a5)
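A rough, self-contained sketch of the lock-or-requeue pattern (all names are illustrative; the real ctdb_ltdb_lock_requeue() signature differs):

    #include <stdio.h>

    struct pkt {
            int key;
    };

    /* stand-ins for the real tdb chain lock and the daemon's queue */
    static int chainlock_nonblock(int key) { (void)key; return -1; }
    static int requeue_packet(struct pkt *p)
    {
            printf("requeued packet for key %d\n", p->key);
            return 0;
    }

    enum lock_result { LOCK_OK, LOCK_REQUEUED, LOCK_ERROR };

    /* try the chain lock without blocking; if a client holds it,
     * requeue the packet so the event loop retries later, instead of
     * the daemon blocking on the lock */
    static enum lock_result ltdb_lock_requeue(struct pkt *hdr)
    {
            if (chainlock_nonblock(hdr->key) == 0) {
                    return LOCK_OK;       /* caller may process now */
            }
            if (requeue_packet(hdr) != 0) {
                    return LOCK_ERROR;
            }
            return LOCK_REQUEUED;         /* caller must return */
    }

    int main(void)
    {
            struct pkt p = { .key = 42 };
            return ltdb_lock_requeue(&p) == LOCK_ERROR;
    }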
CTDB_FLAG_TORTURE, which forces some race conditions to be much more
likely. For example, a 20% chance of not getting the lock on the
first try in the daemon (see the sketch after this entry)
- abstracted the ctdb_ltdb_lock_fetch_requeue() code to allow it to
work with both inter-node packets and client->daemon packets
- fixed a bug left over in ctdb_call from when the client updated the
header on a call reply
- removed CTDB_FLAG_CONNECT_WAIT flag (not needed any more)
(This used to be ctdb commit 7559dcd184666c3853127e3c8f5baef4fea327c4)
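The torture behaviour can be sketched as a wrapper around the non-blocking lock attempt. This is a guess at the mechanism; only the flag name and the 20% figure come from the entry above:

    #include <stdio.h>
    #include <stdlib.h>

    #define CTDB_FLAG_TORTURE 0x01    /* flag name from this entry */

    static int real_trylock(void) { return 0; /* stand-in: succeeds */ }

    /* under the torture flag, randomly report the non-blocking lock
     * attempt as failed about 20% of the time, so the retry/requeue
     * paths are exercised constantly during testing */
    static int torture_trylock(int flags)
    {
            if ((flags & CTDB_FLAG_TORTURE) && (random() % 5) == 0) {
                    return -1;            /* simulated contention */
            }
            return real_trylock();
    }

    int main(void)
    {
            int failures = 0, i;
            for (i = 0; i < 100000; i++) {
                    if (torture_trylock(CTDB_FLAG_TORTURE) != 0) {
                            failures++;
                    }
            }
            printf("simulated failure rate: %.1f%%\n", failures / 1000.0);
            return 0;
    }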
version. The client version is different enough that this is
worthwhile
- enable local shortcut for client version of ctdb_call
- add idr_find_type(), with better error reporting in case of type
  mismatch (sketched after this entry)
(This used to be ctdb commit 2388094c5f7b6ce003e86b05620c06217d63b49c)
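A sketch of what the type-checked lookup could look like; the header layout and the error reporting are assumptions, and idr_find() is stubbed out:

    #include <stdio.h>

    /* stand-ins for the idtree library and the ctdb request header */
    struct idr_context;
    static void *idr_find(struct idr_context *idp, int id)
    {
            (void)idp; (void)id;
            return NULL;                  /* stub for the sketch */
    }

    struct ctdb_req_header {
            unsigned int operation;
    };

    /* look the id up as usual, then verify the stored request has the
     * operation type the caller expects, reporting a clear error on
     * mismatch instead of handing back a wrongly-typed pointer */
    static void *idr_find_type(struct idr_context *idp, int id,
                               unsigned int type)
    {
            struct ctdb_req_header *hdr = idr_find(idp, id);
            if (hdr == NULL) {
                    fprintf(stderr,
                            "idr_find_type: no entry for id %d\n", id);
                    return NULL;
            }
            if (hdr->operation != type) {
                    fprintf(stderr,
                            "idr_find_type: id %d is type %u, expected %u\n",
                            id, hdr->operation, type);
                    return NULL;
            }
            return hdr;
    }

    int main(void)
    {
            /* with the stub above, this reports "no entry for id 7" */
            idr_find_type(NULL, 7, 42);
            return 0;
    }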
us to put memory directly in the right context, avoiding quite a few
talloc_steal calls, and simplifying the code
- make the fetch lock code in the daemon fully async
(This used to be ctdb commit d98b4b4fcadad614861c0d44a3854d97b01d0f74)
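The talloc point is simply that allocating on the right parent from the start replaces a later talloc_steal(); a generic illustration, not the actual ctdb code:

    #include <talloc.h>

    struct call_state {
            int reqid;
    };

    /* instead of allocating on a temporary context and moving the
     * memory later:
     *
     *     state = talloc(tmp_ctx, struct call_state);
     *     ...
     *     talloc_steal(client, state);
     *
     * allocate directly on the context that should own the result, so
     * it is freed exactly when that owner is freed */
    static struct call_state *new_call_state(TALLOC_CTX *client)
    {
            return talloc_zero(client, struct call_state);
    }

    int main(void)
    {
            TALLOC_CTX *client = talloc_new(NULL);
            struct call_state *s = new_call_state(client);
            s->reqid = 1;
            talloc_free(client);          /* frees s as well */
            return 0;
    }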
remove the parameter from the ctdbd script
remove the store_unlock from ctdbd_test, since there is, and will be, no PDU for this
CTDB_REPLY_FETCH_LOCK no longer returns the data for the record, since the client is assumed to read it directly from the local tdb. Remove some variables that no longer exist.
(This used to be ctdb commit 77c43479e1932b27387fc2f85a3cb6538633b481)
a client held the chainlock, and the daemon received a dmaster reply
at the same time. The daemon would not be able to process the dmaster
reply, due to the lock, but the fetch lock could not make progress
until the dmaster reply was processed.
The solution is to not hold the lock in the client while talking to
the daemon. The client has to retry the lock after the record has
migrated. This means that forward progress is not guaranteed. We'll
have to see if that matters in practice.
(This used to be ctdb commit 737e5a1253cb048222c595a474aff71c99fc554f)
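A self-contained sketch of the client-side retry loop described above (all helpers are stand-ins for the tdb chain lock, the local dmaster check, and the daemon round-trip):

    #include <stdbool.h>
    #include <stdio.h>

    static void chain_lock(void)   {}
    static void chain_unlock(void) {}
    static bool record_is_local(void)
    {
            static int attempts;
            return ++attempts > 1;        /* local on the second try */
    }
    static void ask_daemon_to_migrate(void)
    {
            puts("asking daemon to migrate the record");
    }

    /* never talk to the daemon while holding the chain lock: drop it,
     * request the migration, then retry. Forward progress is not
     * guaranteed, since the record can migrate away again before the
     * lock is retaken. */
    static void fetch_lock(void)
    {
            for (;;) {
                    chain_lock();
                    if (record_is_local()) {
                            return;       /* lock held, record local */
                    }
                    chain_unlock();
                    ask_daemon_to_migrate();
            }
    }

    int main(void)
    {
            fetch_lock();
            chain_unlock();
            return 0;
    }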
It ensures that when the client context goes away (such as when the client exits),
any timed events and partially completed requests from that client also go away
(This used to be ctdb commit 45a45a0a44d4da9b45719aac72b0ae4bd4c74462)
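talloc destructors are the natural way to express this; a generic illustration rather than the actual ctdb client structure:

    #include <stdio.h>
    #include <talloc.h>

    struct client {
            int fd;
    };

    /* runs automatically when the client context is freed, so pending
     * state attached to the client cannot outlive it */
    static int client_destructor(struct client *c)
    {
            printf("client fd %d going away, cancelling its requests\n",
                   c->fd);
            return 0;                     /* allow the free to proceed */
    }

    int main(void)
    {
            struct client *c = talloc_zero(NULL, struct client);
            talloc_set_destructor(c, client_destructor);
            /* timed events and in-flight requests parented to c are
             * freed, and the destructor fires, when c is freed */
            talloc_free(c);
            return 0;
    }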