Typically, when we watch a record, we wait for a process to give up some
resource, be it an oplock, a share mode or the g_lock. If everything goes well,
the blocker sends us a message. If the blocker dies hard, we also want to be
informed immediately.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Ira Cooper <ira@samba.org>
Autobuild-User(master): Volker Lendecke <vl@samba.org>
Autobuild-Date(master): Sun Mar 6 19:34:42 CET 2016 on sn-devel-144
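A minimal caller-side sketch of this pattern, assuming a tevent-style watch
API (the dbwrap_record_watch_send name and signature below follow the usual
Samba _send/_recv convention but are written here as an assumption, not
copied from the tree):

    #include <talloc.h>
    #include <tevent.h>

    struct db_record;
    struct messaging_context;

    /* assumed prototype, see note above */
    struct tevent_req *dbwrap_record_watch_send(TALLOC_CTX *mem_ctx,
                                                struct tevent_context *ev,
                                                struct db_record *rec,
                                                struct messaging_context *msg);

    static void watch_done(struct tevent_req *subreq)
    {
            /* Woken up: the record changed or the blocker died.
             * Retry the original operation from here. */
            TALLOC_FREE(subreq);
    }

    static void watch_record(struct tevent_context *ev,
                             struct messaging_context *msg,
                             struct db_record *rec)
    {
            struct tevent_req *subreq = dbwrap_record_watch_send(
                    NULL, ev, rec, msg);

            if (subreq == NULL) {
                    return;
            }
            tevent_req_set_callback(subreq, watch_done, NULL);
    }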
This is in preparation for handing flags to backends,
in particular for activating read-only record support for ctdb
databases. For a start, this does nothing but add the
parameter, and all databases use DBWRAP_FLAG_NONE.
Signed-off-by: Michael Adam <obnox@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
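As a rough illustration, a caller-side sketch of the extra argument (the
exact db_open() parameter list is an assumption here; the trailing
DBWRAP_FLAG_NONE is the point):

    #include "includes.h"
    #include "dbwrap/dbwrap.h"
    #include "dbwrap/dbwrap_open.h"

    /* A non-clustered database: nothing changes for it, so it
     * simply passes DBWRAP_FLAG_NONE for the new argument. */
    static struct db_context *open_example_db(TALLOC_CTX *mem_ctx)
    {
            return db_open(mem_ctx, "example.tdb", 0, TDB_DEFAULT,
                           O_RDWR | O_CREAT, 0644,
                           DBWRAP_LOCK_ORDER_1, DBWRAP_FLAG_NONE);
    }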
If for some reason the cleanup of dbwrap_watch_send does not work
properly, we might starve indefinitely. Make the lock routine more
robust by retrying every 5-10 seconds. g_lock_trylock will clean up
orphaned entries.
Signed-off-by: Christian Ambach <ambi@samba.org>
Autobuild-User(master): Christian Ambach <ambi@samba.org>
Autobuild-Date(master): Thu Aug 16 19:44:00 CEST 2012 on sn-devel-104
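A sketch of that kind of randomized fallback timeout using plain tevent
(names here are illustrative, not the actual g_lock code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <talloc.h>
    #include <tevent.h>

    /* Schedule a wakeup 5-10 seconds from now, so a waiter that
     * missed its wakeup message retries on its own and
     * g_lock_trylock can clean up orphaned entries. */
    static struct tevent_req *schedule_lock_retry(TALLOC_CTX *mem_ctx,
                                                  struct tevent_context *ev)
    {
            uint32_t msecs = 5000 + (rand() % 5001);    /* 5-10 seconds */

            return tevent_wakeup_send(
                    mem_ctx, ev,
                    tevent_timeval_current_ofs(msecs / 1000,
                                               (msecs % 1000) * 1000));
    }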
This simplifies the g_lock implementation. The new implementation tries to
acquire a lock. If that fails due to a lock conflict, it waits for the g_lock
record to change and, upon change, just tries again. The old logic had to cope
with pending records and an ugly hack into ctdb itself. As a bonus, we now get
a really clean async g_lock_lock_send/recv that can asynchronously wait for a
global lock. This would have been almost impossible to do without the
dbwrap_record_watch infrastructure.
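In pseudo-C, the new loop boils down to something like the following
(try_lock and wait_for_record_change are hypothetical helpers standing in
for the real g_lock and dbwrap watch calls):

    #include <stdbool.h>

    struct db_context;

    /* hypothetical helpers, see note above */
    extern bool try_lock(struct db_context *db, const char *name);
    extern bool wait_for_record_change(struct db_context *db,
                                       const char *name);

    static bool lock_with_retry(struct db_context *db, const char *name)
    {
            for (;;) {
                    if (try_lock(db, name)) {
                            return true;        /* got the lock */
                    }
                    /* Lock conflict: block until the g_lock record
                     * changes, then simply try again. */
                    if (!wait_for_record_change(db, name)) {
                            return false;       /* watch failed */
                    }
            }
    }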
sys_poll() is only needed if the signal pipe is set up and used, but as
no signal handler ever writes to the pipe, this can all be removed.
Signal-based events are now handled via tevent.
Andrew Bartlett
Signed-off-by: Jeremy Allison <jra@samba.org>
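For reference, a small standalone example of signal handling through tevent
(generic tevent usage, not code from the patch):

    #include <signal.h>
    #include <stdio.h>
    #include <tevent.h>

    /* The handler runs from the event loop, so no signal handler
     * ever needs to write to a self-pipe. */
    static void hup_handler(struct tevent_context *ev,
                            struct tevent_signal *se,
                            int signum, int count,
                            void *siginfo, void *private_data)
    {
            printf("got SIGHUP %d time(s)\n", count);
    }

    int main(void)
    {
            struct tevent_context *ev = tevent_context_init(NULL);

            if (tevent_add_signal(ev, ev, SIGHUP, 0,
                                  hup_handler, NULL) == NULL) {
                    return 1;
            }
            return tevent_loop_wait(ev);
    }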
This will be used to enforce a lock hierarchy between the databases. We have
seen deadlocks between locking.tdb, brlock.tdb, serverid.tdb and notify*.tdb.
These should be fixed by refusing a dbwrap_fetch_locked that does not follow a
defined lock hierarchy.
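The idea, sketched with illustrative names and levels (the real lock-order
definitions live in the dbwrap code, not here):

    #include <stdbool.h>

    /* Illustrative levels only, e.g. locking.tdb before brlock.tdb. */
    enum lock_order {
            LOCK_ORDER_NONE = 0,
            LOCK_ORDER_1,
            LOCK_ORDER_2,
            LOCK_ORDER_3
    };

    /* dbwrap_fetch_locked would refuse to lock a database whose
     * order is not strictly higher than everything already held. */
    static bool lock_order_allowed(enum lock_order highest_held,
                                   enum lock_order wanted)
    {
            return wanted > highest_held;
    }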
Also start a new folder lib/dbwrap/ where dbwrap_open.c is stored,
make the fallback implementation functions non-static and create a
dbwrap_private.h header file that contains their prototypes.
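A sketch of what such a private prototype header could look like (the
function names below are placeholders, not the actual symbols in
lib/dbwrap/dbwrap_private.h):

    #ifndef __DBWRAP_PRIVATE_H__
    #define __DBWRAP_PRIVATE_H__

    struct db_record;

    /* formerly static fallback implementations, now shared
     * between the dbwrap backends (placeholder names) */
    int dbwrap_fallback_store_example(struct db_record *rec);
    int dbwrap_fallback_delete_example(struct db_record *rec);

    #endif /* __DBWRAP_PRIVATE_H__ */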
TDBs are not executable, so do not create the file with
the execute bit set.
Autobuild-User: Christian Ambach <ambi@samba.org>
Autobuild-Date: Mon Jun 27 17:09:12 CEST 2011 on sn-devel-104
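For illustration, the kind of call this amounts to (plain libtdb usage,
not the patched Samba code itself):

    #include <fcntl.h>
    #include <tdb.h>

    /* Create the tdb with a regular data-file mode (0644),
     * so the execute bits are never set on the file. */
    static struct tdb_context *open_example_tdb(const char *path)
    {
            return tdb_open(path, 0, TDB_DEFAULT,
                            O_RDWR | O_CREAT, 0644);
    }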
Without clustering we don't have an fd to listen on, and sys_poll
needs at least one element of space.
Autobuild-User: Volker Lendecke <vlendec@samba.org>
Autobuild-Date: Wed Mar 30 18:36:50 CEST 2011 on sn-devel-104
TDB_CLEAR_IF_FIRST tdb's. For tdb's like gencache, where we open
without CLEAR_IF_FIRST and then with CLEAR_IF_FIRST if corrupt,
this is still safe to use, as when opening an existing tdb the new
hash will be ignored - it's only used when creating a new tdb, not
when opening an old one.
Jeremy.
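Illustrative libtdb usage for the gencache case described above (treating
TDB_INCOMPATIBLE_HASH as the new hash flag is an assumption here; the flag
is only honoured when the tdb is created):

    #include <fcntl.h>
    #include <tdb.h>

    /* Safe even with CLEAR_IF_FIRST: the hash flag only matters when
     * the tdb is created; opening an existing tdb ignores it. */
    static struct tdb_context *open_gencache_like(const char *path)
    {
            return tdb_open(path, 0,
                            TDB_CLEAR_IF_FIRST | TDB_INCOMPATIBLE_HASH,
                            O_RDWR | O_CREAT, 0644);
    }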
This shrinks include/includes.h.gch by 7 MB and reduces build time
as follows:

ccache build w/o patch:   real 4m21.529s
ccache build with patch:  real 3m6.402s
pch build w/o patch:      real 4m26.318s
pch build with patch:     real 3m6.932s
Guenther
In g_lock_unlock we have a little race between the process_exists and
messaging_send calls: We only send to 5 waiters now, and they might all have
died between us checking their existence and sending the message. This change
makes g_lock_lock retry at least once every minute.
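A sketch of the clamping idea using plain timeval helpers (illustrative,
not the actual g_lock_lock code):

    #include <sys/time.h>
    #include <tevent.h>

    /* Never wait longer than a minute for the wakeup message, so a
     * message lost in the race above only delays us instead of
     * blocking us forever. */
    static struct timeval clamp_to_one_minute(struct timeval deadline)
    {
            struct timeval minute = tevent_timeval_current_ofs(60, 0);

            return timercmp(&deadline, &minute, <) ? deadline : minute;
    }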
Only notify the first 5 pending lock waiters. This avoids a thundering herd
problem that is really nasty in a cluster. It also makes acquiring a lock a bit
more FIFO: lock waiters are added to the end of the array.
Only check the existence of the lock owner in g_lock_parse; check the rest of
the records only once we have got the lock successfully. This reduces the load
on process_exists, which can involve a network roundtrip in the clustered case.
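Roughly, the notification side now looks like this (pid_t and
send_retry_message are hypothetical simplifications of the real
server_id/messaging_send code):

    #include <stddef.h>
    #include <sys/types.h>

    #define MAX_WAKEUPS 5

    /* hypothetical stand-in for messaging_send(MSG_DBWRAP_G_LOCK_RETRY) */
    extern void send_retry_message(pid_t waiter);

    static void wake_waiters(const pid_t *waiters, size_t num_waiters)
    {
            size_t i, n = num_waiters;

            if (n > MAX_WAKEUPS) {
                    n = MAX_WAKEUPS;    /* avoid the thundering herd */
            }
            /* Waiters are appended at the end, so the first entries
             * are the longest-waiting ones: roughly FIFO wakeup. */
            for (i = 0; i < n; i++) {
                    send_retry_message(waiters[i]);
            }
    }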
This made smbd crash in g_lock_lock() when trying to start a
transaction on a db with an already started transaction,
e.g. in a tcon_and_X where share_info.tdb was not yet
initialized but was already locked by another process, or on
write access to the winreg rpc pipe where the registry tdb
was already locked by another process.
What we really _want_ to do here by design is to react to
MSG_DBWRAP_G_LOCK_RETRY messages that are either sent
by a client doing g_lock_unlock or by ourselves when
we receive a CTDB_SRVID_SAMBA_NOTIFY or
CTDB_SRVID_RECONFIGURE message from ctdbd, i.e. when
either a client holding a lock or a complete node
has died.
Doing this properly involves calling tevent_loop_once(),
but doing this here with the main ctdbd messaging context
creates a nested event loop when g_lock_lock() is called
from the main event loop.
So as a quick fix, we act a little coarsely here: we do
a select on the ctdb connection fd and when it is readable
or we get EINTR, then we retry without actually parsing
any ctdb packets or dispatching messages. This means that
we retry more often than necessary and intended by design,
but this does no harm and it is unobtrusive. When we have
finished, the main loop will pick up all the messages and
ctdb packets. The only extra twist is that we cannot use
timed events here but have to handcode a timeout for select.
Michael
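The quick fix amounts to a loop around something like this (illustrative,
not the actual g_lock code):

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/select.h>

    /* Wait on the ctdb connection fd with a hand-coded timeout.
     * Readable or EINTR both just mean "retry the lock attempt";
     * no ctdb packets are parsed and no messages are dispatched,
     * the main loop will pick those up later. */
    static bool wait_before_retry(int ctdb_fd, int timeout_secs)
    {
            fd_set rfds;
            struct timeval tv = { .tv_sec = timeout_secs, .tv_usec = 0 };
            int ret;

            FD_ZERO(&rfds);
            FD_SET(ctdb_fd, &rfds);

            ret = select(ctdb_fd + 1, &rfds, NULL, NULL, &tv);
            if (ret == -1) {
                    return errno == EINTR;  /* EINTR: retry now */
            }
            return true;    /* readable or timeout: retry as well */
    }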
This is the basis for implementing global locks in ctdb without depending on a
shared file system. The initial goal is to make ctdb persistent transactions
deterministic without too many timeouts.