Since execve passed only NULL as environ, we lost all environment variables on
restart - thus actually running a 'different' clvmd than the one at start.
Preserving environ allows clvmd to be restarted with the same settings
(e.g. LD_LIBRARY_PATH).
Add test for second restart.
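A minimal sketch of the idea, assuming a helper like restart_self() (the name
and argument list are illustrative, not the actual clvmd code): re-exec the
daemon with the inherited environ instead of NULL.

#include <unistd.h>

extern char **environ;          /* environment inherited by the running clvmd */

/* Hypothetical helper: re-exec clvmd while keeping the current environment. */
static int restart_self(const char *clvmd_path, char *const argv[])
{
        /* Passing environ instead of NULL preserves settings such as
         * LD_LIBRARY_PATH across the restart. */
        return execve(clvmd_path, argv, environ);
}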
Add named cluster_ops to easily learn the name of the active cluster manager,
so we are able to restart the singlenode manager in testing.
Add simple test for clvmd -S (restart) and -R (refresh)
(though it needs some extensions).
Read 2 environment variables to learn the override locations for the
CLVMD and LVM binaries.
We already support LVM_BINARY in other scripts - and this way we can easily
test restart in our test-suite.
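A sketch of how such a lookup could work (LVM_BINARY is named above; the
CLVMD_BINARY name and the default paths are assumptions):

#include <stdlib.h>

/* Pick the binary path from the environment, falling back to a built-in
 * default (paths here are placeholders). */
static const char *binary_path(const char *env_name, const char *dflt)
{
        const char *path = getenv(env_name);

        return (path && *path) ? path : dflt;
}

/* Example usage (CLVMD_BINARY is an assumed name):
 *   binary_path("LVM_BINARY", "/sbin/lvm");
 *   binary_path("CLVMD_BINARY", "/usr/sbin/clvmd");
 */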
Patch fixes Clang warnings about possible access via lv_name NULL pointer.
Replaces allocation of memory (strdup) with just pointer assignment
(since execve is being called anyway).
Checks for !*lv_name only when lv_name is defined.
(As I'm not quite sure what state this really represents, a FIXME is put
around it - it looks rather suspicious.)
Add debug print of passed clvmd args.
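The guarded check mentioned above boils down to the following pattern (the
helper and its context are illustrative, not the original code):

#include <stdio.h>

/* Dereference lv_name only after checking it is not NULL - this is what
 * silences the Clang warning about a possible NULL access. */
static int is_empty_name(const char *lv_name)
{
        return lv_name && !*lv_name;
}

int main(void)
{
        printf("%d %d %d\n", is_empty_name(NULL), is_empty_name(""), is_empty_name("lv0"));
        return 0;
}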
Thanks to the CLVMD_CMD_SYNC_NAMES propagation fix, the message passing started
to work. So start sending the message before the VG is unlocked.
Also remove the implicit sync in VG unlock from clvmd, as the message is now
delivered and processed in do_command().
Also add support for this new message in external locking
and mask this event from further processing.
Instead of implicitly syncing udev operations in the clustered and
file locking code, call the synchronization directly in lock_vol() when
the operation unlocks a VG.
The problem is a missing implicit fs_unlock() in the no_locking code.
This is used with --sysinit on a read-only filesystem locking dir.
In this case vgchange -ay could exit before all udev nodes are properly
synchronised, which may cause problems when accessing such a node right after
the vgchange --sysinit command has finished.
Add test case for vgchange --sysinit.
Creating the lockspace always returns a lockspace reference (even if the
lockspace already exists) and thus increases the DLM lockspace use count.
It means that after a clvmd restart the lockspace is still in use.
(The only way to clean the environment and enable a clean cluster
shutdown is to call "dlm_tool leave clvmd" several times.)
Because only one clvmd can run at a time, we can use simpler logic:
try to open the lockspace with dlm_open_lockspace() and only if that fails
try to create a new one. This way the lockspace reference count does not
increase.
Very easily reproducible with the "clvmd -S" command.
The patch also fixes the return code when clvmd_restart fails and fixes a
double free if the debug option was specified during restart.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=612862
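A rough sketch of the open-before-create logic (error handling simplified;
the helper name is illustrative, the libdlm calls are the ones named above):

#include <libdlm.h>

/* Reuse an existing "clvmd" lockspace so its reference count does not grow
 * with every restart; create it only when it does not exist yet. */
static dlm_lshandle_t get_clvmd_lockspace(void)
{
        dlm_lshandle_t ls;

        if ((ls = dlm_open_lockspace("clvmd")))
                return ls;

        return dlm_create_lockspace("clvmd", 0600);
}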
Stop calling fs_unlock() from lv_de/activate().
Start using internal lvm fs cookie for dm_tree.
Stop directly calling dm_udev_wait() and
dm_tree_set/get_cookie() from the activation code -
they are now called through the fs_unlock() function.
Add lvm_do_fs_unlock()
Call fs_unlock() when unlocking a VG; the implicit unlock solves the
problem also for clusters - thus no extra command is required for the
clustering environment - only the lvm_do_fs_unlock() function is added
to call lvm's fs_unlock() while holding the lvm_lock mutex in clvmd.
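A sketch of that wrapper (the fs_unlock() prototype and the mutex handling
here are assumptions based on this description, not the real clvmd code):

#include <pthread.h>

void fs_unlock(void);                           /* provided by lib/activate */

static pthread_mutex_t lvm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Serialize the udev synchronisation with other LVM work inside clvmd. */
void lvm_do_fs_unlock(void)
{
        pthread_mutex_lock(&lvm_lock);
        fs_unlock();
        pthread_mutex_unlock(&lvm_lock);
}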
Add fs_unlock() also to set_lv() so the command waits until devices
are ready for regular open (i.e. wiping their beginning).
Move fs_unlock() prototype to activation.h to keep fs.h private
in lib/activate dir and not expose other functions from this header.
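For illustration, the cookie-based synchronisation can be pictured roughly
like this (only dm_udev_wait() is a real libdevmapper call here; the stored
cookie and the body of fs_unlock() are assumptions):

#include <stdint.h>
#include <libdevmapper.h>

static uint32_t _fs_cookie;   /* filled in via dm_tree_set_cookie() during activation */

/* Wait once for all outstanding udev operations instead of waiting after
 * every single activation call. */
static void fs_unlock(void)
{
        if (_fs_cookie) {
                dm_udev_wait(_fs_cookie);
                _fs_cookie = 0;
        }
}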
As the ternary operator has lower precedence than the addition, this check
was not doing what seemed to be expected.
So enclose the test in parentheses and check *buf for NULL.
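A self-contained illustration of the precedence pitfall (variable names are
generic, not the original ones):

#include <stdio.h>

int main(void)
{
        int have_extra = 0;
        int base = 10;

        /* Intended: base plus one when have_extra is set. */
        int wrong = base + have_extra ? 1 : 0;   /* parsed as (base + have_extra) ? 1 : 0  -> 1  */
        int right = base + (have_extra ? 1 : 0); /* parentheses restore the intent         -> 10 */

        printf("wrong=%d right=%d\n", wrong, right);
        return 0;
}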
The signalling code (pthread_cond_signal/pthread_cond_wait) in the
pre_and_post_thread was using the wait mutex (see man pthread_cond_wait)
incorrectly, and this could cause clvmd to deadlock when timing was
right. Detailed explanation of the problem follows.
There is a single mutex around (L for Lock, U for Unlock), a signal (S) and a
wait (W). C for pthread_create. Time flows from left to right, each arrow is a
thread.
So first the "naive" scenario, with no mutex (PPT = pre_and_post_thread, MCT =
main clvmd thread; well actually the thread that does read_from_local_sock). I
will also use X, for a moment when MCT actually waits for something to happen
that PPT was supposed to do.
MCT -----C ------S--X-----S----X----------------------S------XXXXXXXXX
| everything OK up to this --> <-- point...
PPT -----WWW-----WWWW------------------------------WWWWWWWWWWWWW
Ok, so pthread API actually does not let you use W/S like that. It goes out of
its way to tell you that you need a mutex to protect the W so that the above
cannot happen. *But* if you are creative and just lock around the W's and S's,
this happens:
MCT ----C-----LSU----X-----LSU----X------------LSU------XXXXXXX
|
PPT ---LWWWU-------LWWWWU-----------------------LWWWWWWWWW
Ooops. Nothing changed (the above is what actually was done by clvmd before
this patch). So let's do it differently, holding L locked *all* the time in
PPT, unless we are actually in W (this is something that the pthread API does
itself, see the man page).
MCT ----C-----LSU------X---LSU---X-----LLLLLLLSU----X----
| (and they live happily ever after)
PPT L---WWWWW---------WWWW----------------W----------
So W actually ensures that L is unlocked *atomically* together with entering
the wait. That means that unless PPT is actually waiting, it cannot be
signalled by MCT. So if MCT happens to signal it too soon (it wasn't waiting
yet), it (MCT) will be blocked on the mutex (L), until PPT is actually ready to
do something.
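In code, the pattern the fix adopts looks roughly like this (the worker body
and variable names are illustrative; the point is that the waiter holds the
mutex except inside pthread_cond_wait(), which releases it atomically):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int work_ready;

/* PPT side: keep the mutex locked, drop it only inside the wait. */
static void *pre_and_post_thread(void *arg)
{
        pthread_mutex_lock(&lock);
        for (;;) {
                while (!work_ready)
                        pthread_cond_wait(&cond, &lock); /* unlocks 'lock' atomically */
                work_ready = 0;
                /* ... do the pre/post work here, still under 'lock' ... */
        }
        return arg;                                      /* not reached in this sketch */
}

/* MCT side: take the mutex before signalling, so the signal can never be
 * issued while the worker is outside the wait. */
static void kick_worker(void)
{
        pthread_mutex_lock(&lock);
        work_ready = 1;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
}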
Code is mixing up internal DLM and LVM definitions of lock
modes and flags.
OpenAIS and singlenode locking do not depend on DLM, but the
code currently cannot be compiled without libdlm.h!
The LCK_* flags are an LVM abstraction, used throughout the code.
Only the low-level backends (clvmd-cman etc.) should use DLM definitions,
and this code should do all the needed conversions.
Because there are two DLM flags used in generic code
(NOQUEUE, CONVERT), we define them in a similar way to the lock modes.
(So all needed binary-compatible flags are in one place in locking.h.)
(Further code cleaning is still needed, though :-)
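As a rough sketch of the backend-side conversion this describes (the LCK_*
values below are placeholders and the exact mode mapping is illustrative;
only the LKM_* names come from libdlm):

#include <libdlm.h>

/* Placeholder stand-ins for LVM's abstract lock modes (real values live
 * in locking.h). */
enum { LCK_NULL, LCK_READ, LCK_WRITE, LCK_EXCL };

/* Only the low-level clvmd backend translates LVM modes to DLM modes. */
static int lvm_mode_to_dlm(int lck_mode)
{
        switch (lck_mode) {
        case LCK_NULL:  return LKM_NLMODE;
        case LCK_READ:  return LKM_CRMODE;   /* concurrent read */
        case LCK_WRITE: return LKM_PWMODE;   /* protected write */
        case LCK_EXCL:  return LKM_EXMODE;   /* exclusive */
        default:        return -1;
        }
}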
- allocate the environment dynamically (still missing some limit?) - see the sketch after this list
- try to recover if destroy failed (do not destroy lvm here) and free memory
- check strdup() return codes
- report failure to log
- do not print NULL in exclusive lock loop
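A sketch of the dynamic environment copy with strdup() checking (names and
structure are illustrative, not the actual clvmd code):

#include <stdlib.h>
#include <string.h>

extern char **environ;

/* Duplicate the current environment into a freshly allocated,
 * NULL-terminated array, checking every strdup(). */
static char **copy_environ(void)
{
        size_t i, n;
        char **copy;

        for (n = 0; environ[n]; n++)
                ;
        if (!(copy = calloc(n + 1, sizeof(*copy))))
                return NULL;

        for (i = 0; i < n; i++) {
                if (!(copy[i] = strdup(environ[i]))) {
                        /* On failure free what was already copied and
                         * let the caller report the error. */
                        while (i--)
                                free(copy[i]);
                        free(copy);
                        return NULL;
                }
        }
        return copy;
}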
The lock mode is always the int masked by LCK_TYPE_MASK.
The patch also removes the unnecessary masking of the lock flag on the sender
side; if masking is needed, it is already done on the client side.
- do_command and lock_vg expect flags (no change here)
Bug fixes:
- lock_vg should check for NONBLOCK on lock_cmd; flags have this bit masked out
- do_pre/post_command: the flag is not masked at all, which causes the code
  inside to never run! (see the following patches - these functions
  expect the plain command without flags)
Currently, when the code needs to ensure that a volume is not
active on a remote node, it has to try to exclusively
activate the volume.
The patch adds a simple clvmd command which queries all nodes
for the lock on a given resource.
The lock type is returned in the reply as text.
(But code currently uses CR and EX modes only.)
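An illustrative helper for that textual reply (name and scope are assumptions,
not the actual clvmd code; the LKM_* constants are libdlm's):

#include <libdlm.h>

/* Map a held DLM lock mode to the text sent back in the reply. */
const char *lock_mode_text(int mode)
{
        switch (mode) {
        case LKM_CRMODE: return "CR";
        case LKM_EXMODE: return "EX";
        default:         return "??";
        }
}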
Run the metadata backup on remote nodes in the
same place as on the local node - when calling backup().
Introduce backup_locally(), which calls only
the local backup if needed.
Remote backup is now triggered by the LCK_VG_BACKUP flag
combination (special VG lock).
This lock type will call check_current_backup()
(including the backup_locally() call) and update
metadata on all nodes.
(The patch fixes the non-functional remote backup;
the current call during VG lock never triggers.)