Simply running concurrent copies of 'pvscan | true' is enough to make
clvmd freeze: pvscan exits on the EPIPE without first releasing the
global lock.
clvmd notices the client disappear, but the cleanup code that
releases the locks is only triggered from within some processing
after the next select() returns, and because that processing can
'break' after doing just one action, the locks are sometimes never
released for other clients to take.
Move the cleanup code before the select.
Check all fds after select().
Improve some debug messages and warn in the unlikely event that
select() capacity could soon be exceeded.
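A minimal sketch of one iteration of the reordered loop (the names
and list handling are illustrative, not clvmd's real code): zombie
cleanup runs before every select(), and every ready fd is serviced
after it returns, so a vanished client cannot keep the global lock
pinned until some later round.

#include <stdio.h>
#include <sys/select.h>

struct client {
        int fd;
        int dead;                    /* EOF/EPIPE seen on this client */
        struct client *next;
};

extern struct client *clients;               /* assumed client list        */
extern void release_locks(struct client *c); /* assumed lock release       */
extern int handle_request(struct client *c); /* <0 means client is gone    */

void main_loop_iteration(void)
{
        fd_set in;
        int highest = 0;
        struct client *c;

        /*
         * Clean up dead clients *before* blocking in select(), so their
         * locks are released even if no other fd ever becomes ready.
         * (Unlinking the dead entry from the list is elided here.)
         */
        for (c = clients; c; c = c->next)
                if (c->dead)
                        release_locks(c);

        FD_ZERO(&in);
        for (c = clients; c; c = c->next) {
                if (c->dead)
                        continue;
                FD_SET(c->fd, &in);
                if (c->fd > highest)
                        highest = c->fd;
        }

        if (highest >= FD_SETSIZE - 32)
                fprintf(stderr, "WARNING: approaching select() capacity\n");

        if (select(highest + 1, &in, NULL, NULL, NULL) < 0)
                return;

        /* Service *all* ready fds; never 'break' after a single one. */
        for (c = clients; c; c = c->next) {
                if (!FD_ISSET(c->fd, &in))
                        continue;
                if (handle_request(c) < 0)
                        c->dead = 1;
        }
}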
- Bring the page closer to the recommendations of man-pages(7) where possible
- Add crossrefs
- Sort options and crossrefs
- Fix the default timeout (60 seconds) of -t
- Document -I[auto]
Signed-off-by: Stéphane Aulery <saulery@free.fr>
LVM2.2.02.112/daemons/clvmd/clvmd.c:1131: warning[arrayIndexOutOfBoundsCond]: Array 'row[8]' accessed at index 8, which is out of bounds. Otherwise condition 'j==8' is redundant.
This code:
int i, j = 0;
...
for (i = 0; i < len; ++i) {
        ...
        if ((j == 8) || (i + 1 == len)) {
                for (; j < 8; ++j) {
                        ...
                }
                ...
                j = 0;
        }
}
Indeed: j is 0 at the beginning, the inner loop only iterates while
j < 8, and j is always zeroed again at the end of the outer loop body,
so "j" never reaches the value 8 and the j == 8 condition is redundant.
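A self-contained toy with the same shape as the snippet above (the
elided parts replaced by a print), just to make the reasoning
concrete: the only place j grows is the inner padding loop, and it is
reset to 0 before the next outer iteration ever tests it.

#include <stdio.h>

int main(void)
{
        const int len = 20;
        int i, j = 0;

        for (i = 0; i < len; ++i) {
                /* j is still 0 every time this test runs */
                if ((j == 8) || (i + 1 == len)) {
                        for (; j < 8; ++j)
                                printf("pad, j = %d\n", j);
                        j = 0;  /* reset before the next outer iteration */
                }
        }
        return 0;
}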
Before adding a new reply to the list, check whether the reply
thread has already finished. In that case discard the message
(which would otherwise be leaked).
Use the mutex when accessing localsock values, so num_replies is
only checked while the thread has not yet finished.
Check threadid before taking the mutex
(though this check is probably not really needed).
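A minimal sketch of the intended pattern, using illustrative
stand-ins for clvmd's structures (a localsock-like struct with its
own mutex and a 'finished' flag): the reply is only linked in while
the mutex is held and the thread has not finished; otherwise it is
freed on the spot instead of leaking.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct reply {
        struct reply *next;
        char *data;
};

struct localsock_state {
        pthread_mutex_t mutex;
        int finished;             /* reply thread already finished?  */
        int num_replies;
        struct reply *replies;
};

/* Returns 0 if the reply was queued, -1 if it had to be discarded. */
int add_reply(struct localsock_state *ls, const char *data)
{
        struct reply *r = malloc(sizeof *r);

        if (!r)
                return -1;
        if (!(r->data = strdup(data))) {
                free(r);
                return -1;
        }
        r->next = NULL;

        pthread_mutex_lock(&ls->mutex);
        if (ls->finished) {
                /* The thread is gone: free the message instead of
                 * leaking it into a list nobody will ever drain. */
                pthread_mutex_unlock(&ls->mutex);
                free(r->data);
                free(r);
                return -1;
        }
        r->next = ls->replies;
        ls->replies = r;
        ls->num_replies++;        /* only touched while not finished */
        pthread_mutex_unlock(&ls->mutex);

        return 0;
}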
The added complexity of an extra reply mutex is not worth the
trouble. The only place that might slightly benefit from such a
mutex is the timeout path, and since that is rather an error case,
convert it to use localsock.mutex and keep it simple.
Move the creation and destruction of the pthread mutex and
condition variable to the correct place: right after the client
memory is allocated and right before it is going to be released.
In the original place they were in a race with the lvm thread,
which could still be unlocking the mutex while it was already
being destroyed.
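A sketch of the corrected lifetime, with illustrative names: the
mutex and condition variable are initialized immediately after the
client memory is allocated, before any other thread can see it, and
destroyed only when the memory is about to be released.

#include <pthread.h>
#include <stdlib.h>

struct local_client {             /* illustrative, not clvmd's layout */
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        /* ... other per-client state ... */
};

struct local_client *alloc_client(void)
{
        struct local_client *c = calloc(1, sizeof *c);

        if (!c)
                return NULL;
        /* Initialize the sync objects right after the memory exists,
         * before any other thread can see this client. */
        pthread_mutex_init(&c->mutex, NULL);
        pthread_cond_init(&c->cond, NULL);

        return c;
}

void free_client(struct local_client *c)
{
        if (!c)
                return;
        /* Destroy only when the memory is about to be released,
         * i.e. after every other thread has stopped using it. */
        pthread_cond_destroy(&c->cond);
        pthread_mutex_destroy(&c->mutex);
        free(c);
}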
When the TEST_MODE flag is passed around the cluster, it has been
used in a thread-unprotected way, so it may have influenced the
behaviour of other lvm commands running in parallel
(activation/deactivation/suspend/resume).
Fix it with set/query functions used only under the lvm mutex.
For hold_un/lock function calls, check the lock_flags bits directly.
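One possible shape of such set/query wrappers; the mutex and
variable names are illustrative stand-ins for clvmd's lvm thread
state, and here the wrappers take the mutex themselves.

#include <pthread.h>

static pthread_mutex_t lvm_mutex = PTHREAD_MUTEX_INITIALIZER;
static int _test_mode;

void set_test_mode(int on)
{
        pthread_mutex_lock(&lvm_mutex);
        _test_mode = on;
        pthread_mutex_unlock(&lvm_mutex);
}

int get_test_mode(void)
{
        int on;

        pthread_mutex_lock(&lvm_mutex);
        on = _test_mode;
        pthread_mutex_unlock(&lvm_mutex);

        return on;
}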
Operate on lvm_thread_exit only while holding lvm_thread_mutex.
Don't leave unfinished work in the lvm thread queue: always finish
all queued tasks before exit, so no cmd struct is left in the list.
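A sketch of such a worker loop under the stated rules (illustrative
types, a hypothetical process_job()): lvm_thread_exit is only read
while lvm_thread_mutex is held, and the queue is drained completely
before the thread returns.

#include <pthread.h>
#include <stddef.h>

struct job {                      /* illustrative queued cmd struct */
        struct job *next;
};

static pthread_mutex_t lvm_thread_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t lvm_thread_cond = PTHREAD_COND_INITIALIZER;
static struct job *job_head;
static int lvm_thread_exit;

extern void process_job(struct job *j);  /* assumed work function */

void *lvm_thread_sketch(void *arg)
{
        (void) arg;

        pthread_mutex_lock(&lvm_thread_mutex);
        for (;;) {
                /* Sleep until there is work or we are asked to exit;
                 * the exit flag is only read under the mutex. */
                while (!job_head && !lvm_thread_exit)
                        pthread_cond_wait(&lvm_thread_cond,
                                          &lvm_thread_mutex);

                /* Drain everything that is queued, even when exiting,
                 * so no cmd struct is left in the list. */
                while (job_head) {
                        struct job *j = job_head;

                        job_head = j->next;
                        pthread_mutex_unlock(&lvm_thread_mutex);
                        process_job(j);
                        pthread_mutex_lock(&lvm_thread_mutex);
                }

                if (lvm_thread_exit)
                        break;
        }
        pthread_mutex_unlock(&lvm_thread_mutex);

        return NULL;
}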
(in-release fix)
We have to close the cluster in some predictable way, otherwise we
may access released memory from different threads.
So move the closedown to the point where we know all threads are
closed. New messages from the cluster are discarded.
When multiple threads act on the same 'quit' variable, the order
of exit becomes unpredictable. So let main_loop() finish first and
then clean up all queued lvm jobs.
Do not add any new work when lvm_thread_exit is set.
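A sketch of the resulting shutdown order; every wrapper name below
is hypothetical and merely stands in for the real clvmd calls.

extern void main_loop(void);               /* select() loop, returns on 'quit'  */
extern void request_lvm_thread_exit(void); /* sets lvm_thread_exit under mutex  */
extern void wake_lvm_thread(void);         /* signals the worker's condition    */
extern void join_lvm_thread(void);         /* waits for the worker to drain/exit */
extern void cluster_closedown(void);       /* must run after all threads stop   */

void shutdown_sketch(void)
{
        main_loop();                  /* 1. let the main loop finish first  */
        request_lvm_thread_exit();    /* 2. no new work may be queued now   */
        wake_lvm_thread();            /* 3. worker drains its queue, exits  */
        join_lvm_thread();            /* 4. no thread touches cluster state */
        cluster_closedown();          /* 5. safe: nobody uses freed memory  */
}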
Properly clean the 'client' structure only for the LOCAL_SOCK type.
(Fixes bug from commit 460c19df62)
(in-release fix)
Also clean up the associated pthreads by using the cleanup_zombie()
function. Since this function may change the list, always restart
scanning from the list head.
Note: a couple of the following patches are necessary to make this
work properly.
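A sketch of the restart-from-head scan (illustrative list type, a
cleanup_zombie()-like helper assumed to possibly unlink arbitrary
entries): after each cleanup the walk starts over from the head
instead of following a next pointer that may now dangle.

struct local_client {             /* illustrative */
        struct local_client *next;
        int is_zombie;
};

extern struct local_client *client_head;
/* Assumed helper: may unlink *any* entries, not just 'c'. */
extern void cleanup_zombie(struct local_client *c);

void reap_zombies(void)
{
        struct local_client *c;

restart:
        for (c = client_head; c; c = c->next) {
                if (!c->is_zombie)
                        continue;
                cleanup_zombie(c);
                /* The list may have changed under us, so a saved
                 * 'next' pointer could now dangle; rescan from the
                 * list head instead. */
                goto restart;
        }
}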
This patch releases private resources allocated at startup.
It needs the previous dm_zalloc patch to ensure an unset private
pointer is NULL.
TODO: check on a real cluster.
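A sketch of why the zero-filled allocation matters, using calloc()
as a stand-in for dm_zalloc() (both return zero-filled memory); the
structures are illustrative. With the private pointer guaranteed to
start as NULL, the cleanup path can test it safely and never frees
uninitialized garbage.

#include <stdlib.h>

struct client_private {           /* illustrative private block */
        char *buf;
};

struct local_client {             /* illustrative client        */
        void *private;
};

struct local_client *new_client(void)
{
        /* calloc (like dm_zalloc) zero-fills, so 'private' stays
         * NULL until something is really attached to it. */
        return calloc(1, sizeof(struct local_client));
}

void destroy_client(struct local_client *c)
{
        struct client_private *p;

        if (!c)
                return;
        p = c->private;
        if (p) {                  /* NULL simply means "never set" */
                free(p->buf);
                free(p);
        }
        free(c);
}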
This fixes a bug in commit 19baf842 where verify_message was
rejecting the CLVMD_FLAG_REMOTE flag. It was missed because the
patch was ported from an lvm version in which that flag did not
exist.
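One plausible shape of such a check, purely illustrative (the flag
values and the helper below are hypothetical, not clvmd's real
definitions): verify_message-style code validates header flags
against a mask of known flags, so the fix amounts to including
CLVMD_FLAG_REMOTE in that mask.

#include <stdint.h>

/* Flag values below are illustrative, not clvmd's real definitions. */
#define CLVMD_FLAG_BLOCK   0x01
#define CLVMD_FLAG_LOCAL   0x02
#define CLVMD_FLAG_REMOTE  0x04

/* Reject any message carrying a flag bit we do not know about. */
int verify_flags_sketch(uint32_t flags)
{
        const uint32_t known = CLVMD_FLAG_BLOCK |
                               CLVMD_FLAG_LOCAL |
                               CLVMD_FLAG_REMOTE;  /* previously missing */

        return (flags & ~known) ? -1 : 0;
}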