LCK_CACHE is defined as 0x100, so it cannot be passed through an
unsigned char parameter - remove it from the sprintf code.
If LCK_CLUSTER should be printed here, a lot of code needs to be
reworked - so add a FIXME comment instead.
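A minimal sketch of the truncation (the helper and its unsigned char parameter
are illustrative, not the actual clvmd code):

#include <stdio.h>

#define LCK_CACHE 0x100         /* does not fit into 8 bits */

/* illustrative helper: lock flags squeezed through an unsigned char */
static void print_lock(unsigned char cmd)
{
        printf("cmd = 0x%x\n", cmd);    /* prints 0x0 - the flag is gone */
}

int main(void)
{
        print_lock(LCK_CACHE);          /* 0x100 is silently truncated to 0 */
        return 0;
}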
The management threads (main_loop, the socket thread) could close a single fd
twice in a row sometimes. At least one other thread can be running at the same
time as the threads doing the double close. That one running thread also
happens to do some IO (namely, open /proc/devices, read from it, close it). If
there was enough "demand" for the local socket, this could happen:
- a connection to clvmd is about to finish, let's say the fd is 13 (it often
happens to be in my test script, don't ask why)
- the local_sock thread calls close(13)
- the lvm thread calls open("/proc/devices"...) and gets 13
- the main_loop thread calls close(13) [OOPS!]
- new connection arrives, and is accept'd by a (new) local_sock thread
- the accept gives an fd of 13 (since it's the lowest free fd at this point)
- the lvm thread gets around to read from its /proc/devices handle... 13,
  again
- the lvm thread hangs forever trying to read from the socket instead of
/proc/devices
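A self-contained sketch of the descriptor-reuse hazard described above
(single-threaded for clarity; /dev/null stands in for the client socket):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[64];
        int sock_fd = open("/dev/null", O_RDONLY);      /* the "client socket" */

        close(sock_fd);                 /* first close (local_sock thread)      */

        int devs_fd = open("/proc/devices", O_RDONLY);
        /* the kernel hands out the lowest free fd - the number just closed     */

        close(sock_fd);                 /* second close (main_loop thread):     */
                                        /* this actually closes /proc/devices   */

        if (read(devs_fd, buf, sizeof(buf)) < 0)
                perror("read");         /* EBADF here; in clvmd the number is   */
                                        /* reused by a new socket instead, so   */
                                        /* the lvm thread hangs reading from it */
        return 0;
}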
Signed-off-by: Petr Rockai <prockai@redhat.com>
Reviewed-by: Milan Broz <mbroz@redhat.com>
Function pull_state() checks for a NULL 'buf', but the return for this error
path was missing. The cmirror code never calls this function with a NULL 'buf',
so this fix has no effect on the current code base, but it makes clang happier.
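A sketch of the shape of the fix (the signature is simplified; the real
pull_state() takes more arguments, and LOG_ERROR is the daemon's logging macro):

#include <errno.h>

static int pull_state(const char *which, char *buf)
{
        if (!buf) {
                LOG_ERROR("pull_state: buf is NULL");
                return -EINVAL; /* the missing return: without it, execution
                                   fell through and dereferenced buf below */
        }
        /* ... copy the requested checkpoint section into buf ... */
        return 0;
}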
It's a fairly new feature that is not supported by older compilers.
So until better macros are introduced into the LVM code, hotfix the current
compilation problems and compile this code only for compilers that define __clang__.
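For illustration, the guard looks like this (the pragma is only an example of a
clang-specific construct, not necessarily the feature this commit deals with):

/* Build the newer construct only when the compiler defines __clang__;
 * older compilers would reject it outright. */
#ifdef __clang__
#pragma clang diagnostic ignored "-Wcast-align"
#endif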
We cast (char*) to (uint32_t*), which changes the alignment requirements.
In our case the code has been correct, as alloca() returns a properly
aligned buffer; however, this patch makes it cleaner and more readable
and avoids generating the warning.
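One common way to keep such an access warning-free without the cast is memcpy
(illustrative only, not necessarily the exact change made by this patch):

#include <alloca.h>
#include <stdint.h>
#include <string.h>

static uint32_t first_word(size_t len)
{
        char *buf = alloca(len);        /* alloca() returns aligned memory anyway */
        uint32_t value;

        memset(buf, 0, len);

        /* old: value = *(uint32_t *) buf;  - triggers -Wcast-align */
        memcpy(&value, buf, sizeof(value)); /* no alignment assumption, no warning */
        return value;
}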
The signalling code (pthread_cond_signal/pthread_cond_wait) in the
pre_and_post_thread was using the wait mutex (see man pthread_cond_wait)
incorrectly, and this could cause clvmd to deadlock when timing was
right. Detailed explanation of the problem follows.
There is a single mutex around (L for Lock, U for Unlock), a signal (S) and a
wait (W). C for pthread_create. Time flows from left to right, each arrow is a
thread.
So first the "naive" scenario, with no mutex (PPT = pre_and_post_thread, MCT =
main clvmd thread; well actually the thread that does read_from_local_sock). I
will also use X for moments when MCT actually waits for something to happen
that PPT was supposed to do.
MCT -----C ------S--X-----S----X----------------------S------XXXXXXXXX
| everything OK up to this --> <-- point...
PPT -----WWW-----WWWW------------------------------WWWWWWWWWWWWW
Ok, so pthread API actually does not let you use W/S like that. It goes out of
its way to tell you that you need a mutex to protect the W so that the above
cannot happen. *But* if you are creative and just lock around the W's and S's,
this happens:
MCT ----C-----LSU----X-----LSU----X------------LSU------XXXXXXX
|
PPT ---LWWWU-------LWWWWU-----------------------LWWWWWWWWW
Oops. Nothing changed (the above is what clvmd actually did before
this patch). So let's do it differently, holding L locked *all* the time in
PPT, unless we are actually in W (this is something that the pthread API does
itself, see the man page).
MCT ----C-----LSU------X---LSU---X-----LLLLLLLSU----X----
| (and they live happily ever after)
PPT L---WWWWW---------WWWW----------------W----------
So W actually ensures that L is unlocked *atomically* together with entering
the wait. That means that unless PPT is actually waiting, it cannot be
signalled by MCT. So if MCT happens to signal it too soon (it wasn't waiting
yet), it (MCT) will be blocked on the mutex (L), until PPT is actually ready to
do something.
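A condensed sketch of that discipline (the 'work_ready' predicate is
illustrative; clvmd's own state is more involved):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int work_ready;
static int running = 1;

/* PPT: holds the mutex whenever it is not blocked inside the wait */
static void *pre_and_post_thread(void *arg)
{
        (void) arg;
        pthread_mutex_lock(&lock);
        while (running) {
                while (!work_ready)
                        pthread_cond_wait(&cond, &lock); /* unlocks and waits
                                                            atomically, re-locks
                                                            before returning */
                work_ready = 0;
                /* ... do the pre/post work, still under the mutex ... */
        }
        pthread_mutex_unlock(&lock);
        return NULL;
}

/* MCT: cannot signal "too soon" - taking the mutex blocks it until PPT is
 * really waiting (or has released the lock). */
static void wake_ppt(void)
{
        pthread_mutex_lock(&lock);
        work_ready = 1;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
}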
Two new settings are added to lvm.conf in the activation section:
'snapshot_autoextend_threshold' and 'snapshot_autoextend_percent', which define
how to handle automatic snapshot extension. The former defines when the snapshot
should be extended: when its space usage exceeds this many percent. The latter
defines how much extra space should be allocated for the snapshot, in percent of
its current size.
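For illustration, a configuration like the following would grow a snapshot by
20% of its current size each time its usage crosses 70% (the numbers are only an
example):

activation {
        snapshot_autoextend_threshold = 70
        snapshot_autoextend_percent = 20
}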
Cluster mirrors can be extremely slow if created with '--nosync'.
One of the ways cluster mirrors coordinate I/O and recovery
among the different machines is by the use of the log
function 'is_remote_recovering()' which lets nodes know if
a region they wish to perform a write on is currently being
recovered on another node. If the region is being recovered,
the I/O is delayed.
The 'is_remote_recovering' routine has been optimized to
avoid the deluge of requests that would be issued to the
userspace log server by maintaining a marker of how far
the recovery has gotten. It can then immediately return
'not recovering' if the region being inquired about is
less than this mark. Additionally, if the region of
concern is greater than the mark, the function will
limit the number of transmissions to userspace by assuming
the region /is/ being recovered when skipping the
transmission. This limits the amount of processing
and updates the mark in 1/4 sec time steps.
This patch fixes a problem where 'the mark' is not being
updated because of faulty logic in the userspace log
daemon. When '--nosync' is used to create a cluster
mirror, the userspace log daemon never has a chance
to update the mark in the normal way. The fix is to set
the mark to "complete" if the mirror was created with
the --nosync flag.
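A hedged sketch of the idea (the struct and field names are hypothetical, not
the real cmirrord identifiers):

#include <stdint.h>

struct log_state {
        uint64_t region_count;          /* total regions in the mirror           */
        uint64_t recovery_mark;         /* regions known to be recovered already */
        int      no_sync;               /* mirror was created with --nosync      */
};

static void init_recovery_mark(struct log_state *lc)
{
        /* With --nosync no recovery ever runs, so the mark would stay at 0 and
         * is_remote_recovering() would keep throttling writes forever.  Declare
         * the whole mirror recovered up front instead. */
        if (lc->no_sync)
                lc->recovery_mark = lc->region_count;
}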
In all top-level vg read functions, only LCK_VG_READ/WRITE can be used.
All other vg lock definitions are low-level backend machinery.
Moreover, LCK_WRITE cannot be tested through a bitmask.
This patch fixes these mistakes.
For _recover_vg() we do not need lock_flags; it can only be one of the
two above, and we always upgrade to an LCK_VG_WRITE lock there.
(N.B. that code is racy.)
There is no functional change in the code (despite the wrong masking,
it produces the correct bits :-)
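To illustrate why a bitmask test on LCK_WRITE is wrong (the values only show
the general scheme - lock modes are a small enumeration inside LCK_TYPE_MASK,
not one-hot bits - and are not copied from locking.h):

#include <stdint.h>

#define LCK_TYPE_MASK   0x07U
#define LCK_READ        0x01U
#define LCK_WRITE       0x04U
#define LCK_EXCL        0x05U

static int is_write_lock(uint32_t flags)
{
        /* wrong: (flags & LCK_WRITE) is also non-zero for LCK_EXCL */
        return (flags & LCK_TYPE_MASK) == LCK_WRITE;
}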
I believe the lvm repair issues are the superficial symptoms of this
bug - there are worse issues that are not as clearly seen. From my
inline comments:
* If the mirror was successfully recovered, we want to always
* force every machine to write to all devices - otherwise,
* corruption will occur. Here's how:
* Node1 suffers a failure and marks a region out-of-sync
* Node2 attempts a write, gets by is_remote_recovering,
* and queries the sync status of the region - finding
* it out-of-sync.
* Node2 thinks the write should be a nosync write, but it
* hasn't suffered the drive failure that Node1 has yet.
* It then issues a generic_make_request directly to
* the primary image only - which is exactly the device
* that has suffered the failure.
* Node2 suffers a lost write - which completely bypasses the
* mirror layer because it had gone through generic_m_r.
* The file system will likely explode at this point due to
* I/O errors. If it wasn't the primary that failed, it is
* easily possible in this case to issue writes to just one
* of the remaining images - also leaving the mirror inconsistent.
*
* We let in_sync() return 1 in a cluster regardless of what is
* in the bitmap once recovery has successfully completed on a
* mirror. This ensures the mirroring code will continue to
* attempt to write to all mirror images. The worst that can
* happen for reads is that additional read attempts may be
* taken.
Ignore snapshots when performing mirror recovery beneath an origin.
Pass LCK_ORIGIN_ONLY flag around cluster.
Add suspend_lv_origin and resume_lv_origin using LCK_ORIGIN_ONLY.
This patch fixes a data corruption bug in cmirror. 'dm_bit' is only ever used
as a boolean operation within LVM, but it can return a range of values: if the
bit is set, a power of 2 is returned; if the bit is unset, 0 is returned.
'log_test_bit' (a function in the cluster mirror log daemon code) has switched
to using the dm bit operations in rhel6. There are two places in the daemon
code where 'log_test_bit' is not used merely as a boolean, but rather the
return value is used as the return value for the log functions 'is_clean' and
'in_sync' - having assumed that 'dm_bit' was returning 0 or 1 only.
One place the 'in_sync' function is utilized is in 'dm_rh_get_state' - a
function that informs the mirroring code how to treat I/O and which devices to
read/write from. 'dm_rh_get_state' was checking if the return value of
'in_sync' was 1 to determine if the region was DM_RH_CLEAN. Since 'dm_bit'
(and by extension 'log_test_bit' and 'in_sync') was returning powers of 2,
DM_RH_CLEAN was rarely being reported as it should have been. Thinking the
region was out-of-sync, the mirroring code would write only to the primary
device. When the primary device was failed, all of those writes were lost -
leaving the entire mirror corrupted.
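A sketch of the normalization that resolves this, following the description
above (the wrapper is simplified from the daemon code):

#include "libdevmapper.h"       /* dm_bitset_t, dm_bit() */

/* dm_bit() returns 0 or the tested bit itself (a power of 2), so callers that
 * compare against 1 - like dm_rh_get_state() via 'in_sync' - must only ever
 * see a strict 0/1 value. */
static int log_test_bit(dm_bitset_t bits, int bit)
{
        return dm_bit(bits, bit) ? 1 : 0;
}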
Switch dmeventd to use dm_create_lockfile and drop duplicate code.
Allow clvmd pidfile to be configurable.
Switch cmirrord and clvmd to use dm_create_lockfile.
Moreover, in the current mirror handling, when activate is called on a
removed but still suspended detached log, this counter drops below zero
and confuses the debug log.
When a mirror is being downconverted in a cluster, a series of suspends and
resumes is executed.
With the change to using UUIDs in dev_manager instead of names, the behaviour
has changed with regards to including an _mlog in the deptree of a logical
volume. In the old (pre-UUID-enabled) code, the _mlog would appear in a deptree
of any volume purely based on a name match: a linear volume foo would include
foo_mlog in its dependencies if that happened to exist. This behaviour was
fixed and the mlog is now only included for mirrors.
By a coincidence, this mlog bug had been hiding a different bug in clvmd. When
a mirror is being dismantled (and converted to a linear volume), it is first
suspended as a whole, then later resumed in parts. Nevertheless, the overall
memlock balance is maintained in this operation. The problem kicks in because,
even though the mirror log was suspended as part of the mirror, when the
dismantled mirror is resumed again, it is no longer a mirror and therefore the
mirror log stays suspended. This would not be a problem in itself, since
_delete_lv (from metadata/mirror.c) is called on it subsequently, which does an
activate/deactivate cycle and removes the LV. The activate/deactivate cycle
correctly prompts clvmd to resume the device: however, in doing this, it will
issue an unpaired resume operation (the suspend that caused the mirror log to
be suspended is paired with resuming the dismantled mirror later). We have
concluded that the path in clvmd should never affect memlock_count, since there
should never be an unmatched explicit suspend preceding this resume.
Because execve stops the command loop,
we never receive a response (only a socket close) for clvmd -S,
so waiting for a response here makes no sense.
But if the calling process (clvmd -S) exits too early, the connection
is closed from the client side, clvmd takes this as an error, and
never runs the restart code.
Ugly hack(TM).
The code included linux/kdev_t.h even though it wasn't needed. Strangely,
it seems to be causing problems on various architectures (i686) in the
function daemons/cmirrord/functions.c:disk_status_info()->sprintf.
I'm not sure why this is a problem, since none of the macros in
kdev_t.h are used in that code, but it certainly doesn't hurt to
remove an unnecessary header, and it seems to fix the problem.
The code mixes up internal DLM and LVM definitions of lock
modes and flags.
OpenAIS and singlenode locking do not depend on DLM, but the
code currently cannot be compiled without libdlm.h!
The LCK_* flags are an LVM abstraction, used throughout the code.
Only the low-level backends (clvmd-cman etc.) should use DLM definitions,
and that code should do all the needed conversions.
Because there are two DLM flags used in generic code
(NOQUEUE, CONVERT), we define them in a similar way to the lock modes.
(So all needed binary-compatible flags are in one place in locking.h.)
(Further code cleaning is still needed, though :-)
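A hedged sketch of that conversion layer: the LKM_ and LKF_ names are libdlm's,
LCK_* is the LVM abstraction, and the mode table here is illustrative rather
than the exact mapping clvmd uses:

#include <stdint.h>
#include <libdlm.h>             /* LKM_ modes, LKF_ flags (DLM only)          */
#include "locking.h"            /* LCK_ abstraction used by the generic code  */

/* Only the low-level backend (clvmd-cman etc.) translates LVM's abstract lock
 * values into DLM terms; generic code never touches the DLM constants. */
static int to_dlm_mode(uint32_t lck)
{
        switch (lck & LCK_TYPE_MASK) {
        case LCK_NULL:  return LKM_NLMODE;
        case LCK_READ:  return LKM_PRMODE;
        case LCK_WRITE: return LKM_PWMODE;
        case LCK_EXCL:  return LKM_EXMODE;
        default:        return -1;
        }
}

static uint32_t to_dlm_flags(uint32_t lck)
{
        uint32_t flags = 0;

        if (lck & LCK_NONBLOCK)
                flags |= LKF_NOQUEUE;
        if (lck & LCK_CONVERT)
                flags |= LKF_CONVERT;
        return flags;
}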
- allocate environment dynamically (still missing some limit?)
- try to recover, if destroy failed (do not destroy lvm here) and free memory
- check strdup() return codes
- report failure to log
- do not print NULL in exclusive lock loop