Handles non-clustered as well as clustered setups. For clustered, the
best we can do is try exclusive local activation. If this succeeds, we
know the LV is not active elsewhere in the cluster; otherwise, we assume
it is active elsewhere.
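A minimal sketch of this probe, assuming hypothetical stand-ins for the
LVM2 activation calls (activate_lv_excl_local() and deactivate_lv() are
illustrative names here, not the real API, which takes a command context
and LV structures): exclusive local activation can only succeed when no
other node holds the LV active, so success proves the LV is inactive
cluster-wide.

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical stubs standing in for the activation layer. */
  static bool activate_lv_excl_local(const char *lv) { (void)lv; return true; }
  static void deactivate_lv(const char *lv) { (void)lv; }

  /*
   * Returns true only when we can prove the LV is not active elsewhere
   * in the cluster: exclusive local activation succeeds only if no other
   * node holds the LV active.  On failure we conservatively assume the
   * LV is active on another node.
   */
  static bool lv_inactive_cluster_wide(const char *lv)
  {
          if (activate_lv_excl_local(lv)) {
                  deactivate_lv(lv); /* probe only - undo the activation */
                  return true;
          }
          return false;
  }

  int main(void)
  {
          printf("vg1/lv1 inactive cluster-wide: %d\n",
                 lv_inactive_cluster_wide("vg1/lv1"));
          return 0;
  }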
This bug has been around for a long time, as far as I can tell.
Without this fix, a vgsplit would unconditionally move the
'hidden/internal' snapshot LVs, resulting in corrupted metadata in the
following case:

  vg1 contains lv1 and lv1snap, both on pvset1
  vg1 also contains lv2, on pvset2

Running "vgsplit vg1 vg2 pvset2" would move the hidden "snapshot0" LV
to vg2 while leaving the origin and COW LVs in vg1. The tools detect
the corruption in vg2, but not in vg1.
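As a rough illustration of the rule the fix enforces, here is a sketch
under invented, simplified data structures (struct snapshot_lv and its
fields are made up for this example, not the real LVM2 metadata types):
a hidden snapshot LV may only follow the split when both its origin and
its COW LV are moving with it.

  #include <stdbool.h>
  #include <stdio.h>

  /* Invented, simplified stand-in for the snapshot bookkeeping. */
  struct snapshot_lv {
          bool origin_moving; /* origin LV lies entirely on the split PVs */
          bool cow_moving;    /* COW LV lies entirely on the split PVs    */
  };

  /*
   * A hidden snapshot LV may move to the destination VG only when both
   * its origin and COW move with it; otherwise the snapshot ends up
   * split across two VGs and the metadata is corrupted.
   */
  static bool snapshot_may_move(const struct snapshot_lv *s)
  {
          return s->origin_moving && s->cow_moving;
  }

  int main(void)
  {
          struct snapshot_lv lv1snap = { .origin_moving = false,
                                         .cow_moving = false };
          printf("move snapshot0: %d\n", snapshot_may_move(&lv1snap));
          return 0;
  }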
When the vg_lock_and_read() calls were added, the call for the
destination VG (vg_to) was done incorrectly. This resulted in the VG
lock not being obtained when the destination was a new VG
(vg_lock_and_read() would fail in its vg_read() clause, which then
released the lock before returning NULL), and could result in a
corrupted destination VG.
The fix is to put back the original lock_vol() and vg_read() calls for
'vg_to'. A failure of vg_read() indicates "vg does not exist", and we
key off that to determine whether the destination is a new or an
existing VG.
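A minimal sketch of the restored pattern for 'vg_to', with simplified,
hypothetical stand-ins for lock_vol(), vg_read(), and the unlock call
(the real LVM2 functions take a command context, lock flags, and more
arguments): the point is that the lock on the destination name is taken
first and kept even when vg_read() fails, and a failed vg_read() is
treated as "new VG" rather than as an error.

  #include <stddef.h>
  #include <stdio.h>

  struct volume_group { const char *name; };

  /* Hypothetical stand-ins for the LVM2 locking/metadata calls. */
  static int lock_vol(const char *vg_name) { (void)vg_name; return 1; }
  static void unlock_vg(const char *vg_name) { (void)vg_name; }
  static struct volume_group *vg_read(const char *vg_name)
  { (void)vg_name; return NULL; } /* NULL means "vg does not exist" */

  int main(void)
  {
          const char *vg_name_to = "new";

          /* Lock the destination name first and hold the lock whether
           * or not the VG exists; releasing it on a failed vg_read()
           * is the bug described above. */
          if (!lock_vol(vg_name_to)) {
                  fprintf(stderr, "Can't get lock for %s\n", vg_name_to);
                  return 1;
          }

          struct volume_group *vg_to = vg_read(vg_name_to);
          int existing_vg = (vg_to != NULL);

          /* vg_read() failure keys the "new VG" path: the VG will be
           * created under the lock we already hold. */
          printf("destination \"%s\" treated as %s VG\n", vg_name_to,
                 existing_vg ? "an existing" : "a new");

          unlock_vg(vg_name_to);
          return 0;
  }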
The first two messages below were also the result of the incorrect
vg_lock_and_read() calls:

  Volume group "new" not found
  cluster request failed: Invalid argument
  New volume group "new" successfully split from "vg"
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=438249