The warning is bogus and is only seen with certain versions of gcc.
However, using the enum does clarify the intent of the code: there are
only 3 possible md minor superblock versions.
Related compiler warning:
device/dev-md.c:53: warning: 'sb_offset' may be used uninitialized in this function
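As a sketch of the pattern (simplified; the enum names and sector
arithmetic below are illustrative stand-ins for the dev-md.c code, not
the actual source):

    #include <stdint.h>
    #include <stdio.h>

    typedef enum {
        MD_MINOR_V0,
        MD_MINOR_V1,
        MD_MINOR_V2
    } md_minor_version_t;

    /* With a plain int the compiler cannot prove every path assigns
     * sb_offset, hence the (bogus) warning; an enum with exactly three
     * values makes the switch visibly exhaustive. */
    static uint64_t md_sb_offset(uint64_t size_sectors, md_minor_version_t v)
    {
        uint64_t sb_offset;

        switch (v) {
        case MD_MINOR_V0:          /* superblock near the end of the device */
            sb_offset = (size_sectors - 8 * 2) & ~(4ULL * 2 - 1);
            break;
        case MD_MINOR_V1:          /* superblock at the start */
            sb_offset = 0;
            break;
        case MD_MINOR_V2:          /* superblock 4K from the start */
            sb_offset = 8;
            break;
        }
        return sb_offset;          /* every enum value assigns it */
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long) md_sb_offset(2097152, MD_MINOR_V0));
        return 0;
    }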
* configure.in (LVM2CMD_LIB): Define if --enable-cmdlib.
* dmeventd/mirror/Makefile.in (CLDFLAGS): Use $(LVM2CMD_LIB) rather
than hard-coding -llvm2cmd.
* dmeventd/snapshot/Makefile.in (CLDFLAGS): Likewise.
* configure.in: Define READLINE_SUPPORT not when processing
--enable-readline or --disable-readline, but rather only after
determining that readline support is desired and the readline
library is available/usable.
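A minimal sketch of why the define must track the library check,
assuming the standard readline API: code compiled under
READLINE_SUPPORT links against -lreadline, so defining it from
--enable-readline alone would break builds where the library is
missing.

    #include <stdio.h>

    #ifdef READLINE_SUPPORT
    #include <readline/readline.h>
    #include <readline/history.h>
    #endif

    static char *get_line(const char *prompt)
    {
    #ifdef READLINE_SUPPORT
        char *line = readline(prompt);     /* needs -lreadline at link time */
        if (line && *line)
            add_history(line);
        return line;                       /* malloc'd; real code frees it */
    #else
        static char buf[256];
        fputs(prompt, stdout);
        return fgets(buf, sizeof(buf), stdin);  /* plain stdio fallback */
    #endif
    }

    int main(void)
    {
        char *line = get_line("lvm> ");
        if (line)
            printf("read: %s\n", line);
        return 0;
    }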
Related compiler warning:
log/log.c:242: warning: declaration of 'error_message_produced' shadows a global declaration
../include/log.h:98: warning: shadowed declaration is here
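A small illustrative example of the shadowing the warning complains
about (names mirror the warning text; renaming the local is the usual
fix):

    #include <stdio.h>

    static int error_message_produced;     /* global, as in include/log.h */

    /* Before the fix, a local also named 'error_message_produced' here
     * shadowed the global, triggering the warning. */
    static void log_it(int msg_produced)
    {
        if (msg_produced)
            error_message_produced = 1;    /* now unambiguous */
    }

    int main(void)
    {
        log_it(1);
        printf("%d\n", error_message_produced);
        return 0;
    }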
The new error checking code caught some commands that were returning '0'
as their status for success. This is incorrect (internal status values are
never 0) and resulted in a benign error message being displayed (see below).
As of today, all commands should return a value defined in
lib/commands/errors.h (1-5). This maps to an exit code of 0 on success,
or > 0 on failure (as stated in the lvm.8 man page).
Before change:
1. Make sure no PVs are on the system
2. Run 'pvs'
Command failed with status code 0.
After change:
<no output>
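A minimal sketch of the convention described above, assuming status
names along the lines of lib/commands/errors.h: internal success is 1
(never 0), and only the final mapping to the process exit code
produces 0.

    #include <stdio.h>

    #define ECMD_PROCESSED      1   /* internal success */
    #define ENO_SUCH_CMD        2
    #define EINVALID_CMD_LINE   3
    #define ECMD_FAILED         5

    /* Map the internal status to the shell exit code. A command that
     * wrongly returns 0 is treated as a failure, producing the benign
     * "Command failed with status code 0." message seen above. */
    static int lvm_return_code(int ret)
    {
        if (ret != ECMD_PROCESSED)
            fprintf(stderr, "  Command failed with status code %d.\n", ret);
        return ret == ECMD_PROCESSED ? 0 : ret;  /* 0 on success, >0 on failure */
    }

    int main(void)
    {
        return lvm_return_code(ECMD_PROCESSED);  /* exits 0 */
    }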
Specific test case:
1. pvcreate /dev/loop1; vgcreate vg1 /dev/loop1; lvcreate -L 64M -n lv1 vg1
2. vgremove vg1 (will prompt user)
3. CTRL-C
Before change, the code exits with:
Do you really want to remove volume group "vg2" containing 2 logical volumes? [y/n]:
Volume group "vg2" not removed
Command failed with status code 5.
Internal error: Volume Group vg2 was not unlocked
Device '/dev/loop1' has been left open.
After change:
Do you really want to remove volume group "vg2" containing 2 logical volumes? [y/n]:
Volume group "vg2" not removed
Command failed with status code 5.
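A hedged sketch of the cleanup discipline this test exercises, with
hypothetical lock_vg()/unlock_vg() stand-ins rather than the real
lock_vol() path: every exit from the command, including a refused or
interrupted prompt, must release the VG lock.

    #include <stdio.h>

    #define ECMD_PROCESSED  1
    #define ECMD_FAILED     5

    static int vg_locked;   /* hypothetical stand-ins for the real locking */
    static void lock_vg(const char *name)   { vg_locked = 1; (void) name; }
    static void unlock_vg(const char *name) { vg_locked = 0; (void) name; }

    static int yes_no_prompt(void) { return 0; /* pretend the user aborted */ }

    static int vgremove_single(const char *name)
    {
        int ret = ECMD_FAILED;

        lock_vg(name);
        if (!yes_no_prompt()) {
            printf("  Volume group \"%s\" not removed\n", name);
            goto out;       /* do NOT return with the lock held */
        }
        /* ... actually remove the VG ... */
        ret = ECMD_PROCESSED;
    out:
        unlock_vg(name);    /* avoids "was not unlocked" / device left open */
        return ret;
    }

    int main(void)
    {
        int ret = vgremove_single("vg1");
        return ret == ECMD_PROCESSED ? 0 : ret;   /* exits 5 here */
    }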
which has not been used since the switch away from the async version of
saLck. num_nodes should equal member_list_entries, i.e.
joined_list_entries is 0 when a node leaves the group.
Thanks to Xinwei Hu for the patch.
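A standalone sketch of the point, with the cpg callback shape
simplified (the real callback signature comes from the openais cpg
header): a leave event carries an empty joined_list, so the member
count must come from member_list_entries.

    #include <stdio.h>

    struct cpg_address { unsigned int nodeid; };

    static int num_nodes;

    static void confchg(const struct cpg_address *member_list, int member_list_entries,
                        const struct cpg_address *joined_list, int joined_list_entries)
    {
        (void) member_list; (void) joined_list; (void) joined_list_entries;

        /* wrong: num_nodes += joined_list_entries (stays stale on a leave) */
        num_nodes = member_list_entries;    /* always the full membership */
    }

    int main(void)
    {
        struct cpg_address members[2] = { { 1 }, { 2 } };

        confchg(members, 2, NULL, 0);       /* a node left: joined_list empty */
        printf("num_nodes = %d\n", num_nodes);
        return 0;
    }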
It does 2 things.
1. The cpg_deliver_callback compares target_nodeid with our_nodeid. It
turns out openais sometimes sets target_nodeid to 0, presumably for
broadcasting. The behavior is changed so that lvm also runs
process_remote when target_nodeid == 0.
2. The joined_list passed to cpg_confchg_callback doesn't include the
nodes already in the group, which leads to an incomplete node_hash. All
other nodes in member_list are now added to node_hash as well.
Thanks to Xinwei Hu for this patch.
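The two fixes, sketched with simplified types (the real callbacks use
the openais cpg API, and node_hash is a hash table rather than an
array):

    #include <stdio.h>

    struct cpg_address { unsigned int nodeid; };

    static unsigned int our_nodeid = 7;

    #define MAX_NODES 64
    static unsigned int node_hash[MAX_NODES];   /* stand-in for the node hash */
    static int node_count;

    /* Fix 1: openais sometimes delivers with target_nodeid == 0
     * (broadcast), so 0 must mean "everyone", not "someone else". */
    static void deliver(unsigned int target_nodeid, unsigned int from_nodeid)
    {
        if (target_nodeid == our_nodeid || target_nodeid == 0)
            printf("process_remote: request from node %u\n", from_nodeid);
    }

    static void add_node(unsigned int nodeid)
    {
        int i;

        for (i = 0; i < node_count; i++)
            if (node_hash[i] == nodeid)
                return;                     /* already known */
        if (node_count < MAX_NODES)
            node_hash[node_count++] = nodeid;
    }

    /* Fix 2: joined_list only names nodes joining in this change, so
     * members already in the group never appear in it; walking
     * member_list as well keeps the hash complete. */
    static void confchg(const struct cpg_address *member_list, int member_list_entries,
                        const struct cpg_address *joined_list, int joined_list_entries)
    {
        int i;

        for (i = 0; i < joined_list_entries; i++)
            add_node(joined_list[i].nodeid);
        for (i = 0; i < member_list_entries; i++)
            add_node(member_list[i].nodeid);
    }

    int main(void)
    {
        struct cpg_address members[3] = { { 3 }, { 5 }, { 7 } };
        struct cpg_address joined[1] = { { 7 } };   /* only we just joined */

        confchg(members, 3, joined, 1);
        printf("nodes known: %d\n", node_count);    /* 3, not 1 */
        deliver(0, 3);                              /* broadcast is processed */
        return 0;
    }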
This bug has been around for a long time as far as I can tell.
Without this fix, a vgsplit would unconditionally move the
'hidden/internal' snapshot LVs, and result in corrupted metadata
in the following case:
vg1: contains lv1, lv1snap, both on pvset1
vg1: contains lv2, on pvset2
"vgsplit vg1 vg2 pvset2"
would result in the hidden "snapshot0" LV being moved to vg2 while the
origin and cow were left in vg1. The tools detect the corruption in
vg2, but not in vg1.
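A hedged sketch of the rule the fix implies (the helper and fields
here are hypothetical, not the actual vgsplit code): a hidden snapshot
LV may only move if its origin and cow move with it.

    #include <stdbool.h>
    #include <stdio.h>

    struct logical_volume {
        const char *name;
        bool is_snapshot;       /* hidden/internal snapshot LV */
        bool origin_on_pvset;   /* origin lives entirely on the PVs to move */
        bool cow_on_pvset;      /* cow lives entirely on the PVs to move */
    };

    static bool movable_to_new_vg(const struct logical_volume *lv)
    {
        if (!lv->is_snapshot)
            return true;        /* decided elsewhere by PV placement */
        /* Before the fix this was unconditionally true, splitting
         * "snapshot0" away from its origin/cow and corrupting metadata. */
        return lv->origin_on_pvset && lv->cow_on_pvset;
    }

    int main(void)
    {
        struct logical_volume snap0 = { "snapshot0", true, false, false };

        printf("%s movable: %s\n", snap0.name,
               movable_to_new_vg(&snap0) ? "yes" : "no");   /* no */
        return 0;
    }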
When vg_lock_and_read() calls were added, they were done incorrectly for
the destination VG (vg_to). This resulted in the VG lock not being obtained
when a new VG was the destination (vg_lock_and_read() would fail in the
vg_read() clause, which would then release the lock before returning NULL),
and could result in a corrupted destination VG.
The fix was to put back the original lock_vol() and vg_read() calls for 'vg_to'.
The failure of vg_read() indicates "vg does not exist", and we key off that
to determine whether we are dealing with a new or existing VG as the
destination.
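A sketch of that flow, with hypothetical stand-ins for lock_vol() and
vg_read(): the destination name is locked first and stays locked; a
failed read is interpreted as "VG does not exist" rather than as an
error.

    #include <stdio.h>

    struct volume_group { const char *name; };

    static int lock_vol(const char *vg_name)      /* stand-in: take VG lock */
    {
        printf("locked %s\n", vg_name);
        return 1;
    }

    static struct volume_group *vg_read(const char *vg_name)
    {
        (void) vg_name;
        return NULL;    /* pretend the destination VG does not exist yet */
    }

    int main(void)
    {
        const char *vg_to_name = "new";
        struct volume_group *vg_to;
        int new_vg = 0;

        if (!lock_vol(vg_to_name))  /* hold this lock for the whole split */
            return 5;

        if (!(vg_to = vg_read(vg_to_name)))
            new_vg = 1;             /* key off the read failure: create it */
        (void) vg_to;

        printf("destination is %s VG\n", new_vg ? "a new" : "an existing");
        /* ... perform the split, write metadata, then unlock ... */
        return 0;
    }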
The first two error messages were also the result of the incorrect
vg_lock_and_read() calls:
Volume group "new" not found
cluster request failed: Invalid argument
New volume group "new" successfully split from "vg"
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=438249
BEFORE:
tools/lvm lvresize -l +4 vg22/lv1linear
Volume group "vg22" not found
Volume group vg22 doesn't exist
AFTER:
tools/lvm lvresize -l +4 vg22/lv1linear
Volume group "vg22" not found