The MD raid6 personality used to drive lvm raid6 LVs performs
read-modify-write updates to stripes and thus relies on correct
P and Q syndromes being written during initial synchronization,
or it may fail to reconstruct proper user data in case SubLVs fail.
For that reason, the '--nosync' option must not be allowed on
creation of raid6 LVs.
Update/fix 'man lvcreate' in that regard.
Add the lvcreate-raid-nosync.sh test script.
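For illustration (the VG name 'vg' and the sizes are made up), raid6
creation now always performs the initial synchronization, while e.g.
raid1 can still skip it:

  # raid6: --nosync is rejected; initial sync guarantees valid P and Q
  lvcreate --type raid6 -i 3 -L 1G -n lv6 vg

  # raid1 has no parity to precompute, so --nosync remains possible
  lvcreate --type raid1 -m 1 --nosync -L 1G -n lv1 vg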
- Resolves rhbz1358532
We don't need to refresh the whole cmd context if we drop a profile after
processing an LVM command - just as we don't refresh the cmd context when
we're applying the profile. This is because profiles contain only a safe
subset of settings which does not require a complete cmd context refresh.
This patch calls process_profilable_config instead of
refresh_toolcontext if a profile was applied for the LVM
command only - not --config, which does require a toolcontext refresh.
process_profilable_config just sets proper values based on
the values of the profilable settings, but it does not do a complete
reinitialization of various parts (e.g. filters, logging etc.).
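A sketch of the user-visible distinction (the profile name 'perf' is an
assumption; profiles live under /etc/lvm/profile by default):

  # applies only profilable settings for this one command,
  # so no full toolcontext refresh is needed afterwards
  lvs --commandprofile perf vg

  # --config may override any setting and therefore still
  # triggers a full toolcontext refresh
  lvs --config 'global { units = "h" }' vg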
'lvchange --resync LV' or 'lvchange --syncaction repair LV' request the
RAID layout specific parity blocks in raid4/5/6 to be recreated or the
mirrored blocks to be copied again from the master leg/copy for raid1/10,
thus not allowing a rebuild of a particular PV.
Introduce the repeatable option '--[raid]rebuild PV' to allow requesting
rebuilds of specific PVs in a RaidLV which are known to contain corrupt
data (e.g. rebuilding a raid1 master leg).
Add test lvchange-rebuild-raid.sh to test/shell, exercising rebuild
variations on raid1/10 and raid5; add aux function check_status_chars
to support the new test.
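Usage sketch (PV and LV names are examples only):

  # rebuild the data on one specific, known-corrupt PV
  lvchange --rebuild /dev/sdb1 vg/raid1_lv

  # the option is repeatable, so several PVs can be rebuilt in one call
  lvchange --rebuild /dev/sdc1 --rebuild /dev/sdd1 vg/raid10_lv

  # compare: --resync / --syncaction act on the whole RaidLV
  lvchange --syncaction repair vg/raid6_lv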
- Resolves rhbz1064592
Prepare for new segment type conversion functionality in cases that
currently fail. In the short-term, we need to do this while limiting
the changes to the code paths for the conversions that are already
supported.
lv_is_on_pvs() was called on any thin snap external origin LV, which
caused a segfault when none existed, as exposed by the vgsplit-thin.sh
test. Only call lv_is_on_pvs() if an external origin LV actually
exists, and correct the related splitting logic.
General RAID and RAID segment type specific checks are added
to merge.c. The new static _check_raid_seg() is called on each segment
of a RaidLV (of which there is just one) from check_lv_segments().
The new checks caught some uninitialized segment members,
which are addressed here as well:
- initialize seg->region_size to 0 in lvcreate.c for raid0/raid0_meta
- initialize list seg->origin_list in lv_manip.c
Add matching support for the -Z option also when doing a full conversion
to cache-pool.
Extend the conversion message to show which pool type is created
and whether the metadata will be wiped or remain unmodified.
Follow-up to 27a767d5e8cedf9cac31eb3562cf8fdd4aa88b7c.
Tune the behavior so that we always prompt when the --zero option is NOT
specified. Without -Z, lvm expects the user wants to 'reset' the
cache-pool metadata (it could have been split from some cached LV).
If the user doesn't want to zero the metadata, he needs to specify -Zn.
The user may also avoid the prompt for zeroing by using -Zy for the
cache-pool (basically equal to using --yes without -Z being given).
Unlike the full-convert case, there is no cache-pool being converted
here, so there is no 'unconditional' prompt in this case.
When a volume was lvconvert-ed to a thin volume with an external origin
and the thin-pool was in non-zeroing mode,
a WARNING about not zeroing the thin volume was printed - but here
this is wanted and expected, so there is nothing to warn about.
In this particular use case the WARNING needs to be suppressed.
Add parameter support for lvcreate_params.
So now lvconvert creates the 'normal thin LV' in read-only mode
(so any read will 'return 0' for the moment),
then deactivates the regular thin LV and recreates the thin LV
with the external origin in its final R/RW mode and activates it again.
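The user-facing command is unchanged; a sketch of the conversion
(LV and pool names are examples):

  # convert vg/lv into a thin LV backed by vg/pool, keeping the old
  # contents as a read-only external origin named lv_extorig
  lvconvert --type thin --thinpool vg/pool --originname lv_extorig vg/lv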
When a cache pool is reused for a new cached volume, there is
normally no need to 'keep' the old cache-pool metadata, as this
could cause major data loss.
Unlike with the 'lvcreate -H -LX --cachepool' conversion, this lvconvert
path left the metadata unzeroed - partly to make some debugging
easier, but this was rather a bug.
So to keep the possibility of reattaching 'unzeroed' metadata, the user
now has to use 'lvconvert -Zn' for such a conversion. In this case
a prompt about possible data loss will appear, and the user has to
confirm the operation to proceed. Without -Zn the metadata are wiped.
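For example (LV names assumed):

  # default: the old cache-pool metadata is wiped before reuse
  lvconvert --type cache --cachepool vg/cpool vg/origin_lv

  # keep (reattach) the existing metadata; prompts about possible
  # data loss unless --yes is also given
  lvconvert --type cache -Zn --cachepool vg/cpool vg/origin_lv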
Commit 3928c96a37941d765bf467d82502cd2aec7fd809 introduced
new defaults for the number of raid stripes, which may cause
backwards compatibility issues with customer scripts.
Add the configurable option 'raid_stripe_all_devices', defaulting
to '0' (i.e. off = new behaviour), to select the old behaviour
of using all PVs in the VG or those provided on the command line.
In case any scripts rely on the old behaviour, just set
'raid_stripe_all_devices = 1'.
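The old behaviour can be selected either per command or persistently
(a sketch; the VG and LV names are made up):

  # one-off override on the command line
  lvcreate --config 'allocation { raid_stripe_all_devices = 1 }' \
           --type raid5 -L 1G -n lv vg

  # or persistently, in the allocation section of lvm.conf:
  #   allocation {
  #       raid_stripe_all_devices = 1
  #   }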
- resolves rhbz1354650
The --uuid, --major and --alldevices arguments were incorrectly tested
after confirming argc > 0, in a branch that only executes when argc == 0
(i.e. the checks were unreachable).
Move all device checks before the test for argc and log an appropriate
error before returning.
raid0/raid0_meta type LVs don't have a default number of stripes when
created without '-i/--stripes Stripes', whereas other raid types have one.
This patch sets the default for raid0/raid0_meta to 2 stripes.
The default number of stripes for raid4/5/10 is changed to 2, and for
raid6 to 3, rather than using all PVs in the VG or those provided on
the command line. This avoids an unintendedly high number of stripes
when many PVs are available. To select a different number of stripes
from the default, use 'lvcreate -i/--stripes Stripes'.
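Illustration (a VG 'vg' with many PVs is assumed):

  # raid0 now gets a default of 2 stripes
  lvcreate --type raid0 -L 1G -n lv0 vg

  # raid6 now defaults to 3 stripes (plus the 2 parity devices)
  lvcreate --type raid6 -L 1G -n lv6 vg

  # an explicit stripe count still takes precedence
  lvcreate --type raid5 -i 7 -L 1G -n lv5 vg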
- resolves rhbz1354650