The code seems to work fine for the most trivial case - moving a
simple cache LV. However, it can cause problems when trying to
split out other LVs on different VGs, and there hasn't been sufficient
testing of LV stacks that contain cache to enable the code. So, for now,
we disable the code paths that are already broken and wait for the next
release to fix them.
Start to convert percentage size handling in lvresize to the new
standard. Note in the man pages that this code is incomplete.
Fix a regression in non-percentage allocation from my last check-in.
This is what I am aiming for:
-l<extents>
-l<percent> LV/ORIGIN
sets or changes the LV size based on the specified quantity
of logical extents (that might be backed by
a higher number of physical extents)
-l<percent> PVS/VG/FREE
sets or changes the LV size so as to allocate or free the
desired quantity of physical extents (that might amount to a
lower number of logical extents for the LV concerned)
-l+50%FREE - Use up half the remaining free space in the VG when
carrying out this operation.
-l50%VG - After this operation, this LV should be using up half the
space in the VG.
-l200%LV - Double the logical size of this LV.
-l+100%LV - Double the logical size of this LV.
-l-50%LV - Reduce the logical size of this LV by half.
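For illustration, the intended usage would look something like this
(a sketch only - the code is still incomplete, as noted above):
# Grow lv into half of the VG's remaining free space
~> lvextend -l +50%FREE vg/lv
# Make lv occupy half of the VG's total space
~> lvresize -l 50%VG vg/lv
# Double lv's logical size
~> lvextend -l +100%LV vg/lv
# Halve lv's logical size
~> lvreduce -l -50%LV vg/lv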
Reorder detection of cmirrord. Now, if cmirrord is not
running, the target will not try to load the kernel log module
used for communication with cmirrord.
The whole check for attrs now also happens just once.
Test raid10 availability as a target feature (instead of doing
it in all the places where raid10 should be checked).
TODO: activation needs runtime validation - so metadata with raid10
is skipped from activation in a user-friendly way in lvm2.
Parsing the vg structure during suspend/commit/resume may require a lot of
memory - so move this into vg_write.
FIXME: there are now multiple cache layers which are doing some things
multiple times at different levels. Moreover, there are now different
caching paths with and without lvmetad - this should be unified
and both paths should use the same mechanism.
Reuse _node_send_messages for just checking
for a valid transaction_id with preload.
This allows earlier detection of an inconsistent thin pool.
The code does the same thing, except for sending messages.
Improve testing of transaction_id to allow no difference other
than the kernel TID being equal, or being lower by one with
queued messages for the transaction.
Mark messages as submitted if the transaction_id is already matching.
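A minimal sketch of the acceptance rule described above (illustrative only;
the names below are hypothetical and not the actual lvm2 code):
#include <stdint.h>
/* Accept only two states: kernel TID equal to the metadata TID, or
 * lower by one while messages for this transaction are still queued. */
static int _transaction_id_ok(uint64_t kernel_tid, uint64_t metadata_tid,
                              int have_queued_messages)
{
        if (kernel_tid == metadata_tid)
                return 1;       /* already in sync */
        if ((kernel_tid + 1 == metadata_tid) && have_queued_messages)
                return 1;       /* messages pending for this transaction */
        return 0;               /* anything else: inconsistent thin pool */
}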
Do not try to deactivate the node on failure here; leave that to the
proper error path of the caller.
Deactivation of the top-level node has to happen
before traversing the subtree.
Swap the list logic: append new nodes to the head
and then use normal iteration.
(in-release update)
Skip over LVs that have a cache LV in their tree of LV dependencies
when performing a pvmove.
This means that users cannot move a cache pool or a cache LV's origin -
even when that cache LV is used as part of another LV (e.g. a thin pool).
The new test (pvmove-cache-segtypes.sh) currently builds up various LV
stacks that incorporate cache LVs. pvmove tests are then performed to
ensure that cache-related LVs are /not/ moved. Once pvmove is enabled
for cache, those tests will switch to ensuring that the LVs /are/
moved.
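A rough sketch of the kind of check such a test performs (illustrative
only, not the actual pvmove-cache-segtypes.sh contents):
# Given an existing cache LV whose data currently sits on "$dev1":
pvmove "$dev1" "$dev2"        # other LVs move; cache-related LVs are skipped
lvs -a -o name,devices $vg    # the cache pool and origin should still show "$dev1"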
Several fixes for the recent changes that treat allocation percentages
as upper limits.
Improve messages to make it easier to see what is happening.
Fix some cases that failed with errors when they didn't need to.
Fix crashes when first_seg() returns NULL.
Remove a couple of log_errors that were actually debugging messages.
The test currently fails with 'make check_cluster' - so it uses 'should'
CLVMD[4f435880]: Feb 19 23:27:36 Send local reply
format_text/archiver.c:230 WARNING: This metadata update is NOT backed up
metadata/mirror.c:1105 Failed to initialize log device
metadata/mirror.c:1145 <backtrace>
lvconvert.c:1547 <backtrace>
lvconvert.c:3084 <backtrace>
Remove the 'skip' argument passed into the function.
We always used '0', as -K is the only supported option
and there is no complementary option.
Also add some testing for the behaviour of skipping.
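For illustration, the skip behaviour being tested looks roughly like this
(a sketch only; thin snapshots are one case where the activation skip flag
is set by default):
~> lvcreate -s vg/thinvol -n snap     # thin snapshot, activation skip flag set
~> lvchange -ay vg/snap               # skipped: not activated without -K
~> lvchange -ay -K vg/snap            # -K ignores the skip flag and activates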
We already have /dev/disk/by-id/dm-uuid-... (which encompasses the
VG UUID and LV UUID in case of LVs since the mapping's UUID is
VG+LV UUID together) and /dev/disk/by-id/dm-name-... (which encompasses
the VG and LV name in case of LVs).
This patch adds /dev/disk/by-id/lvm-pv-uuid-<PV_UUID>, which completes
this scheme and makes navigation a bit easier using PV UUIDs, since
one can navigate using PV UUIDs only and there's no need to do extra
PV UUID <--> kernel name matching (the PV UUID is stable across reboots).
This may come in handy in various scripts.
Since we already have the PV UUID stored in the udev database (as a result
of the blkid call - returned in blkid's ID_FS_UUID variable), this operation
is very cheap indeed - just creating one extra symlink.
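The rule involved is roughly of this shape (an illustrative sketch only;
the exact rules file and udev variable used in the tree may differ):
# sketch of an lvm/dm udev rule creating the PV UUID symlink
ENV{ID_FS_TYPE}=="LVM2_member", ENV{ID_FS_UUID}=="?*", \
    SYMLINK+="disk/by-id/lvm-pv-uuid-$env{ID_FS_UUID}"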
There are typically 2 functions for the more advanced segment types that
deal with parameters in lvcreate.c: _get_*_params() and _check_*_params().
(Not all segment types name their functions according to this scheme.)
The former function is responsible for reading parameters before the VG
has been read. The latter is for sanity checking and possibly setting
parameters after the VG has been read.
This patch adds a _check_raid_parameters() function that will determine
if the user has specified 'stripe' or 'mirror' parameters. If not, the
proper number is computed from the list of PVs the user has supplied, or
from the number available in the VG. Now that _check_raid_parameters()
is available, we move the check for proper number of stripes from
_get_* to _check_*.
This gives the user the ability to create RAID LVs as follows:
# 5-device RAID5, 4-data, 1-parity (i.e. implicit '-i 4')
~> lvcreate --type raid5 -L 100G -n lv vg /dev/sd[abcde]1
# 5-device RAID6, 3-data, 2-parity (i.e. implicit '-i 3')
~> lvcreate --type raid6 -L 100G -n lv vg /dev/sd[abcde]1
# If 5 PVs in VG, 4-data, 1-parity RAID5
~> lvcreate --type raid5 -L 100G -n lv vg
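Conceptually, the implicit stripe count comes out as in this sketch
(illustrative only; a hypothetical helper, not the actual
_check_raid_parameters() code):
/* With no '-i' given, derive the stripe count from the PVs supplied
 * (or available in the VG) minus the parity devices required by the
 * segment type (raid5: 1, raid6: 2). */
static unsigned _implicit_stripes(unsigned pv_count, unsigned parity_devs)
{
        return (pv_count > parity_devs) ? pv_count - parity_devs : 1;
}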
Considerations:
This patch only affects RAID. It might also be useful to apply this to
the 'stripe' segment type. LVM RAID may include RAID0 at some point in
the future and the implicit stripes would apply there. It would be odd
to have RAID0 be able to auto-determine the stripe count while 'stripe'
could not.
The only drawback of this patch that I can see is that there might be
less error checking. Rather than informing the user that they forgot
to supply an argument (e.g. '-i'), the value would be computed and it
may differ from what the user actually wanted. I don't see this as a
problem, because the user can check the device count after creation
and remove the LV if they have made an error.