Commit e80884cd08 tried to dump filters so they would be re-evaluated
when creating a PV, to avoid overwriting any existing signature that
may have been created after the last scan/filtering.
However, we need to call refresh_filters instead of
persistent_filter->dump, since dump requires a proper rescan to fill up
the persistent filter again. That rescan happens for pvcreate, but not
for vgcreate with PV creation, where the scanning happens before the PV
creation and hence the next rescan (if not a full scan) does not fill
the persistent filter.
Also, move refresh_filters so that it's called sooner, and only for
pvcreate; vgcreate already calls lvmcache_label_scan(cmd, 2), which
then calls refresh_filters itself, so there is no need to re-evaluate
the filters again.
This caused the persistent filter (the /etc/lvm/cache/.cache file) to
be wrong and to contain only the PV just being processed with
vgcreate <vg_name> <pv_name_to_create>.
This regression caused other block devices to be filtered out whenever
vgcreate with PV creation was used and the persistent filter was then
used by any other LVM command afterwards.
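A minimal sketch of the resulting ordering, assuming a pvcreate-style
code path (the is_pvcreate flag and the surrounding structure are
illustrative, not the actual lvm2 source):

/* Illustrative only: re-evaluate filters early in the pvcreate path so
 * a signature written after the last scan is not missed; vgcreate gets
 * this via lvmcache_label_scan(cmd, 2), which calls refresh_filters
 * itself. */
if (is_pvcreate && !refresh_filters(cmd))
        return 0;       /* propagate the failure */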
Fix get_pool_params to only read params.
Add poolmetadataspare option to get_pool_params.
Move all profile code into update_pool_params.
Move recalculate code into pool_manip.c
Revert logic and rename new arg_ functions to:
arg_from_list_is_set()
arg_outside_list_is_set()
When err_found is given, a log_error message is automatically
printed.
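A rough sketch of the intended semantics (only the function names come
from this change; the signature, the helper calls and the body are
assumptions):

#include <stdarg.h>

/* Illustrative: return 1 if any option from a -1-terminated list of
 * arg ids is set on the command line; when err_found is given, a
 * log_error message is printed automatically for the match. */
static int arg_from_list_is_set(struct cmd_context *cmd,
                                const char *err_found, ...)
{
        int arg;
        va_list ap;

        va_start(ap, err_found);
        while ((arg = va_arg(ap, int)) != -1)
                if (arg_count(cmd, arg)) {
                        va_end(ap);
                        if (err_found)
                                log_error("Option %s %s.",
                                          arg_long_option_name(arg),
                                          err_found);
                        return 1;
                }
        va_end(ap);
        return 0;
}

arg_outside_list_is_set() is the inverse check: it reports an option
that is set but does not appear in the given list.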
Support --repair and --use-policies with mirrors
(fixes another regression from the lvconvert changes for thin and cache).
TODO: the code path for mirrors needs updating.
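For example (an illustrative invocation matching the description above;
the VG/LV names are made up):

lvconvert --repair --use-policies vg/mirror_lv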
Major update of the lvconvert code to handle cache- and thin-related
targets.
The code tries to unify the handling of cache and thin pools.
It better supports the lvm2 syntax:
lvconvert --type cache --cachepool vg/pool vg/cache
lvconvert --type thin --thinpool vg/pool vg/extorg
lvconvert --type cache-pool vg/pool
lvconvert --type thin-pool vg/pool
as well as:
lvconvert --cache --cachepool vg/pool vg/cache
lvconvert --thin --thinpool vg/pool vg/extorg
lvconvert --cachepool vg/pool
lvconvert --thinpool vg/pool
while catching many more command-line errors
(yet a couple of paths still need more tests).
Detects as many command-line errors as possible before opening the VG.
Uses a single lvconvert_name_params to convert LV names.
Detects as many incompatibilities in the VG as possible before prompting.
Uses a single prompt to confirm the whole conversion.
TODO: the code still needs fixes...
Since the type of the passed LV is changed and the content of its data
destroyed, query the user with a prompt to confirm this operation.
Also add proper wiping of the header.
Using '--yes' will skip this prompt:
lvconvert -s --yes vg/lv vg/lvcow
If the lv_info call fails for whatever reason (the INFO dm ioctl fails
or communication with the dm driver is disabled via --driverloaded n),
make sure we always display "unknown" for LVSINFO fields, as that's
exactly what happens: we don't know the state.
Before the patch:
$ lvs -o name,device_open --driverloaded n
WARNING: Activation disabled. No device-mapper interaction will be attempted.
Command failed with status code 5.
With this patch applied:
$ lvs -o name,device_open --driverloaded n
WARNING: Activation disabled. No device-mapper interaction will be attempted.
LV DevOpen
lvol1 unknown
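A condensed sketch of the fallback (the helper name and the field
access are illustrative; only the "unknown" behaviour comes from this
change):

/* Illustrative: pick the string a device-open LVSINFO field should
 * display, given an lvinfo that may be missing. */
static const char *_devopen_str(const struct lvinfo *info)
{
        if (!info)      /* lv_info failed or was not attempted */
                return "unknown";
        return info->open_count ? "open" : "";
}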
Currently, we have two modes of activation, an unnamed nominal mode
(which I will refer to as "complete") and "partial" mode. The
"complete" mode requires that a volume group be 'complete' - that
is, no missing PVs. If there are any missing PVs, no affected LVs
are allowed to activate - even RAID LVs which might be able to
tolerate a failure. The "partial" mode allows anything to be
activated (or at least attempted). If a non-redundant LV is
missing a portion of its addressable space due to a device failure,
it will be replaced with an error target. RAID LVs will either
activate or fail to activate depending on how badly their
redundancy is compromised.
This patch adds a third option, "degraded" mode. This mode can
be selected via the '--activationmode {complete|degraded|partial}'
option to lvchange/vgchange. It can also be set in lvm.conf.
The "degraded" activation mode allows RAID LVs with a sufficient
level of redundancy to activate (e.g. a RAID5 LV with one device
failure, a RAID6 with two device failures, or RAID1 with n-1
failures). RAID LVs with too many device failures are not allowed
to activate - nor are any non-redundant LVs that may have been
affected. This patch also makes the "degraded" mode the default
activation mode.
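For example (illustrative usage of the option described above; the
VG/LV names are made up):

vgchange -ay --activationmode degraded vg00
lvchange -ay --activationmode partial vg00/lv1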
The degraded activation mode does not yet work in a cluster. A
new cluster lock flag (LCK_DEGRADED_MODE) will need to be created
to make that work. Currently, there is limited space for this
extra flag and I am looking for possible solutions. One possible
solution is to usurp LCK_CONVERT, as it is not used. When the
locking_type is 3, the degraded mode flag simply gets dropped and
the old ("complete") behavior is exhibited.
There was a missing lv_info call for situations where mixed PV/LV
segment fields appeared together with LVSINFO fields, which require an
extra lv_info call for the LV device status. This ended up with a NULL
lvinfo passed to the field reporting functions, hence the segfault.
The --binary option, if used, causes all binary values in reporting
commands to be displayed as "0" or "1" instead of descriptive literal
values (the value "unknown" is still used for values that could not be
determined).
Also, add a report/binary_values_as_numeric lvm.conf option with the
same functionality as the --binary option (the --binary option prevails
if both are used at the same time). The report/binary_values_as_numeric
setting is also profilable.
This makes it easier to use and check lvm reporting command output in
scripts.
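For example (illustrative; the lvm.conf section layout is assumed from
the report/binary_values_as_numeric key named above):

$ lvs -o name,device_open --binary

# lvm.conf - same effect as --binary (also settable in a profile):
report {
        binary_values_as_numeric = 1
}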
LVSINFO is exactly the same as the existing LVS report type, but it
additionally has the "struct lvinfo" populated - this is useful for
fields that display the status of the LV device itself (e.g. suspended
state, tables present/missing...).
Currently, such properties are reported within the "lv_attr" field, so
the separation is not strictly necessary - the "lvinfo" call to
populate the "struct lvinfo" is directly part of the field reporting
functions _lvstatus_disp/lv_attr_dup.
With upcoming patches, we'd like the lv_attr field bits to be separated
into their own fields. To avoid calling the "lvinfo" fn as many times
as there are fields requiring the "lv_info" structure while reporting
one row for one LV, we're splitting the former LVS into LVS and LVSINFO
report types. With this, there's just one "lvinfo" call per report row,
and LV reporting fields take the info they need from this struct,
reusing it rather than calling the "lvinfo" fn on their own.
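Schematically (a sketch of the intent, not the actual lvm2 reporting
code; the lv_info signature shown is an assumption):

/* Illustrative: with LVSINFO, lv_info runs once per report row... */
struct lvinfo info;
int info_ok = lv_info(cmd, lv, 0, &info, 1, 0);

/* ...and every LVSINFO field for this row reuses &info (or sees NULL
 * when info_ok is 0) instead of calling lv_info itself. */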
Mention the parent LV as well as the LV triggering the warning.
This still leaves some confusing cases, but it's not worth fixing them
at the moment.
(Thin pool inactive but a thin volume active => deactivate the thin vol.
Inactive mirror/raid with pvmove in progress => complete the pvmove and
activate & deactivate the mirror/raid.
If the new VG already exists, it requires some LVs to be inactive
unnecessarily.)
mirror_or_raid_type_requested really checks for the mirror type.
Convert the macros mirror_or_raid_type_requested() and
snapshot_type_requested() into inline functions.
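The shape of that conversion, sketched (the body shown is an
assumption; only the function names come from the commit):

/* Before: a macro, no type checking of its arguments. */
/* #define snapshot_type_requested(cmd, str) (...) */

/* After: an inline function with the same call sites, but the
 * compiler now checks the argument types. */
static inline int snapshot_type_requested(struct cmd_context *cmd,
                                          const char *type_str)
{
        return arg_count(cmd, snapshot_ARG) ||
               !strcmp(type_str, "snapshot");
}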