Because of the recent change to process_each_pv(), the vg is always
provided when the pv is in a vg. is_pv(pv) means the pv is in a vg,
which means that the vg arg will not be NULL, which means the removed
block of code is not needed.
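A minimal sketch of the now-dead pattern (illustrative only, not the exact removed code):

    /* Illustrative only - not the exact removed code. */
    if (is_pv(pv) && !vg) {
            /* Unreachable: is_pv(pv) means the pv is in a vg, and
             * process_each_pv() now always passes that vg, so a
             * fallback block like this could simply be dropped. */
    }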
When we are given an existing LV name, it needs to be allowed
through even if it is a restricted name, as the LV could have existed
long before we introduced some new restriction on prefix/suffix.
Fix the regression on name limits and drop the restriction applied
to any existing LVs - only newly created LV names have to be
compliant with the current name restrictions.
FIXME: we are currently using restricted names incorrectly in a few
other places - device_is_usable() skips restricted names,
and udev flags are also incorrectly set for restricted names,
so these LVs are not getting their links properly.
The warnings arg was used to enable logging of warnings
when reading a PV. This arg is turned into a set of flags
with the WARN_PV_READ flag matching the existing behavior.
A new flag WARN_INCONSISTENT is added that will cause
vg_read_internal() to log the "VG is not consistent"
warning so the various callers do not need to log
this warning themselves.
A new vg_read flag READ_WARN_INCONSISTENT is used from
reporting to enable the WARN_INCONSISTENT flag in
vg_read_internal.
[Committed by agk with cosmetic changes and tweaks.]
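A rough sketch of the flag scheme described above (the flag names come from the text; the values and the helper are assumptions):

    #include <stdint.h>

    /* Values are illustrative - only the flag names come from the text. */
    #define WARN_PV_READ           0x00000001u   /* old warnings arg behaviour */
    #define WARN_INCONSISTENT      0x00000002u   /* log "VG is not consistent" */
    #define READ_WARN_INCONSISTENT 0x00010000u   /* vg_read flag used by reporting */

    static uint32_t warn_flags_from_read_flags(uint32_t read_flags)
    {
            uint32_t warn_flags = WARN_PV_READ;

            if (read_flags & READ_WARN_INCONSISTENT)
                    warn_flags |= WARN_INCONSISTENT;

            return warn_flags;
    }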
Process PVs by iterating through VGs, then iterating through
devices if the command needs to process non-PV devices.
The process_single function can always use the VG and PV args.
[Committed by agk with cosmetic changes and tweaks.]
Copy the same form as the new process_each_vg.
Replace unused struct cmd_vg and cmd_vg_read() replicator
code with struct vg and vg_read() directly.
The failed_lvnames arg is no longer used since the
cmd_vg replicator wrapper was removed.
[Committed by agk with cosmetic changes and tweaks.]
Split VG argument collection from processing.
This allows the two different loops through VGs to
be replaced by a single loop.
Replace unused struct cmd_vg and cmd_vg_read() replicator
code with struct vg and vg_read() directly.
[Committed by agk with cosmetic changes and tweaks.]
There are actually three filter chains if lvmetad is used:
- cmd->lvmetad_filter used when scanning devices for lvmetad
- cmd->filter used when processing lvmetad responses
- cmd->full_filter (which is just cmd->lvmetad_filter + cmd->filter chained
  together) used for remaining situations
This patch adds the third one - "cmd->full_filter" - currently this is
used if device processing does not fall into any of the groups above,
for example, devices which do not have a PV label yet and we're just
creating a new one, or we're processing devices where the list of
devices (PVs) is not returned by lvmetad initially.
Currently, the cmd->full_filter is used exactly in these functions:
- lvmcache_label_scan
- _pvcreate_check
- pvcreate_vol
- lvmdiskscan
- pvscan
- _process_each_label
If lvmetad is not used, then simply cmd->full_filter == cmd->filter, because
cmd->lvmetad_filter is NULL in this case.
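A minimal sketch of what the chaining amounts to (the struct shapes and helper below are simplified stand-ins; only the lvmetad_filter/filter/full_filter relationship comes from the text):

    /* Simplified stand-ins - the real code chains dev_filter structures. */
    struct device;
    struct dev_filter {
            int (*passes_filter)(struct dev_filter *f, struct device *dev);
    };
    struct filters {
            struct dev_filter *lvmetad_filter;   /* NULL when lvmetad is not used */
            struct dev_filter *filter;
    };

    /* full_filter == lvmetad_filter chained with filter */
    static int full_filter_passes(struct filters *cmd, struct device *dev)
    {
            if (cmd->lvmetad_filter &&
                !cmd->lvmetad_filter->passes_filter(cmd->lvmetad_filter, dev))
                    return 0;
            return cmd->filter->passes_filter(cmd->filter, dev);
    }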
Split the internals of extract_vgname into _extract_vgname.
This common code will be used by another similar function.
Reuse skip_dev_dir() instead of less mature code to skip
the device dir.
Instead of duplicating the full vg/lv name, allocate a string
for only the vg portion of the lv name.
Use lv_is_* macros throughout the code base, introducing
lv_is_pvmove, lv_is_locked, lv_is_converting and lv_is_merging.
lv_is_mirror_type no longer includes pvmove.
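The new macros are thin wrappers over LV status bits, roughly of this shape (exact definitions in the tree may differ):

    /* Assumed shape - a status-bit test on the logical volume. */
    #define lv_is_pvmove(lv)      (((lv)->status & PVMOVE) ? 1 : 0)
    #define lv_is_locked(lv)      (((lv)->status & LOCKED) ? 1 : 0)
    #define lv_is_converting(lv)  (((lv)->status & CONVERTING) ? 1 : 0)
    #define lv_is_merging(lv)     (((lv)->status & MERGING) ? 1 : 0)
    /* Call sites then read "if (lv_is_pvmove(lv))" instead of
     * open-coded "(lv->status & PVMOVE)" tests. */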
When ignoring a 'listed' volume, print an info message,
so the final command error message is a bit less confusing,
e.g. when a user tries to deactivate a virtual origin:
> lvchange -an vg/lvol2_vorigin
Ignoring virtual origin logical volume vg/lvol2_vorigin.
One or more specified logical volume(s) not found.
Fix get_pool_params to only read params.
Add poolmetadataspare option to get_pool_params.
Move all profile code into update_pool_params.
Move recalculate code into pool_manip.c
Major update of lvconvert code to handle cache- and thin-related targets.
Code tries to unify handling of cache and thin pools.
Better supports lvm2 syntax:
lvconvert --type cache --cachepool vg/pool vg/cache
lvconvert --type thin --thinpool vg/pool vg/extorg
lvconvert --type cache-pool vg/pool
lvconvert --type thin-pool vg/pool
as well as:
lvconvert --cache --cachepool vg/pool vg/cache
lvconvert --thin --thinpool vg/pool vg/extorg
lvconvert --cachepool vg/pool
lvconvert --thinpool vg/pool
While catching many more command line errors.
(Yet a couple of paths still need more tests.)
Detects as many cmdline errors as possible prior to opening the VG.
Uses a single lvconvert_name_params to convert LV names.
Detects as many incompatibilities in the VG as possible prior to prompting.
Uses a single prompt to confirm the whole conversion.
TODO: the code still needs fixes...
Currently, we have two modes of activation, an unnamed nominal mode
(which I will refer to as "complete") and "partial" mode. The
"complete" mode requires that a volume group be 'complete' - that
is, no missing PVs. If there are any missing PVs, no affected LVs
are allowed to activate - even RAID LVs which might be able to
tolerate a failure. The "partial" mode allows anything to be
activated (or at least attempted). If a non-redundant LV is
missing a portion of its addressable space due to a device failure,
it will be replaced with an error target. RAID LVs will either
activate or fail to activate depending on how badly their
redundancy is compromised.
This patch adds a third option, "degraded" mode. This mode can
be selected via the '--activationmode {complete|degraded|partial}'
option to lvchange/vgchange. It can also be set in lvm.conf.
The "degraded" activation mode allows RAID LVs with a sufficient
level of redundancy to activate (e.g. a RAID5 LV with one device
failure, a RAID6 with two device failures, or RAID1 with n-1
failures). RAID LVs with too many device failures are not allowed
to activate - nor are any non-redundant LVs that may have been
affected. This patch also makes the "degraded" mode the default
activation mode.
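For example, explicitly requesting the new mode (VG/LV name illustrative):
  lvchange -ay --activationmode degraded vg/raid5lv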
The degraded activation mode does not yet work in a cluster. A
new cluster lock flag (LCK_DEGRADED_MODE) will need to be created
to make that work. Currently, there is limited space for this
extra flag and I am looking for possible solutions. One possible
solution is to usurp LCK_CONVERT, as it is not used. When the
locking_type is 3, the degraded mode flag simply gets dropped and
the old ("complete") behavior is exhibited.
The list of strings is used quite frequently and we'd like to reuse
this simple structure for report selection support too. Make it part
of libdevmapper for general reuse throughout the code.
This also simplifies the LVM code a bit since we don't need to
include and manage lvm-types.h anymore (the string list was the
only structure defined there).
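The structure itself is tiny - roughly the following node type, now exported by libdevmapper (take the exact definition from the libdevmapper headers):

    /* Assumed shape of the exported string list node. */
    struct dm_str_list {
            struct dm_list list;    /* list linkage */
            const char *str;        /* the string itself */
    };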
There were two bugs before when using pvcreate --restorefile together
with data alignment and its offset specified:
- the --dataalignment was always ignored due to missing braces in the
code when validating the divisibility of the supplied --dataalignment
argument with the pe_start which we're just restoring:
    if (pp->rp.pe_start % pp->data_alignment)
            log_warn("WARNING: Ignoring data alignment %" PRIu64
                     " incompatible with --restorefile value (%"
                     PRIu64").", pp->data_alignment, pp->rp.pe_start);
    pp->data_alignment = 0;
The pp->data_alignment should be zeroed only if the pe_start is not
divisible by the data_alignment.
- the check for compatibility of the restored pe_start was incorrect too,
since it did not properly account for the dataalignmentoffset that
could be supplied together with dataalignment
The proper formula is:
X * dataalignment + dataalignmentoffset == pe_start
So it should be:
    if ((pp->rp.pe_start % pp->data_alignment) != pp->data_alignment_offset) {
            ...ignore supplied dataalignment and dataalignment offset...
    }
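Putting both fixes together, the corrected check is expected to look roughly like this (same variable names as above; the exact warning text is illustrative):

    if ((pp->rp.pe_start % pp->data_alignment) != pp->data_alignment_offset) {
            log_warn("WARNING: Ignoring data alignment %" PRIu64
                     " and offset %" PRIu64
                     " incompatible with --restorefile value (%"
                     PRIu64 ").", pp->data_alignment,
                     pp->data_alignment_offset, pp->rp.pe_start);
            pp->data_alignment = 0;
            pp->data_alignment_offset = 0;
    }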
Since commit f12ee43f2e calls destroy,
it started to check that all VGs are unlocked. However, when we become_daemon,
we simply reset locking (since the lock is still kept by the parent process).
So implement a simple 'reset' flag.
This patch allows users to convert existing logical volumes into
cache pool LVs. Since cache pool LVs consist of data and metadata
sub-LVs, there is also the '--poolmetadata' option (similar to thin_pool),
which allows the metadata device to be specified.
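For example (VG/LV names illustrative):
  lvconvert --type cache-pool --poolmetadata vg/lv_meta vg/lv_data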
Boolean algebra changes for process_each_lv_in_vg().
1st.
Drop the process_lv variable since it's not needed.
2nd.
process_lv was always initialized to 0 - so the condition was always true.
If the condition (!tags_supplied && !lvargs_supplied) evaluates as "true",
process_all is already set to 1, so skip vg tags evaluation.
3rd.
Move the check for a matching lv name in front of the lv tags check,
since this check can't be skipped for the lvargs_matched counter.
If this filter evaluates to true, skip the lv tags evaluation.
For merging a thin snapshot we have to do a couple of extra
checks before we allow this operation.
We pretend the thin snapshot and thin origin
are tied together, and we have to properly
maintain locking.
Add a PV create which takes a parameters object that
has get/set methods to configure PV creation.
Current get/set operations include:
- size
- pvmetadatacopies
- pvmetadatasize
- data_alignment
- data_alignment_offset
- zero
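Purely as an illustration of the described shape (hypothetical names; not the actual liblvm API):

    /* Hypothetical sketch only - names are illustrative, not the real liblvm API. */
    #include <stdint.h>

    struct pv_create_params {
            uint64_t size;
            uint64_t pvmetadatacopies;
            uint64_t pvmetadatasize;
            uint64_t data_alignment;
            uint64_t data_alignment_offset;
            int      zero;
    };

    static inline void pv_params_set_size(struct pv_create_params *p, uint64_t size)
    {
            p->size = size;
    }

    static inline uint64_t pv_params_get_size(const struct pv_create_params *p)
    {
            return p->size;
    }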
Reference: https://bugzilla.redhat.com/show_bug.cgi?id=880395
Signed-off-by: Tony Asleson <tasleson@redhat.com>