Implementation described in doc/lvm2-raid.txt.
Basic support includes:
- ability to create RAID 1/4/5/6 arrays
- ability to delete RAID arrays
- ability to display RAID arrays
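As a rough sketch of the abilities listed above (VG, LV and device names are
illustrative, not taken from the patch):
# lvcreate --type raid1 -m 1 -L 1G -n lv_mirror vg00
# lvcreate --type raid5 -i 3 -L 1G -n lv_raid5 vg00
# lvs -a -o name,segtype,devices vg00
# lvremove vg00/lv_raid5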
Notable missing features (not included in this patch):
- ability to clean-up/repair failures
- ability to convert RAID segment types
- ability to monitor RAID segment types
Update pv_manip.c to properly account for the case when pe_start=0 and the
first physical extent is to be released (for now, the first extent is skipped
to avoid discarding the PV label).
Add a new function, dm_task_set_add_node(), to select between two types
of node creation in the device directory.
DM_ADD_NODE_ON_RESUME is now the default and ensures the node is created on
resume. The original behaviour remains available via DM_ADD_NODE_ON_CREATE,
in which case the node is created during 'dmsetup create --notable'.
Two new options for dmsetup create are added for the user:
[{--addnodeonresume | --addnodeoncreate }]
Properly working node creation on resume is needed for proper operation
stacking and for the ability to correctly check in which state the device
should be after the whole udev transaction.
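A sketch of the two modes with dmsetup (device names and the trivial zero
table are illustrative only):
# dmsetup create foo --notable --addnodeoncreate
(the node for "foo" appears in the device directory already at create time)
# dmsetup create bar --notable --addnodeonresume
# dmsetup load bar --table "0 8 zero"
# dmsetup resume bar
(the node for "bar" appears only once the device has been resumed)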
Add ->target_name to segtype_handler to allow a more specific target
name to be returned based on the state of the segment.
Result of trying to merge a snapshot using a kernel that doesn't have
the snapshot-merge target:
Before:
# lvconvert --merge vg/snap
Can't expand LV lv: snapshot target support missing from kernel?
Failed to suspend origin lv
After:
# lvconvert --merge vg/snap
Can't process LV lv: snapshot-merge target support missing from kernel?
Failed to suspend origin lv
Unable to merge LV "snap" into its origin.
Allow a physical volume that has gone missing previously, due to a transient
device failure, to be re-added without re-initialising it.
Signed-off-by: Petr Rockai <prockai@redhat.com>
Reviewed-by: Alasdair Kergon <agk@redhat.com>
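The excerpt above does not name the new option; assuming it is vgextend's
--restoremissing (the LVM2 option for exactly this situation), recovery after
a transient failure might look like (VG and device names illustrative):
# vgextend --restoremissing vg00 /dev/sdb1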
Introduce --norestorefile to allow the user to override the new requirement.
This can also be overridden with "devices/require_restorefile_with_uuid"
in lvm.conf -- however the default is 1.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
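A sketch of both forms, with an illustrative archive path and a placeholder
uuid taken from that backup:
# pvcreate --uuid "<uuid-from-backup>" --restorefile /etc/lvm/archive/vg00_00000.vg /dev/sdb1
# pvcreate --uuid "<uuid-from-backup>" --norestorefile /dev/sdb1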
pvchange: Add --metadataignore description
vgchange: Fix minor formatting
pvcreate: Update metadataignore description to refer to pvchange
lvm.conf: Refer to pvcreate and pvchange for metadata options.
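For reference, the option being documented can be toggled on an existing PV
like this (device name illustrative):
# pvchange --metadataignore y /dev/sdb1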
Allow parsing of --vgmetadatacopies for vgcreate. Accept
--metadatacopies as a synonym for --vgmetadatacopies.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
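A sketch of both accepted spellings (VG and device names illustrative):
# vgcreate --vgmetadatacopies 2 vg00 /dev/sda1 /dev/sdb1 /dev/sdc1
# vgcreate --metadatacopies 2 vg01 /dev/sdd1 /dev/sde1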
This patch adds a vgmetadatacopies parameter for metadata balancing.
This parameter provides a simple way for users to define a policy that LVM
uses to place metadata on PVs automatically. The behavior is implemented
inside LVM by managing the 'ignore' mda bits. We chose the name
'vgmetadatacopies' as it is a natural extension of the existing parameter
'pvmetadatacopies' / 'metadatacopies' in pvcreate.
This is a first step toward VG-parameter-based metadata balancing. Most users
will probably want to state that a certain number of PVs should contain
metadata, and they may be less concerned about the specific number of metadata
copies in the volume group. However, with the default values (pvmetadatacopies
is 1 by default), the number of metadata copies in the volume group and the
number of PVs with metadata are the same. In the future we could add
vgmetadatacopiespvs to define more specifically the number of PVs in the
VG that contain metadata, but for now we start with this parameter.
Another possible future extension would be to define a specific PV tag
to mark the set of PVs that should be used for metadata balancing. This
tag-based approach could be used in conjunction with 'vgmetadatacopies'.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
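As a hedged illustration of the policy on an existing VG (the vgchange option
and reporting field names below are the current LVM2 spellings; treat them as
assumptions rather than part of this patch):
# vgchange --vgmetadatacopies 2 vg00
# vgs -o vg_name,vg_mda_count,vg_mda_copies vg00
# pvs -o pv_name,pv_mda_count vg00
LVM then sets or clears the 'ignore' bit on individual metadata areas so that
the requested number of copies stays in use.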