mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
Commit Graph

216 Commits

Author SHA1 Message Date
David Teigland
c609dedc2f Allow system.devices to be automatically created on first boot
An OS installer can create system.devices for the system and
disks, but an OS image cannot create the system-specific
system.devices.  The OS image can instead configure the
image so that lvm will create system.devices on first boot.

Image preparation steps to enable auto creation of system.devices:
- create empty file /etc/lvm/devices/auto-import-rootvg
- remove any existing /etc/lvm/devices/system.devices
- enable lvm-devices-import.path
- enable lvm-devices-import.service
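
For example, on a systemd-based image the preparation amounts to
(a sketch; paths and unit names as listed above):

  touch /etc/lvm/devices/auto-import-rootvg
  rm -f /etc/lvm/devices/system.devices
  systemctl enable lvm-devices-import.path lvm-devices-import.service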

On first boot of the prepared image:
- udev triggers vgchange -aay --autoactivation event <rootvg>
- vgchange activates LVs in the root VG
- vgchange finds the file /etc/lvm/devices/auto-import-rootvg,
  and no /etc/lvm/devices/system.devices, so it creates
  /run/lvm/lvm-devices-import
- lvm-devices-import.path is run when /run/lvm/lvm-devices-import
  appears, and triggers lvm-devices-import.service
- lvm-devices-import.service runs vgimportdevices --rootvg --auto
- vgimportdevices finds /etc/lvm/devices/auto-import-rootvg,
  and no system.devices, so it creates system.devices containing
  PVs in the root VG, and removes /etc/lvm/devices/auto-import-rootvg
  and /run/lvm/lvm-devices-import

Run directly, vgimportdevices --rootvg (without --auto) will create
a new system.devices for the root VG, or will add devices for the
root VG to an existing system.devices.
2024-05-21 16:29:12 -05:00
Zdenek Kabelac
84b084c9b6 configure.ac: define DEFAULT_PROC_DIR in one place
Let's move the proc path define into include/configure.h so it can
be easily used across the source base.

Configure defines it - but ATM we do not provide any configure
option to change it; there should be no need to ever change it.
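
A sketch of the resulting entry in include/configure.h (the value is
assumed to be the standard path):

  /* Default directory for procfs */
  #define DEFAULT_PROC_DIR "/proc"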
2024-04-15 13:38:44 +02:00
David Teigland
e59027e4f7 devices file: back up each version
Create backup copies of system.devices in /etc/lvm/devices/backup
named system.devices-YYYYMMDD.HHMMSS.NNNN.  NNNN is the version
counter from the file.

Each time that an lvm command writes a new system.devices file,
it also writes the same file in the backup directory.

A new comment line is added to system.devices with HASH=<num>
where <num> is a crc calculated from the uncommented lines in
system.devices.  This lets lvm detect if the file has been
modified outside of lvm itself.

If system.devices is edited directly, the next time a command
reads the file, the crc will not match the HASH value.  The
command will then rewrite system.devices with the correct HASH
value, and create a backup reflecting the edits.

A default limit of 50 backup files is kept, configurable by
lvm.conf devicesfile_backup_limit (set to 0 to disable backups.)
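
For example, with hypothetical date and version values, version 7 of
the file would be backed up as:

  /etc/lvm/devices/backup/system.devices-20240215.114037.0007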
2024-02-15 11:40:37 -06:00
David Teigland
35cefa50fa device_id: change default search_for_devnames to all
Problematic scenario:
. the device for a PV has no wwid, so it's identified in system.devices
  with IDTYPE=devname IDNAME=/dev/foo
. user adds/enables a wwid for the device
. on reboot, the device name changes, e.g. now /dev/bar
. the code that searches for the new device name includes an
  optimization to skip looking on devs that have a wwid, on
  the basis that a device with a wwid won't have IDTYPE=devname
. this optimization causes lvm to not look for the PV on /dev/bar
  since that device now has a wwid, so the PV is not found
. the optimization is enabled by search_for_devnames="auto"
. change the default to search_for_devnames="all" which does not
  use the problematic optimization
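
The new default expressed as an lvm.conf sketch (section assumed):

  devices {
      search_for_devnames = "all"
  }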
2023-11-02 13:00:50 -05:00
Zdenek Kabelac
4d2311655b lvm-exec: refactor code
Add prepare_exec_args() for reading the option lists for
thin/cache_repair and thin/cache_check.
2023-07-17 12:44:23 +02:00
Zdenek Kabelac
fec3087eef thin: add cfg support for thin_restore and cache_restore
Add configurable paths for thin_restore and cache_restore.
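
A sketch of the new settings in lvm.conf (setting names assumed by
analogy with the existing *_repair_executable options):

  global {
      thin_restore_executable = "/usr/sbin/thin_restore"
      cache_restore_executable = "/usr/sbin/cache_restore"
  }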
2023-06-23 18:06:22 +02:00
David Teigland
79e67fc5e4 device id: add new types using values from vpd_pg83
The new device_id types are: wwid_naa, wwid_eui, wwid_t10.
The new types use the specific wwid type in their name.
lvm currently gets the values for these types by reading
the device's vpd_pg83 sysfs file (this could change in the
future if better methods become available for reading the
values.)

If a device is added to the devices file using one of these
types, prior versions of lvm will not recognize the types
and will be unable to use the devices.

When adding a new device, lvm continues to first use sys_wwid
from the sysfs wwid file.  If the device has no sysfs wwid file,
lvm now attempts to use one of the new types from vpd_pg83.

If a devices file entry with type sys_wwid does not match a
given device's sysfs wwid file, the sys_wwid value will also
be compared to that device's other wwids from its vpd_pg83 file.
If the kernel changes the wwid type reported from the sysfs
wwid file, e.g. from a device's t10 id to its naa id, then lvm
should still be able to match it correctly using the vpd_pg83
data which will include both ids.
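
Illustrative system.devices entries using the new types (IDNAME values
hypothetical, other fields omitted):

  IDTYPE=wwid_naa IDNAME=naa.600508b1001c79ade5a34c3f0f5e4a21 ...
  IDTYPE=wwid_eui IDNAME=eui.0025388a01021d2c ...
  IDTYPE=wwid_t10 IDNAME=t10.ATA_____EXAMPLE_SSD_____12345678 ...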
2022-10-10 11:47:29 -05:00
David Teigland
726dd25969 add hints interface to the pvs_online file information
The information in /run/lvm/pvs_online/<pvid> files can
be used to build a list of devices for a given VG.

The pvscan -aay command has long used this information to
activate a VG while scanning only devices in that VG, which
is an important optimization for autoactivation.

This patch implements the same thing through the existing
device hints interface, so that the optimization can be
applied elsewhere.  A future patch will take advantage of
this optimization in vgchange -aay, which is now used in
place of pvscan -aay for event activation.
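
A sketch of what a /run/lvm/pvs_online/<pvid> file might contain
(layout assumed: the device's major:minor and the VG name):

  8:16
  vg:vg0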
2021-11-04 10:58:16 -05:00
Zdenek Kabelac
b73e1cd4b3 configure: updates 2021-10-14 23:34:11 +02:00
David Teigland
062ea3c418 fix syslog setting
Just setting lvm.conf level=N should not send messages to
syslog (now the journal by default.)

Sending messages to syslog should require setting lvm.conf
log { syslog=1 level=N }.
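
As an lvm.conf sketch (the level value is illustrative):

  log {
      syslog = 1
      level = 7
  }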
2021-10-11 17:11:01 -05:00
David Teigland
6c22392a3f config: change default use_devicesfile to 1 2021-10-07 12:06:49 -05:00
David Teigland
9048565093 devices: rework libudev usage
related to config settings:
  obtain_device_list_from_udev (controls if lvm gets
    a list of devices from readdir /dev or from libudev)
  external_device_info_source (controls if lvm asks
    libudev for device information)

. Make the obtain_device_list_from_udev setting
  affect only the choice of readdir /dev vs libudev.
  The setting no longer controls if udev is used for
  device type checks.

. Change obtain_device_list_from_udev default to 0.
  This helps avoid boot timeouts due to slow libudev
  queries, avoids reported failures from
  udev_enumerate_scan_devices, and avoids delays from
  "device not initialized in udev database" errors.
  Even without errors, for a system booting with 1024 PVs,
  lvm2-pvscan times improve from about 100 sec to 15 sec,
  and the pvscan command from about 64 sec to about 4 sec.

. For external_device_info_source="none", remove all
  libudev device info queries, and use only lvm
  native device info.

. For external_device_info_source="udev", first check
  lvm native device info, then check libudev info.

. Remove sleep/retry loop when attempting libudev
  queries for device info.  udev info will simply
  be skipped if it's not immediately available.

. Only set up a libudev connection if it will be used by
  obtain_device_list_from_udev/external_device_info_source.

. For native multipath component detection, use
  /etc/multipath/wwids.  If a device has a wwid
  matching an entry in the wwids file, then it's
  considered a multipath component.  This is
  necessary to natively detect multipath
  components when the mpath device is not set up.
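
A sketch of the relevant lvm.conf settings with the new default for
obtain_device_list_from_udev:

  devices {
      obtain_device_list_from_udev = 0
      external_device_info_source = "none"
  }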
2021-07-13 11:11:23 -05:00
Zdenek Kabelac
2c6a2b6e86 vdo: support vdo_pool_header_size
Add a profilable configurable setting for the vdo pool header size, which
is used as 'extra' empty space at the front and end of a vdo-pool device,
to avoid having a disk in the system that may present the same data as the
real vdo LV.

For some conversion cases, however, we may need to allow using a '0'
header size.

TODO: in this case we may eventually avoid adding the 'linear' mapping layer
in the future - but this requires further modification over the lvm code base.
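
As an lvm.conf sketch (section and default value assumed):

  allocation {
      vdo_pool_header_size = 512   # KiB
  }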
2021-06-28 20:41:07 +02:00
David Teigland
83fe6e720f device usage based on devices file
The LVM devices file lists devices that lvm can use.  The default
file is /etc/lvm/devices/system.devices, and the lvmdevices(8)
command is used to add or remove device entries.  If the file
does not exist, or if lvm.conf includes use_devicesfile=0, then
lvm will not use a devices file.  When the devices file is in use,
the regex filter is not used, and the filter settings in lvm.conf
or on the command line are ignored.

LVM records devices in the devices file using hardware-specific
IDs, such as the WWID, and attempts to use subsystem-specific
IDs for virtual device types.  These device IDs are also written
in the VG metadata.  When no hardware or virtual ID is available,
lvm falls back to using the unstable device name as the device ID.
When devnames are used, lvm performs extra scanning to find
devices if their devname changes, e.g. after reboot.

When proper device IDs are used, an lvm command will not look
at devices outside the devices file, but when devnames are used
as a fallback, lvm will scan devices outside the devices file
to locate PVs on renamed devices.  A config setting
search_for_devnames can be used to control the scanning for
renamed devname entries.

Related to the devices file, the new command option
--devices <devnames> allows a list of devices to be specified for
the command to use, overriding the devices file.  The listed
devices act as a sort of devices file in terms of limiting which
devices lvm will see and use.  Devices that are not listed will
appear to be missing to the lvm command.

Multiple devices files can be kept in /etc/lvm/devices, which
allows lvm to be used with different sets of devices, e.g.
system devices do not need to be exposed to a specific application,
and the application can use lvm on its own set of devices that are
not exposed to the system.  The option --devicesfile <filename> is
used to select the devices file to use with the command.  Without
the option set, the default system devices file is used.

Setting --devicesfile "" causes lvm to not use a devices file.

An existing, empty devices file means lvm will see no devices.

The new command vgimportdevices adds PVs from a VG to the devices
file and updates the VG metadata to include the device IDs.
vgimportdevices -a will import all VGs into the system devices file.

LVM commands run by dmeventd do not use a devices file by default,
and will look at all devices on the system.  A devices file can
be created for dmeventd (/etc/lvm/devices/dmeventd.devices).  If
this file exists, lvm commands run by dmeventd will use it.
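
Illustrative commands (device names hypothetical; the --devices list
syntax is assumed to be comma-separated):

  lvmdevices --adddev /dev/sdb        # add a device to system.devices
  pvs --devicesfile ""                # run without any devices file
  pvs --devices /dev/sdc,/dev/sdd     # use only the listed devices
  vgimportdevices -a                  # import all VGs into system.devices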

Internal implementation:

- device_ids_read - read the devices file
  . add struct dev_use (du) to cmd->use_devices for each devices file entry
- dev_cache_scan - get /dev entries
  . add struct device (dev) to dev_cache for each device on the system
- device_ids_match - match devices file entries to /dev entries
  . match each du on cmd->use_devices to a dev in dev_cache, using device ID
  . on match, set du->dev, dev->id, dev->flags MATCHED_USE_ID
- label_scan - read lvm headers and metadata from devices
  . filters are applied, those that do not need data from the device
  . filter-deviceid skips devs without MATCHED_USE_ID, i.e.
    skips /dev entries that are not listed in the devices file
  . read lvm label from dev
  . filters are applied, those that use data from the device
  . read lvm metadata from dev
  . add info/vginfo structs for PVs/VGs (info is "lvmcache")
- device_ids_find_renamed_devs - handle devices with unstable devname ID
  where devname changed
  . this step only needed when devs do not have proper device IDs,
    and their dev names change, e.g. after reboot sdb becomes sdc.
  . detect incorrect match because PVID in the devices file entry
    does not match the PVID found when the device was read above
  . undo incorrect match between du and dev above
  . search system devices for new location of PVID
  . update devices file with new devnames for PVIDs on renamed devices
  . label_scan the renamed devs
- continue with command processing
2021-02-23 16:43:32 -06:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial thin-pool support used a slightly smaller maximum size of 15.81GiB
for thin-pool metadata.  However, the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size, as it has been using hard cropping
of the loaded metadata device to avoid the warning the kernel prints
when the size is bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these metadata sizes are affected.
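
As an lvm.conf sketch (the default shown):

  allocation {
      thin_pool_crop_metadata = 0
  }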

Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too-large extent size.

With cropping enabled (=1), lvm2 preserves the old 15.81GiB limitation
and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).

Thin-pool metadata bigger than 15.81G now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer
from various mismatches between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!

The patch also better handles resizing of thin-pool metadata and prevents
resizing beyond the usable size of 15.88GiB.  Resizing beyond 15.81GiB
automatically switches the pool to the no-crop version.  Even with existing
bigger thin-pool metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes
the change.

The patch gives better control over the 'converted' metadata LV and
reports less confusing messages during conversion.

The patch set also moves the code for updating min/max into pool_manip.c
for better sharing with the cache_pool code.
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid pollution of metadata with some 'garbage' content, or eventually
a leak of stale data in case the user wants to upload metadata somewhere,
ensure the metadata device is fully zeroed upon allocation.

This may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0

TODO: add zeroing for extension of metadata volume.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
aad91330fe vdo: raise VDO default bio threads to 4
Since 'vdo create' tends to use this setting,
update lvm2 to provide the same default.
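
As an lvm.conf sketch (setting name and section assumed):

  allocation {
      vdo_bio_threads = 4
  }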
2019-10-04 17:31:55 +02:00
David Teigland
db98a6e362 Additional MD component checking
If udev info (which would indicate whether a device is an MD
component) is missing for a device, then do an end-of-device read to
check if a PV is an MD component.  (This is skipped when
using hints since we already know devs in hints are good.)

A new config setting md_component_checks can be used to
disable the additional end-of-device MD checks, or to
always enable end-of-device MD checks.

When both hints and udev info are disabled/unavailable,
the end of PVs will now be scanned by default.  If md
devices with end-of-device superblocks are not being
used, the extra I/O overhead can be avoided by setting
md_component_checks="start".
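
For example, to avoid the extra end-of-device I/O (a sketch; section
assumed):

  devices {
      md_component_checks = "start"
  }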
2019-06-07 13:27:16 -05:00
Zdenek Kabelac
1cc690e911 thin: max thin 2019-03-20 14:37:44 +01:00
David Teigland
7edbf8a441 io: increase the default io memory from 4 to 8 MiB
This is the default bcache size that is created at the
start of the command.  It needs to be large enough to
hold a single copy of metadata for a given VG, or the
VG cannot be read or written (since the entire VG would
not fit into available memory.)

Increasing the default reduces the chances of anyone
needing to increase the default to use their VG.

The size can be set in lvm.conf global/io_memory_size;
the lower limit is 4 MiB and the upper limit is 128 MiB.
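
As an lvm.conf sketch (the value is assumed to be in KiB, i.e. 8 MiB):

  global {
      io_memory_size = 8192
  }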
2019-03-04 12:14:06 -06:00
David Teigland
dd8d083795 config: add new setting io_memory_size
which defines the amount of memory that lvm will allocate
for bcache.  Increasing this setting is required if it is
smaller than a single copy of VG metadata.
2019-03-04 11:36:21 -06:00
David Teigland
9aea6ae956 logging: add command[pid] and timestamp to file and verbose output
Without this, the output from different commands in a single
log file could not be separated.

Change the default "indent" setting to 0 so that the default
debug output does not include variable spaces in the middle
of debug lines.
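
The previous behaviour can be restored with an lvm.conf sketch like:

  log {
      indent = 1
  }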
2019-02-26 10:03:44 -06:00
David Teigland
7be6791e70 config: change scan_lvs default to 0
so that lvm does not scan LVs for PVs by default.
2019-02-20 13:30:46 -06:00
David Teigland
6620dc9475 add device hints to reduce scanning
Save the list of PVs in /run/lvm/hints.  These hints
are used to reduce scanning in a number of commands
to only the PVs on the system, or only the PVs in a
requested VG (rather than all devices on the system.)
2019-01-15 10:23:47 -06:00
Zdenek Kabelac
3320ab8334 lib: move towards v2 version of VDO format
Drop the very old original format of the VDO target and focus on the V2
version, so some variables were renamed or replaced.
No compatibility is preserved (on the assumption that so far this is an
experimental feature and there are no real users).

Note - VDO currently calls this version 6.2.
2018-12-20 13:26:55 +01:00
David Teigland
904e1e3d26 Place the first PE at 1 MiB for all defaults
. When using default settings, this commit should change
  nothing.  The first PE continues to be placed at 1 MiB
  resulting in a metadata area size of 1020 KiB (for
  4K page sizes; slightly smaller for larger page sizes.)

. When default_data_alignment is disabled in lvm.conf,
  align pe_start at 1 MiB, based on a default metadata area
  size that adapts to the page size.  Previously, disabling
  this option would result in mda_size that was too small
  for common use, and produced a 64 KiB aligned pe_start.

. Customized pe_start and mda_size values continue to be
  set as before in lvm.conf and command line.

. Remove the configure option for setting default_data_alignment
  at build time.

. Improve alignment related option descriptions.

. Add section about alignment to pvcreate man page.

Previously, DEFAULT_PVMETADATASIZE was 255 sectors.
However, the fact that the config setting named
"default_data_alignment" has a default value of 1 (MiB)
meant that DEFAULT_PVMETADATASIZE was having no effect.

The metadata area size is the space between the start of
the metadata area (page size offset from the start of the
device) and the first PE (1 MiB by default due to
default_data_alignment 1.)  The result is a 1020 KiB metadata
area on machines with 4KiB page size (1024 KiB - 4 KiB),
and smaller on machines with larger page size.
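
A rough sketch of the resulting default layout (4KiB page size; the
diagram is illustrative only):

  0        4KiB (page size)                     1MiB
  |--------|-------------------------------------|------------->
   header   metadata area (1024 - 4 = 1020 KiB)   first PE (data)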

If default_data_alignment was set to 0 (disabled), then
DEFAULT_PVMETADATASIZE 255 would take effect, and produce a
metadata area that was 188 KiB and pe_start of 192 KiB.
This was too small for common use.

This is fixed by making the default metadata area size a
computed value that matches the value produced by
default_data_alignment.
2018-11-26 16:36:50 -06:00
David Teigland
ca66d52032 io: use sync io if aio fails
io_setup() for aio may fail if a system has reached the
aio request limit.  In this case, fall back to using
sync io.  Also, lvm use of aio can be disabled entirely
with config setting global/use_aio=0.

The system limit for aio requests can be seen from
  /proc/sys/fs/aio-max-nr

The current usage of aio requests can be seen from
  /proc/sys/fs/aio-nr

The system limit for aio requests can be increased by
setting fs.aio-max-nr using sysctl.
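
For example (value illustrative):

  sysctl -w fs.aio-max-nr=1048576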

Also add last-byte limit to the sync io code.
2018-11-20 09:13:20 -06:00
David Teigland
bfcecbbce1 filter: add config setting to skip scanning LVs
devices/scan_lvs (default 1) determines whether lvm
will scan LVs for layered PVs.  The lvm behavior has
always been to scan LVs, but it's rare for LVs to have
layered PVs, and much more common for there to be many
LVs that substantially slow down scanning with no benefit.

This is implemented in the usable filter, and has the
same effect as listing all LVs in the global_filter.
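
To skip scanning LVs (a sketch):

  devices {
      scan_lvs = 0
  }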
2018-08-30 09:59:50 -05:00
Zdenek Kabelac
faa126882a dmeventd: lvm vdo support 2018-07-09 15:29:16 +02:00
Zdenek Kabelac
0d9a4c6989 lib: new vdo segment configurable options
Configurables for the vdo segment, with their default values.
Also specify their ranges with minimal and maximal values.
2018-07-09 15:28:35 +02:00
David Teigland
328303d4d4 Remove unused device error counting 2018-06-15 14:04:39 -05:00
David Teigland
d9a77e8bb4 lvmcache: simplify metadata cache
The copy of VG metadata stored in lvmcache was not being used
in general.  It pretended to be a generic VG metadata cache,
but was not being used except for clvmd activation.  There
it was used to avoid reading from disk while devices were
suspended, i.e. in resume.

This removes the code that attempted to make this look
like a generic metadata cache, and replaces it with
something narrowly targeted to what it's actually used for.

This is a way of passing the VG from suspend to resume in
clvmd.  Since in the case of clvmd one caller can't simply
pass the same VG to both suspend and resume, suspend needs
to stash the VG somewhere that resume can grab it from.
(resume doesn't want to read it from disk since devices
are suspended.)  The lvmcache vginfo struct is used as a
convenient place to stash the VG to pass it from suspend
to resume, even though it isn't related to the lvmcache
or vginfo.  These suspended_vg* vginfo fields should
not be used or touched anywhere else, they are only to
be used for passing the VG data from suspend to resume
in clvmd.  The VG data being passed between suspend and
resume is never modified, and will only exist in the
brief period between suspend and resume in clvmd.

suspend has both old (current) and new (precommitted)
copies of the VG metadata.  It stashes both of these in
the vginfo prior to suspending devices.  When vg_commit
is successful, it sets a flag in vginfo as before,
signaling the transition from old to new metadata.

resume grabs the VG stashed by suspend.  If the vg_commit
happened, it grabs the new VG, and if the vg_commit didn't
happen it grabs the old VG.  The VG is then used to resume
LVs.

This isolates clvmd-specific code and usage from the
normal lvm vg_read code, making the code simpler and
the behavior easier to verify.

Sequence of operations:

- lv_suspend() has both vg_old and vg_new
  and stashes a copy of each onto the vginfo:
  lvmcache_save_suspended_vg(vg_old);
  lvmcache_save_suspended_vg(vg_new);

- vg_commit() happens, which causes all clvmd
  instances to call lvmcache_commit_metadata(vg).
  A flag is set in the vginfo indicating the
  transition from the old to new VG:
  vginfo->suspended_vg_committed = 1;

- lv_resume() needs either vg_old or vg_new
  to use in resuming LVs.  It doesn't want to
  read the VG from disk since devices are
  suspended, so it gets the VG stashed by
  lv_suspend:
  vg = lvmcache_get_suspended_vg(vgid);

If the vg_commit did not happen, suspended_vg_committed
will not be set, and in this case, lvmcache_get_suspended_vg()
will return the old VG instead of the new VG, and it will
resume LVs based on the old metadata.
2018-04-20 11:22:45 -05:00
Joe Thornber
00f1b208a1 [io paths] Unpick agk's aio stuff 2018-04-20 11:03:58 -05:00
Alasdair G Kergon
3e29c80122 device: Queue any aio beyond defined limits. 2018-02-08 20:15:37 +00:00
Alasdair G Kergon
8c7bbcfb0f device: Basic config and setup to support async I/O. 2018-02-08 20:15:14 +00:00
Zdenek Kabelac
db5938a4f8 cleanup: define really uses KB
Also clean up the units for the DEFAULT_THIN_POOL_OPTIMAL_METADATA_SIZE
define (128MB) and update the calcs for it.
2017-06-09 21:49:19 +02:00
Zdenek Kabelac
ba3d3210d7 cleanup: use DM limit define
For the calculation, use the size already defined in libdm, which gives
a better estimation of the maximal size of thin-pool metadata.
2017-06-08 11:07:58 +02:00
Zdenek Kabelac
719d099693 cleanup: rename internal define
More descriptive name of #define.
2017-06-08 11:07:18 +02:00
Heinz Mauelshagen
5ae7a016b8 lvcreate: raise default raid regionsize to 2MiB
Related: rhbz1392947.
2017-04-13 16:10:49 +02:00
Zdenek Kabelac
3018cdcaa7 fsadm: support configurable full path
Just like with the other tools lvm2 uses, allow defining a fully
configurable path.

The default is selected as $PREFIX/sbin/fsadm
2017-04-12 21:34:08 +02:00
Zdenek Kabelac
4a394f410d cache: introduce allocation/cache_metadata_format
Add a new profilable configuration setting to let the user select
which metadata format a created cache pool should use.

By default the 'best' available format is autodetected at runtime,
but the user may enforce format 1 or 2 ATM.

The code also detects whether the cache target supports metadata2.

In case of trouble, the user may easily disable usage of this feature
by placing 'metadata2' into the global/cache_disabled_features list.
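
A sketch of enforcing a format (the 0 default is assumed to mean
autodetect):

  allocation {
      cache_metadata_format = 2
  }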
2017-03-10 19:33:01 +01:00
Heinz Mauelshagen
64a2fad5d6 lvconvert/lvcreate: raise maximum number of raid images
Because of constraints in renaming shifted rimage/rmeta LV names,
the current RaidLV limit is a maximum of 10 SubLV pairs.

With the previous introduction of the reshaping infrastructure, that
constraint got removed.

The kernel supports 253 since dm-raid target 1.9.0; older kernels, 64.

Raise the maximum number of RaidLV rimage/rmeta pairs to 64.
If we want to raise past 64, we have to introduce a check for
the kernel supporting it in lvcreate/lvconvert.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
34caf83172 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces the changes to call the reshaping infrastructure
from lv_raid_convert().

Changes:
- add reshaping calls from lv_raid_convert()
- add command definitions for reshaping to tools/command-lines.in
- fix raid_rimage_extents()
- add 2 new test scripts lvconvert-raid-reshape-linear_to_striped.sh
  and lvconvert-raid-reshape-striped_to_linear.sh to test
  the linear <-> striped multi-step conversions
- add lvconvert-raid-reshape.sh reshaping tests
- enhance lvconvert-raid-takeover.sh with new raid10 tests

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Zdenek Kabelac
04a9cad499 config: new option dmeventd/thin_command
This setting allows configuring which command gets executed
when thin-pool fullness goes from 50%..100%
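
A sketch of the setting (the command shown is assumed, not taken from
this message):

  dmeventd {
      thin_command = "lvm lvextend --use-policies"
  }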
2017-01-20 23:53:26 +01:00
Zdenek Kabelac
f8234d6e5f libdm: add human R|readable units
When showing sizes with 'H|human' units we use standard rounding.
This however confuses users from time to time,
when the printed number uses some bigger unit, i.e. GiB, and there is
just a tiny fraction of space missing.

So here is a real-life example with the new 'r' unit.

$ lvs

  LV    VG Attr       LSize  Pool Origin
  lvol0 vg -wi-a-----  1.99g
  lvol1 vg -wi-a----- <2.00g
  lvol2 vg -wi-a----- <2.01g

The meaning is - lvol1 has 'slightly' less than 2.00g - from the '<' sign
the user can be aware the LV doesn't have a full 2.00GiB in size, so they
will be less surprised that allocation of a 2G volume will not succeed.

$ vgs
  VG #PV #LV #SN Attr   VSize  VFree
  vg   2   2   0 wz--n- <6,00g <2,01g

For uses needing the 'old' undecorated human unit, simply continue
to use 'H|h' units.

The new R|r may further change if we recognize some
other way to improve readability.
2017-01-20 23:52:17 +01:00
Zdenek Kabelac
b493811968 cache: introduce cache_pool_max_chunks
Introduce a 'hard limit' for the max number of cache chunks,
for when the cache target operates with too many chunks (>10e6).

A user who is aware of the related possible troubles
may increase the limit in lvm.conf.

Also verbosely inform the user about a possible solution.

The code works for both lvcreate and lvconvert.

lvconvert fully supports changing chunk_size when caching an LV
(and validates for compatible settings).
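
As an lvm.conf sketch (the 0 default is assumed to mean 'use the
predefined hard limit'):

  allocation {
      cache_pool_max_chunks = 0
  }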
2016-08-29 20:47:31 +02:00
Heinz Mauelshagen
a185a2bea2 lvcreate/lvconvert: fix validation of maximum mirrors/stripes
Enforce mirror/raid0/1/10/4/5/6 type specific maximum images when
creating LVs or converting them from mirror <-> raid1.

Document those maxima in the lvcreate/lvconvert man pages.

- resolves rhbz1366060
2016-08-12 19:14:28 +02:00
Heinz Mauelshagen
7eb7909193 lvcreate: raid0 needs default number of stripes
Commit 3928c96a37 introduced
new defaults for raid number of stripes, which may cause
backwards compatibility issues with customer scripts.

Adding configurable option 'raid_stripe_all_devices' defaulting
to '0' (i.e. off = new behaviour) to select the old behaviour
of using all PVs in the VG or those provided on the command line.

In case any scripts rely on the old behaviour, just set
'raid_stripe_all_devices = 1'.
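
As an lvm.conf sketch (section assumed):

  allocation {
      raid_stripe_all_devices = 1
  }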

- resolves rhbz1354650
2016-07-20 17:20:15 +02:00
Alasdair G Kergon
79446ffad7 raid: Infrastructure for raid takeover. 2016-06-28 02:42:30 +01:00
Peter Rajnoha
1127b090bd conf: add log/command_log_selection config setting 2016-06-20 11:33:43 +02:00