mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
Commit Graph

591 Commits

Author SHA1 Message Date
Zdenek Kabelac
fc477192e5 cleanup: drop unused code
Code was related to the long-obsolete vgconvert conversion
from lvm1 to lvm2 and was apparently missed when that support
was removed.
2024-03-28 18:18:37 +01:00
Zdenek Kabelac
27bdd038a8 thin: validate usable volume for external origin
When creating an external origin via 'lvcreate --type thin',
add validation that the LV is usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not perform unnecessary operations.
2024-03-16 10:40:54 +01:00
Zdenek Kabelac
55937f9c8e vdo: read VDO stats via dm message
As the sysfs interface is seen as somewhat obsolete, switch the code
to prefer the DM 'stats' message to get the info we currently need
for the VDO pool target.
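
For illustration, such a message could be sent with dmsetup (the
mapped device name here is hypothetical):

$ dmsetup message vg-vpool0-vpool 0 stats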
2024-02-19 14:20:39 +01:00
Zdenek Kabelac
9d9e43b87f vdo: correct vdo header size
The previous patch that introduced support for thin-pool with vdo
did not handle the header size correctly - as this part is not fully
usable yet.  We are going to try to use 0, but the current state of the
code is not yet compliant with this logic, so keep vdo_header_size during
conversion and also correctly pass through virtual_extents to vdo formatting.
2024-01-17 17:30:10 +01:00
Zdenek Kabelac
6ec2f1f44b vdo: refactor conversion to vdo lv
Introduce struct vdo_convert_params {} to pass-in all the parameters
needed for the conversion of an LV to a vdopool + vdo LV.

Function convert_vdo_lv() is also able to create a new LV and swap
segments, so the passed-in LV can later be used for further
conversion; this refactoring makes it ready for more enhanced
usage.
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
7544b9fc10 vdo: refactor vdo_params passing
Introduce vdo_convert_params and use vdo_params from this structure
also with lvcreate_params.

Later we will use this for conversion of the thin-pool data volume to VDO.
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
4ccedceaa8 thin_pool: introduce --pooldatavdo
Introduce the new argument --pooldatavdo y|n.
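
A minimal usage sketch, assuming lvcreate accepts the option when
creating a thin-pool (VG name and size are hypothetical):

$ lvcreate --type thin-pool --pooldatavdo y -L 100G -n pool vg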
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
c496f80379 pool: code refactoring
Move pool-related manipulation code to pool_manip.c.
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
a176184b7d thin_pool: code refactoring
Move common code into thin_pool_set_params()
(just like we use cache_set_params).
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
2135bbe558 vdo: flip return value to int
Change API to return just 0/1.
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
2f3d8659b1 commands: add lv_is_writable 2023-08-14 17:02:11 +02:00
Zdenek Kabelac
39457234db lvconvert: support conversion to thin volume
Update the pool conversion function to also handle conversion of a
thick LV to a thin LV, by moving the thick LV into the thin pool data
LV and creating a fully provisioned thin LV on top of this volume.

Rework the existing conversion to use insert_layer_for_lv so
the uuid is now kept with the thin-pool - this should however not
really matter as we are doing a full deactivation & activation cycle.

With conversion to a thin LV, the user can use the same set of
arguments to set the chunk size.

TODO: add some smart code to decide the best values for chunk sizes.
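
A hedged sketch of the invocation, assuming the conversion follows
the usual lvconvert syntax (LV name hypothetical):

$ lvconvert --type thin vg/thick_lv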
2023-07-10 17:13:32 +02:00
Zdenek Kabelac
558890ad0e pool: support passing data_lv for recalculation
Support passing data_lv as a parameter when it is not yet
attached to the pool.
2023-07-10 17:13:32 +02:00
Zdenek Kabelac
d43a79eec9 segtype: add missing macros for error and zero segment
Add macros for checking error and zero segment as we do
with other segtypes.
2023-07-10 17:13:32 +02:00
Zdenek Kabelac
e84b00964f pool: avoid using artificial name internally 2023-06-29 13:55:27 +02:00
Zdenek Kabelac
631762d6db metadata-exported.h: correcting definition
Fix a minor mismatch between the definition and declaration of
update_thin_pool_params().
2023-02-10 17:50:27 +01:00
Zdenek Kabelac
2451bc568f vdo: fix and enhance vdo constraint checking
Enhance vdo constraint checking so it also handles changes of active
VDO LVs, where only the added difference is now considered.

For this, the reported informational message about used memory
was also improved to list only the RAM-consuming blocks.
2023-01-16 12:37:40 +01:00
Zdenek Kabelac
1bed2cafe8 vdo: read live vdo size configuration
Introduce struct vdo_pool_size_config usable to calculate necessary
memory size for active VDO volume.

Function lv_vdo_pool_size_config() is able to read this
configuration from the runtime DM table line.
2023-01-16 12:37:40 +01:00
David Teigland
c98617c593 devices: factor common list functions
which were duplicated in various places
2022-11-07 11:38:46 -06:00
David Teigland
264827cb98 lvresize: add new options and defaults for fs handling
The new option "--fs String" for lvresize/lvreduce/lvextend
controls the handling of file systems before/after resizing
the LV.  --resizefs is the same as --fs resize.

The new option "--fsmode String" can be used to control
mounting and unmounting of the fs during resizing.

Possible --fs values:

checksize
  Only applies to reducing size; does nothing for extend.
  Check the fs size and reduce the LV if the fs is not using
  the affected space, i.e. the fs does not need to be shrunk.
  Fail the command without reducing the fs or LV if the fs is
  using the affected space.

resize
  Resize the fs using the fs-specific resize command.
  This may include mounting, unmounting, or running fsck.
  See --fsmode to control mounting behavior, and --nofsck to
  disable fsck.

resize_fsadm
  Use the old method of calling fsadm to handle the fs
  (deprecated.) Warning: this option does not prevent lvreduce
  from destroying file systems that are unmounted (or mounted
  if prompts are skipped.)

ignore
  Resize the LV without checking for or handling a file system.
  Warning: using ignore when reducing the LV size may destroy the
  file system.

Possible --fsmode values:

manage
  Mount or unmount the fs as needed to resize the fs,
  and attempt to restore the original mount state at the end.

nochange
  Do not mount or unmount the fs. If mounting or unmounting
  is required to resize the fs, then do not resize the fs or
  the LV and fail the command.

offline
  Unmount the fs if it is mounted, and resize the fs while it
  is unmounted. If mounting is required to resize the fs,
  then do not resize the fs or the LV and fail the command.

Notes on lvreduce:

When no --fs or --resizefs option is specified:
. lvextend default behavior is fs ignore.
. lvreduce default behavior is fs checksize
  (includes activating the LV.)

With the exception of --fs resize_fsadm|ignore, lvreduce requires
the recent libblkid fields FSLASTBLOCK and FSBLOCKSIZE.
FSLASTBLOCK*FSBLOCKSIZE is the last byte used by the fs on the LV,
which determines if reducing the fs is necessary.
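
Illustrative invocations (sizes and LV names are hypothetical):

$ lvextend --fs resize -L+10G vg/lv
$ lvreduce --fs checksize -L-2G vg/lv
$ lvextend --fs resize --fsmode offline -L+10G vg/lv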
2022-09-13 15:15:05 -05:00
David Teigland
18722dfdf4 lvresize: restructure code
Rewrite top level resize function to prepare for adding
new fs resizing.
2022-09-09 16:18:55 -05:00
Zdenek Kabelac
60ca2ce20f thin: rename internal function
Names now match the internal code layout.
Functions in thin_manip.c use thin_pool in their names.
Keep 'pool' only for functions working with both cache and thin pools.

No change of functionality.
2022-08-30 13:54:19 +02:00
Zdenek Kabelac
e2e31d9acf vdo: enhance lvcreate validation
When creating a VDO pool based on % values, lvm2 is now more clever
and avoids creating 'unsupportable' sizes of physical backend
volumes, as 16TiB is the maximum size supported by the VDO target
(which is also limited by the maximum number of supportable slabs
(8192) based on the slab size).

If the requested virtual size approaches the max supported size of 4PiB,
switch the header size to 0.
2022-07-11 01:18:24 +02:00
Zdenek Kabelac
ebad057579 vdo: check vdo memory constrains
Add a function to check for available memory for a particular VDO
configuration - to avoid unnecessary machine swapping for configs
that will not fit into memory (possibly in a locked section).

The formula tries to estimate the RAM size the machine can use, even
with swapping for the kernel target - while still leaving some amount
of usable RAM.

The estimation is based on the documented RAM usage of the VDO target.

If /proc/meminfo is theoretically unavailable, try to use the
'sysinfo()' function; however, this gives only free RAM without
knowledge of how much RAM could eventually be swapped.

TODO: move _get_memory_info() into generic lvm2 API function used
by other targets with non-trivial memory requirements.
2022-07-11 01:18:24 +02:00
Zdenek Kabelac
f83b3962c1 asan: fix some reports from libasan
When compiled and used with:

CFLAGS="-fsanitize=address -g -O0"
ASAN_OPTIONS=strict_string_checks=1:detect_stack_use_after_return=1:check_initialization_order=1:strict_init_order=1

we have a few reported issues - they were not normally spotted, since
we were still accessing our own memory - but out of buffer range.

TODO: there is still something to enhance with handling of #orphan vgids
2022-02-07 20:02:11 +01:00
Zdenek Kabelac
e6f735d411 vdo: read new sysfs path
New versions of the kvdo module expose statistics at a new location:
/sys/block/dm-XXX/vdo/statistics/...

Enhance lvm2 to access this location first.
Also, if the statistics info is missing, make it 'debug' level info,
so it does not fail the 'lvs' command.
2021-09-09 15:24:15 +02:00
Zdenek Kabelac
2c6a2b6e86 vdo: support vdo_pool_header_size
Add a profile-configurable setting for the vdo pool header size, which is
used as 'extra' empty space at the front and end of the vdo-pool device,
to avoid having a disk in the system that may hold the same data as the
real vdo LV.

For some conversion cases, however, we may need to allow using a '0' header size.

TODO: in this case we may eventually avoid adding the 'linear' mapping layer
in the future - but this requires further modification of the lvm code base.
2021-06-28 20:41:07 +02:00
David Teigland
4a746f7ffc lvremove: fix removing thin pool with writecache on data 2021-05-24 16:09:35 -05:00
Leo Yan
ef1c57e68f lib: locking: Add new type "idm"
We can consider the drive firmware a server handling the locking
requests from nodes; this is essentially a client-server model.
DLM uses the kernel as a central place to manage locks, so it also
complies with the client-server model for locking operations.  This is
why the IDM and DLM wrappers are similar to each other.

This patch largely works by generalizing the DLM code paths and then
providing degeneralized functions as wrappers for both IDM and DLM.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:01:05 -05:00
David Teigland
0a28e3c44b Add metadata-based autoactivation property for VG and LV
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.

 --setautoactivation y|n
 enables|disables autoactivation of a VG or LV.

Autoactivation is enabled by default, which is consistent with
past behavior.  The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.

If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.)  When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.

The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.

Previous versions of lvm do not recognize this property.  Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions.  If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.

The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs
in addition to the new property.

If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.

An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.

To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list.  The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
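
Illustrative commands based on the above (VG/LV names hypothetical):

$ vgchange --setautoactivation n vg
$ lvchange --setautoactivation y vg/lv
$ vgs -o name,autoactivation vg
$ lvs -o name,autoactivation vg/lv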
2021-04-07 15:32:49 -05:00
Zdenek Kabelac
a9b4acd511 dev_manager: add lv_raid_status
Just like with other segtypes, use this function to get the whole
raid status info available with a single ioctl call.

It also nicely simplifies reading the percentage info about the
in_sync portion of the raid volume.

TODO: drop use of calls other than the lv_raid_status call,
since all such calls could already use the status - so they just
add unnecessary duplication.
2021-03-18 18:34:57 +01:00
Zdenek Kabelac
a915cd5a46 lvconvert: vdo may convert already formated vdo
Users can use 'lvconvert -Zn --type vdo-pool' to convert an existing
vdo-formatted volume and skip lvm2's internal formatting.
This however requires that the user passes proper matching parameters.
For these, the user can use the --profile|--metadataprofile options,
whose support has also been enhanced.

TODO: add support to read values directly from the formatted volume.
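
A hedged example (profile and LV names hypothetical):

$ lvconvert -Zn --type vdo-pool --profile my_vdo_profile vg/preformatted_lv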
2021-02-17 11:21:35 +01:00
Zdenek Kabelac
abc9265a06 cache: reuse code for metadata min_max
Use update_pool_metadata_min_max() which is shared with
thin-pool metadata min-max updating.

Gives improved messages when converting volumes to metadata.
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pool used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However, the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size, as it has been using hard cropping
of the loaded metadata device to avoid the kernel printing warnings
when the size was bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.
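
As a sketch, the lvm.conf fragment restoring the old cropping
behaviour would be:

allocation {
    thin_pool_crop_metadata = 1
}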

Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if e.g. a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too-large extent size.

With cropping enabled (=1), lvm2 preserves the old limitation of
15.81GiB and should allow working in environments with
older lvm2 tools (i.e. older distributions).

Thin-pool metadata with a size bigger than 15.81G now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer
from various issues between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!

The patch also better handles resize of thin-pool metadata and prevents resize
beyond the usable size of 15.88GiB. Resize beyond 15.81GiB automatically
switches the pool to the no-crop version. Even with existing bigger thin-pool
metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes the change.

The patch gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.

The patch set also moves the code for updating min/max into pool_manip.c
for better sharing with the cache_pool code.
2021-02-01 12:06:13 +01:00
Heinz Mauelshagen
f08ef23856 lvdisplay: enhance LV status output for raid(0)
In case legs of a raid0 LV are removed, the lvdisplay command still
reports 'available', though raid0 does not provide any resilience
compared to the other raid levels.

Also, lvdisplay does not display '(partial)' in case of missing raid0
legs, as opposed to the lvs command.

Enhance lvdisplay to report "NOT available" for any RaidLV type in case
too many legs are inaccessible hence causing data loss.  I.e. any leg
for raid0, all for raid1, more than 1 for raid4/5, more than 2 for raid6
and in case of completely lost mirror groups for raid10.

Add test/shell/lvdisplay-raid.sh.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1872678
2021-01-27 16:56:22 +01:00
David Teigland
5fef89361d integrity: display total mismatches at raid LV level
Each integrity image in a raid LV reports its own number
of integrity mismatches, e.g.

lvs -o integritymismatches vg/lv_rimage_0
lvs -o integritymismatches vg/lv_rimage_1

In addition to this, allow the total number of integrity
mismatches from all images to be displayed for the raid LV.

lvs -o integritymismatches vg/lv

shows the number of mismatches from both lv_rimage_0 and
lv_rimage_1.
2020-11-11 15:10:15 -06:00
David Teigland
c32d7fed4f writecache: use two step detach
When detaching a writecache, use the cleaner setting
by default to write back data prior to suspending the
LV to detach the writecache.  This avoids potentially
blocking for a long period with the device suspended.

Detaching a writecache first sets the cleaner option, waits
for a short period of time (less than a second), and checks
if the writecache has quickly become clean.  If so, the
writecache is detached immediately.  This optimizes the case
where little writeback is needed.

If the writecache does not quickly become clean, then the
detach command leaves the writecache attached with the
cleaner option set.  This leaves the LV in the same state
as if the user had set the cleaner option directly with
lvchange --cachesettings cleaner=1 LV.

After leaving the LV with the cleaner option set, the
detach command will wait and watch the writeback progress,
and will finally detach the writecache when the writeback
is finished.  The detach command does not need to wait
during the writeback phase, and can be canceled, in which
case the LV will remain with the writecache attached and
the cleaner option set.  When the user runs the detach
command again it will complete the detach.

To detach a writecache directly, without using the cleaner
step (which has been the approach previously), add the
option --cachesettings cleaner=0 to the detach command.
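
An illustrative direct detach, assuming lvconvert --splitcache is the
detach command in use (LV name hypothetical):

$ lvconvert --splitcache --cachesettings cleaner=0 vg/lv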
2020-10-01 11:33:02 -05:00
Zdenek Kabelac
4de6f58085 thin: use lv_status_thin and lv_status_thin_pool
Introduce structures lv_status_thin_pool and
lv_status_thin (as a pair to lv_status_cache, lv_status_vdo).

Convert lv_thin_percent() -> lv_thin_status()
and  lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
lv_thin_pool_status().

This way a caller can see not only percentages, but also
other important status info about the thin-pool.

TODO:
This patch tries to not change too many other things,
but pool_below_threshold() now uses the new thin-pool info to return
failure if the thin-pool cannot actually be modified.
This should be handled separately in a better way.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
d588de77aa wipe: convert zero_value to uint8_t
We always write this value as a byte.
2020-09-15 22:52:25 +02:00
David Teigland
ed249a2c53 integrity: report mismatches
with lvs -o integritymismatches

reported for integrity images, which may report
different values
2020-09-01 17:13:21 -05:00
David Teigland
f2c1de783c integrity: always default to journal mode
lvconvert was defaulting to bitmap mode,
and lvcreate was defaulting to journal mode.
2020-09-01 17:12:28 -05:00
Zdenek Kabelac
ff4827ffb1 lv_manip: get_default_region_size return uint32_t 2020-08-28 21:43:02 +02:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid pollution of the metadata with 'garbage' content, or possibly
a leak of stale data in case a user wants to upload the metadata somewhere,
ensure the metadata device is fully zeroed upon allocation.

This behaviour may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0

TODO: add zeroing for extension of metadata volume.
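
For reference, the corresponding lvm.conf fragment (a sketch):

allocation {
    zero_metadata = 0
}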
2020-06-24 15:01:03 +02:00
David Teigland
2aed2a41f7 lvcreate: new cache or writecache lv with single command
To create a new cache or writecache LV with a single command:

lvcreate --type cache|writecache
    -n Name -L Size --cachedevice PVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, a new cachevol LV is created internally, using PVfast
  specified by the cachedevice option.
- Then, the cachevol is attached to the main LV, converting the
  main LV to type cache|writecache.

Include --cachesize Size to specify the size of cache|writecache
to create from the specified --cachedevice PVs, otherwise the
entire cachedevice PV is used.  The --cachedevice option can be
repeated to create the cache from multiple devices, or the
cachedevice option can contain a tag name specifying a set of PVs
to allocate the cache from.

To create a new cache or writecache LV with a single command
using an existing cachevol LV:

lvcreate --type cache|writecache
    -n Name -L Size --cachevol LVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, the cachevol LVfast is attached to the main LV, converting
  the main LV to type cache|writecache.

In cases where more advanced types (for the main LV or cachevol LV)
are needed, they should be created independently and then combined
with lvconvert.

Example
-------

user creates a new VG with one slow device and one fast device:

$ vgcreate vg /dev/slow1 /dev/fast1

user creates a new 8G main LV on /dev/slow1 that uses all of
/dev/fast1 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1
    -n main -L 8G vg /dev/slow1

Example
-------

user creates a new VG with two slow devs and two fast devs:

$ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

user creates a new 8G main LV on /dev/slow1 and /dev/slow2
that uses all of /dev/fast1 and /dev/fast2 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2
    -n main -L 8G vg /dev/slow1 /dev/slow2

Example
-------

A user has several slow devices and several fast devices in their VG,
the slow devs have tag @slow, the fast devs have tag @fast.

user creates a new 8G main LV on the slow devs with a
2G writecache on the fast devs:

$ lvcreate --type writecache -n main -L 8G
    --cachedevice @fast --cachesize 2G vg @slow
2020-06-16 13:46:51 -05:00
David Teigland
1ee42f1391 writecache: cachesettings in lvchange and lvs
lvchange --cachesettings
lvs -o+cache_settings
2020-06-10 12:14:00 -05:00
David Teigland
240062a183 writecache: remove from an active lv 2020-06-10 12:13:31 -05:00
David Teigland
d945b53ff7 remove vg_read_error
It once converted results to error numbers, but is now just a null check.
2020-04-24 11:14:29 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
David Teigland
b6b4ad8e28 move pv_list code into lib 2020-04-13 10:04:14 -05:00
Zdenek Kabelac
2266a1863f lv_manip: add lv_uniq_rename_update
Add a function to rename an LV to either the passed name or, if
the name is already in use, a newly generated lvol% name.
2019-10-21 12:14:15 +02:00