mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
Commit Graph

477 Commits

Author SHA1 Message Date
Zdenek Kabelac
524df486b3 cov: annotate already validated lv name 2024-06-03 15:30:05 +02:00
Zdenek Kabelac
915dd18361 typo: fix typos
Typo and indent.
2024-06-03 15:30:05 +02:00
Zdenek Kabelac
1f92fc2af7 lvcreate: --yes option for thin-pool vdo creation
Correct a typo and accept the proper --yes option instead
of the misplaced --force option.
2024-05-04 00:56:32 +02:00
David Teigland
f4911177da lvcreate: allow raidintegrity option with implicit raid type
Allow "lvcreate -m1 --raidintegrity y" when raid1 is used, but
not explicit on the command line.
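A minimal sketch of the now-accepted form (size and names are illustrative):

$ lvcreate -m1 --raidintegrity y -L 1G -n lv vg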
2024-04-17 13:40:33 -05:00
Zdenek Kabelac
27bdd038a8 thin: validate usable volume for external origin
When creating an external origin via 'lvcreate --type thin',
add validation that the LV is usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not need to do unnecessary operations.
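For illustration, one hedged example of the path being validated, assuming the documented external-origin form of lvcreate (pool, origin and snapshot names are hypothetical):

$ lvcreate --type thin --thinpool vg/pool -n thin_snap vg/external_origin_lv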
2024-03-16 10:40:54 +01:00
Zdenek Kabelac
db0de73d6e vdo: support creation of compressed thin-pools
Add code to handle creation of a thin-pool with a VDO data backend,
which can be seen as a compressed, deduplicated thin-pool.

To avoid the need to change too many internal APIs, pass the conversion
parameters for creating the thin-pool data volume via cmd_context.
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
7544b9fc10 vdo: refactor vdo_params passing
Introduce vdo_convert_params and use vdo_params from this structure
also with lvcreate_params.

Later we will use this for conversion of a thin-pool data volume to VDO.
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
4ccedceaa8 thin_pool: introduce --pooldatavdo
Introducing the new argument --pooldatavdo y|n
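A hedged usage sketch of the new argument (size and names are illustrative):

$ lvcreate --type thin-pool --pooldatavdo y -L 100G -n pool vg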
2024-01-10 14:02:22 +01:00
Zdenek Kabelac
a882878eba lvcreate: support vg profile for error_on_full 2024-01-10 14:02:22 +01:00
Heinz Mauelshagen
f4edd87ffc raid: lvcreate and lvchange fail if --min_recovery_rate is defined
Fix typos in previous commit 3589e515d.

Both commands default [raid_](min|max)recoveryrate to 0 but ensure
min_recovery_rate is not larger than max_recovery_rate.  This results
in command failure without requiring the user to also define
max_recovery_rate >= min_recovery_rate.

Fix both commands by defining max_recovery_rate = min_recovery_rate
in case "lvcreate/lvchange --minrecoveryrate Size ..." requests a
larger value than the current max_recovery_rate without also giving option
"--maxrecoveryrate Size ..." with a size greater than or equal to min.
2023-11-21 14:19:30 +01:00
Heinz Mauelshagen
3589e515dc raid: lvcreate and lvchange fail if --min_recovery_rate is defined
Both commands default [raid_](min|max)recoveryrate to 0 but ensure
min_recovery_rate is not larger than max_recovery_rate.  This results
in command failure without requiring the user to also define
max_recovery_rate >= min_recovery_rate.

Fix both commands by defining max_recovery_rate = min_recovery_rate
in case "lvcreate/lvchange --minrecoveryrate Size ..." requests a
larger value than the current max_recovery_rate without also giving option
"--maxrecoveryrate Size ..." with a size greater than or equal to min.
2023-11-20 17:14:01 +01:00
Zdenek Kabelac
2451bc568f vdo: fix and enhance vdo constraint checking
Enhance checking of VDO constraints so it also handles changes of active VDO LVs,
where only the added difference is now considered.

For this, the reported informational message about used memory
was also improved to list only RAM-consuming blocks.
2023-01-16 12:37:40 +01:00
Zdenek Kabelac
b9f35e07db lvcreate: fix error path return values
Return a failing error code on the error path, as 'return 0' in this
case was returning success.
2022-11-08 12:39:25 +01:00
Zdenek Kabelac
e757965222 gcc: eliminate warnings
Gcc has started to show a new warning - although it is unlikely to be hit,
initialize the variables to 0.
2022-09-07 14:58:01 +02:00
Zdenek Kabelac
e2e31d9acf vdo: enhance lvcreate validation
When creating a VDO pool based on % values, lvm2 is now more clever
and avoids creating 'unsupportable' sizes of physical backend
volumes, as 16TiB is the maximum size supported by the VDO target
(and is also limited by the maximum number of supportable slabs (8192)
based on slab size).

If the requested virtual size approaches the maximum supported size of 4PiB,
switch the header size to 0.
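A hedged example of a %-based creation that this validation now checks (virtual size and names are illustrative):

$ lvcreate --vdo -l 100%FREE -V 200T -n vdo_lv vg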
2022-07-11 01:18:24 +02:00
Zdenek Kabelac
ebad057579 vdo: check vdo memory constraints
Add a function to check for available memory for a particular VDO
configuration - to avoid unnecessary machine swapping for configs
that will not fit into memory (possibly in a locked section).

The formula tries to estimate the RAM size the machine can use for the
kernel target, also accounting for swapping - but still leaving some
amount of usable RAM.

The estimation is based on the documented RAM usage of the VDO target.

If /proc/meminfo is unavailable, try to use the 'sysinfo()' function;
however, this gives only free RAM without knowledge of how much RAM
could eventually be swapped.

TODO: move _get_memory_info() into a generic lvm2 API function used
by other targets with non-trivial memory requirements.
2022-07-11 01:18:24 +02:00
Zdenek Kabelac
5e060b8fa7 vdo: support --vdosettings
Allow using --vdosettings with lvcreate, lvconvert and lvchange.
Support settings currently configurable only via lvm.conf.
With lvchange we require an inactive LV for changes to be applied.

The setting block_map_era_length has the supported alias block_map_period.
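A hedged usage sketch; the settings string format and the value shown are illustrative:

$ lvcreate --vdo -L 100G -V 1T --vdosettings 'block_map_era_length=8192' -n vdo_lv vg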
2022-05-03 19:09:52 +02:00
Zdenek Kabelac
7f1f7ad694 lvcreate: code move 2022-01-26 15:09:58 +01:00
Zdenek Kabelac
d8dbabb28e lvcreate: cachesettings works also with writecache 2022-01-26 15:09:58 +01:00
Zdenek Kabelac
a06ccdf44b lvcreate: fix crash for unspecified LV name for writecache
Fix an application crash when creating a writecached LV with an 'automatic' name.
2022-01-26 15:09:58 +01:00
David Teigland
c28541eccd lvcreate: include recent options
The permitted option list in lvcreate has not kept
up with command-lines.in.
2021-12-13 08:59:31 -06:00
Zdenek Kabelac
9668427fe8 cleanup: use first parameter uint
Easier with struct zeroing and a matching assignment of type uint.
2021-09-27 18:56:14 +02:00
Zdenek Kabelac
2c6a2b6e86 vdo: support vdo_pool_header_size
Add a profilable, configurable setting for the vdo pool header size, which is
used as 'extra' empty space at the front and end of the vdo-pool device
to avoid having a disk in the system that may present the same data as the
real vdo LV.

For some conversion cases, however, we may need to allow using a '0' header size.

TODO: in this case we may eventually avoid adding the 'linear' mapping layer
in the future - but this requires further modification of the lvm code base.
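A minimal profile/lvm.conf sketch for the setting, assuming it lives in the allocation section as allocation/vdo_pool_header_size:

allocation {
	# 0 disables the extra header space, e.g. for conversion cases
	vdo_pool_header_size = 0
}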
2021-06-28 20:41:07 +02:00
David Teigland
0a28e3c44b Add metadata-based autoactivation property for VG and LV
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.

 --setautoactivation y|n
 enables|disables autoactivation of a VG or LV.

Autoactivation is enabled by default, which is consistent with
past behavior.  The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.

If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.)  When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.

The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.

Previous versions of lvm do not recognize this property.  Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions.  If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.

The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs
in addition to the new property.

If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.

An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.

To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list.  The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
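For reference, a hedged sketch of setting and reporting the property (VG/LV names are illustrative):

$ vgchange --setautoactivation n vg
$ lvchange --setautoactivation n vg/lv
$ vgs -o name,autoactivation vg
$ lvs -o name,autoactivation vg/lv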
2021-04-07 15:32:49 -05:00
Zdenek Kabelac
abc9265a06 cache: reuse code for metadata min_max
Use update_pool_metadata_min_max() which is shared with
thin-pool metadata min-max updating.

Gives improved messages when converting volumes to metadata.
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial thin-pool support used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However, the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size, as it had been using hard cropping
of the loaded metadata device to avoid kernel warnings
when the size was bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.

Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if e.g. a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too large extent size.

With cropping enabled (=1), lvm2 preserves the old limitation of
15.81GiB and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).

Thin-pool metadata with a size bigger than 15.81GiB now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer
from various issues between thin_repair results and the allocated
metadata LV (the thin_repair limit is 15.88GiB).
Users should use cropping only when really needed!

The patch also better handles resize of thin-pool metadata and prevents resizing
beyond the usable size of 15.88GiB. Resizing beyond 15.81GiB automatically
switches the pool to the no-crop version. Even with existing bigger thin-pool
metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes the change.

The patch gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.

Patch set also moves the code for updating min/max into pool_manip.c
for better sharing with cache_pool code.
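A minimal lvm.conf sketch for the new setting described above:

allocation {
	# default 0 = no cropping of metadata beyond 15.81GiB
	thin_pool_crop_metadata = 0
}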
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
a375657092 cleanup: use force_t enums instead of ints 2020-09-01 17:57:50 +02:00
David Teigland
2aed2a41f7 lvcreate: new cache or writecache lv with single command
To create a new cache or writecache LV with a single command:

lvcreate --type cache|writecache
    -n Name -L Size --cachedevice PVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, a new cachevol LV is created internally, using PVfast
  specified by the cachedevice option.
- Then, the cachevol is attached to the main LV, converting the
  main LV to type cache|writecache.

Include --cachesize Size to specify the size of cache|writecache
to create from the specified --cachedevice PVs, otherwise the
entire cachedevice PV is used.  The --cachedevice option can be
repeated to create the cache from multiple devices, or the
cachedevice option can contain a tag name specifying a set of PVs
to allocate the cache from.

To create a new cache or writecache LV with a single command
using an existing cachevol LV:

lvcreate --type cache|writecache
    -n Name -L Size --cachevol LVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, the cachevol LVfast is attached to the main LV, converting
  the main LV to type cache|writecache.

In cases where more advanced types (for the main LV or cachevol LV)
are needed, they should be created independently and then combined
with lvconvert.

Example
-------

user creates a new VG with one slow device and one fast device:

$ vgcreate vg /dev/slow1 /dev/fast1

user creates a new 8G main LV on /dev/slow1 that uses all of
/dev/fast1 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1
    -n main -L 8G vg /dev/slow1

Example
-------

user creates a new VG with two slow devs and two fast devs:

$ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

user creates a new 8G main LV on /dev/slow1 and /dev/slow2
that uses all of /dev/fast1 and /dev/fast2 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2
    -n main -L 8G vg /dev/slow1 /dev/slow2

Example
-------

A user has several slow devices and several fast devices in their VG,
the slow devs have tag @slow, the fast devs have tag @fast.

user creates a new 8G main LV on the slow devs with a
2G writecache on the fast devs:

$ lvcreate --type writecache -n main -L 8G
    --cachedevice @fast --cachesize 2G vg @slow
2020-06-16 13:46:51 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
Zdenek Kabelac
5ccf3e6f30 vdo: avoid running initialization of cache pool vars
Since VDO is also a pool, the old if() case did not account for this
and unnecessarily executed initialization of cache pool variables.
This was usually harmless when using 'smaller' sizes of VDO pools,
but for a big VDO pool size we were reporting senseless messages
about big cache chunk sizes.
2020-01-13 17:42:31 +01:00
Heinz Mauelshagen
29db9c6325 lvcreate: ensure striped raid region size is at least stripe size
The kernel MD runtime requires the region size to be larger than the stripe size
on striped raid layouts, thus the dm-raid target's constructor rejects
such a request.

This causes e.g. an 'lvcreate --type raid10 -i3 -I4096 -R2048 -n lv vg' to fail.

Avoid failing late in the kernel by enforcing the region size to be
larger than or equal to the stripe size.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1698225
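For illustration, an equivalent request with the region size raised to at least the stripe size (the added -L size is illustrative):

$ lvcreate --type raid10 -i3 -I4096 -R4096 -L 3G -n lv vg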
2019-11-26 22:31:58 +01:00
Zdenek Kabelac
87864f09f6 vdo: complete matching with thin syntax
Just like the supported thin-pool syntax:

lvcreate --thinpool new_tpoolname -L Size vg

add the same support logic for vdo-pool:

lvcreate --vdopool new_vpoolname -L Size vg

Also move the description of the syntax below thin-pool, so it's
correctly ordered in the generated man page.
2019-01-28 22:18:17 +01:00
Zdenek Kabelac
022ebb0cfe lv_manip: better work with PERCENT_VG modifier
When using 'lvcreate -l100%VG' and there is a big disproportion between
the real available space and the requested setting - automatically fall back
to 100%FREE.

The difference can be seen when the VG is big and most space was already
allocated, so the requested 100%VG can end up (and by the spec for the % modifier
it's correct) as an LV with a size of 1%VG.  Usually this is not a big
problem - but in some cases - like cache-pool allocation, this
can result in a big difference for chunksize selection.

With this patch it more closely matches common-sense logic without
the need of reiterating too big changes in the lvm2 core ATM.

TODO: in the future there should be an allocator solving all allocations
in a single call.
2019-01-21 12:53:15 +01:00
Zdenek Kabelac
c58733ca15 lvcreate: vdo support
Supports the basic:  'lvcreate --vdo -LXXXG -VYYYG vg/vdoname -n lvname'
Allows creating a basic VDO pool volume and a virtual VDO volume.
2018-07-09 15:29:12 +02:00
David Teigland
18259d5559 Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-07 16:17:04 +01:00
David Teigland
b6f0f20da2 lvmlockd: primarily use vg_is_shared
to check if a vg uses an lvmlockd lock_type,
instead of the equivalent but longer is_lockd_type.
2018-06-01 13:15:22 -05:00
David Teigland
8d9d32b315 lvmlockd: enable lvcreate -H -L LV
Allow this command in a shared VG which had previously been
disallowed.
2018-05-31 14:20:11 -05:00
David Teigland
c516321325 lvmlockd: enable lvcreate of new LV plus existing cache pool
In this command, lvcreate creates a new LV and then combines
it with an existing cache pool, producing a cache LV.  This
command was previously not allowed in a shared VG.
2018-05-30 15:24:24 -05:00
David Teigland
403c87c1aa lvmlockd: enable creation of cache pool with lvcreate
Previously, cache pools needed to be created with lvconvert.
2018-05-30 09:25:45 -05:00
David Teigland
948f2d9979 lvmlockd: enable lvcreate of thin pool and thin lv in one command
Previously, thin pools and thin LVs needed to be
created with separate commands; now the combined command
is permitted.
2018-05-30 09:25:45 -05:00
Zdenek Kabelac
23de09aeb8 lvcreate: fix activation of cached LV
Since the LV for caching can already be a stacked LV, proper activation
needs to use the lock-holding LV.
2018-03-06 15:39:27 +01:00
Zdenek Kabelac
90ee7783b4 pool: drop create spare on error path
When thin/cache pool creation fails and the command created _pmspare,
such a volume is now removed on the error path.
2017-10-30 11:53:39 +01:00
Heinz Mauelshagen
adb80816fb lvcreate: error message with dot. 2017-10-26 17:25:22 +02:00
Zdenek Kabelac
df3ff32fc0 lvcreate: skip checking for name restriction for caching
lvcreate supports a 'conversion' when caching an LV.
This normally worked fine, however when the passed LV was a
thin-pool's data LV with the suffix _tdata, we failed too early.

The easiest fix looks to be dropping validation of the name when the
caching type is selected - such a name check will happen later
once the VG is opened again and properly detects whether the LV
with the protected name already exists and can be converted,
or will be rejected as an ambiguous operation requiring the user
to specify --type cache | --type cache-pool.
2017-10-23 12:01:15 +02:00
David Teigland
8e8755319c lvcreate: use cmd defs to deny unsupported lockd cases
In a shared VG, lvconvert must be used to create thin pools
and cache pools, not the lvcreate variants of those commands.
Deny these cases early in lvcreate using the new command defs.
Denying these cases deeper in the code was missing some
cleanup of the partially completed command.
2017-09-14 12:28:48 -05:00
Zdenek Kabelac
0bf836aa14 tidy: prefer not using else after return
clang-tidy: avoid using 'else' after return - gives more readable code
and also saves an indentation level.
2017-07-20 11:18:29 +02:00
Zdenek Kabelac
b3ef051e06 cache: lvcreate --cachepool checks for cache pool
The code path missed validation of the lvcreate --cachepool argument.
If a non-cache-pool LV was passed in, the code still continued
further work and failed later with an internal error.  Validate this
condition at the right place now.
2017-06-09 10:59:37 +02:00
Alasdair G Kergon
2583732165 lvcreate: Fix last commit for virtual sizes.
Don't stop when extents is 0 if a virtual size parameter was supplied
instead.
2017-05-12 13:16:10 +01:00
Alasdair G Kergon
cf73f6cf61 lvcreate: Fix mirror percentage size calculations.
Trap cases where the percentage calculation currently leads to an empty
LV and the message:

  Internal error: Unable to create new logical volume with no extents

Additionally convert the calculated number of extents from physical to
logical when creating a mirror using a percentage that is based on
Physical Extents.  Otherwise a command like 'lvcreate -m3 -l80%FREE'
can never leave any free space.

This brings the behaviour closer to that of lvresize.
(A further patch is needed to cover all the raid types.)
2017-05-12 02:31:35 +01:00
Zdenek Kabelac
4d2b1a0660 cache: enable usage of --cachemetadataformat
lvcreate and lvconvert may select the cache metadata format when caching an LV.
By default lvm2 picks the best available format.
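A hedged example of selecting the format explicitly while caching a new LV, assuming an existing cache pool vg/cpool (names and size are illustrative):

$ lvcreate --type cache --cachepool vg/cpool --cachemetadataformat 2 -L 10G -n lv vg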
2017-03-10 19:33:01 +01:00