mirror of git://sourceware.org/git/lvm2.git synced 2025-08-26 17:49:29 +03:00

607 Commits

Author SHA1 Message Date
6fa5510b65 revert: lv_manip: return failure for LV resize without actual LV+FS resize
We have different behavior for lvreduce (contains the fs size
pre-check before calling the external script) and lvextend
(does not have the pre-check).

Also, there's different behavior with "lvreduce --fs resize"
(contains the pre-check) and "lvreduce --fs resize_fsadm"
(does not have the pre-check).

Obviously, only "lvreduce --fs resize" has the pre-check.
To make it consistent all across, we should remove that one
and everybody will be happy.
2025-07-17 12:07:06 +02:00
9c599292aa lv_manip: return failure for LV resize without actual LV+FS resize
Also return failure if there is no change in file system size while
resizing an LV including the file system, like with 'lvresize/lvreduce --fs resize'.
2025-07-16 15:18:05 +02:00
e3871db279 persistent reservations
Enable lvm to use persistent reservations on a VG, which
are applied to each PV in the VG.

. lvmpersist is a low level script, which uses commands
  sg_persist, mpathpersist, and nvme to do PR operations
  on devices. This script is used by higher level lvm
  commands, and would not often be run by users.

. vgchange --setpersist is a VG metadata configuration command
  that specifies how PR should be started and enforced for a VG
  relative to other lvm commands.

. vgchange --persist is a command to change PR state of PVs in
  the VG, e.g. start PR to register and reserve.

The lvmpersist man page contains a complete description.
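
A possible invocation sketch (the VG name and the argument values are
assumptions, not taken from this commit; see the lvmpersist man page
for the authoritative syntax):

  # configure how PR is started/enforced for the VG ('y' is an assumed value)
  vgchange --setpersist y vg0
  # change PR state of the VG's PVs, e.g. start PR to register and reserve
  # ('start' is an assumed value)
  vgchange --persist start vg0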
2025-06-27 10:26:00 +02:00
951fd6358b metadata: replace pv status WRONG_VG with pv bit field
Avoid any special meaning associated with the status field.
2025-03-06 10:52:50 -06:00
227d3ca507 lvmcache: add WRONG_VG PV status flag
_vg_read() calls lvmcache_update_vg_from_read() which detects
that the VG metadata is incorrectly claiming the PV.  Flag this
condition in the PV status as WRONG_VG.  Later, vg_read() can
simply check the WRONG_VG flag rather than repeating the same
PV/VG checks that were already done in lvmcache_update_vg_from_read.
2025-03-05 14:28:42 -06:00
924221765e cleanup: match function prototype with definition
Match variable name in function definition with
its prototype. Pick the name which better fits
the usage.

No functional change.
2025-02-17 15:51:03 +01:00
a6b2ce6299 cleanup: project headers in front
Include project headers before system header files.
2025-02-17 15:51:03 +01:00
a1017024f1 raidintegrity: support removal of partial images
vgreduce --removemissing --force replaces a partial image
with an error target.  When that image includes an integrity
layer, that layer needs to be removed first.
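
For example, recovering such a VG might look like this (a sketch; the
VG name is hypothetical):

  # replace partial raid images with error targets, removing any
  # integrity layer on them first
  vgreduce --removemissing --force vg0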
2025-02-10 12:10:08 -06:00
7d7b5db230 lvmlockd: thin locking improvements
There were a lot of messy and inefficient locking calls sent from
a command to lvmlockd when working with thin volumes, e.g.
- requesting a lock numerous times that was already held
- releasing a lock numerous times that was already unlocked
- repeating lock/unlock/lock/unlock rather than holding the
  lock until it was no longer needed

Mistakes in the locking could easily hide among all the noise.

The mess was largely because thin-related commands involve a
lot of internal LV manipulations, and lvmlockd calls were done
at the lower level of LV activation/deactivation.  This change
adds locking code that is more specific to the thin command
being run, so it can be more intelligent in acquiring and
releasing locks where needed.
2025-01-17 14:06:21 -06:00
db0f1b799f metadata: use radix_tree for find_lv_in_vg
Since there is a group of commands that need to access 'lv_list'
while still needing to search for an LV by its name, make the whole
struct lv_list a member of the logical_volume structure.
This makes it easy to also return the 'lv_list' that links this LV
within the VG.
Also the patch should not use more memory, since we were allocating
an lv_list for each LV anyway when linking the LV to the VG.

Since find_lv_by_name() is now using radix_tree(),
use the same 'search for /' handling of the LV name for both
find_lv() & find_lv_in_vg().

TODO: Possibly refactor code and use only dm_list
instead of lv_list and dereference LV with container_of()
(thus saving a pointer within struct logical_volume) - but
we use 'lv_list' currently in many places...
2024-10-31 17:55:31 +01:00
39b7d1ba8f cleanup: typos in comments
Collection of typos in code comments.
Should have no runtime effect.
2024-08-30 16:51:15 +02:00
78d14a805c integrity: add --integritysettings for tuning
The option can be used in multiple ways (like --cachesettings):

--integritysettings key=val
--integritysettings 'key1=val1 key2=val2'
--integritysettings key1=val1 --integritysettings key2=val2

Use with lvcreate or lvconvert when integrity is first enabled
to configure:
journal_sectors
journal_watermark
commit_time
bitmap_flush_interval
allow_discards

Use with lvchange to configure (only while inactive):
journal_watermark
commit_time
bitmap_flush_interval
allow_discards

lvchange --integritysettings "" clears any previously configured
settings, so dm-integrity will use its own defaults.

lvs -a -o integritysettings displays configured settings.
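
A sketch combining the forms above (LV/VG names and setting values are
hypothetical):

  # tune the journal when first enabling integrity
  lvcreate --type raid1 -m 1 --raidintegrity y \
    --integritysettings 'journal_watermark=60 commit_time=2000' \
    -L1G -n lv1 vg0
  # adjust a setting later, while the LV is inactive
  lvchange --integritysettings bitmap_flush_interval=5000 vg0/lv1
  # clear settings so dm-integrity uses its own defaults
  lvchange --integritysettings "" vg0/lv1
  # report configured settings
  lvs -a -o integritysettings vg0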
2024-08-07 17:40:34 -05:00
943e979079 lvmlockd: parse lockopt string into flags 2024-06-27 13:29:03 -05:00
40cf4ce1e1 cleanup: struct reorder
Better alignments.
2024-05-27 15:35:58 +02:00
47f8bda051 lvremove: remove device_id for PVs on LVs
When PVs are created on LVs, remove the devices file entries
for the PVs when the LVs are removed.  In general, the devices
file entries should be removed with lvmdevices --deldev when
the LVs are removed (lvremove is the equivalent of detaching
a device from the system when layering PVs on LVs.)
This change is effectively an automatic lvmdevices --deldev
command that is built into lvremove when the LV has a PV on it.
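
In effect, for a PV layered on an LV, this now behaves like the manual
sequence below (device and LV names are hypothetical):

  # previously needed by hand after lvremove:
  #   lvmdevices --deldev /dev/vg0/lv_as_pv
  # now removing the LV also drops the PV's devices file entry
  lvremove vg0/lv_as_pv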
2024-05-22 15:32:17 -05:00
effafa8fc8 const: _ops text handler
Making sure these structures end up in the '.data.rel.ro' section
(instead of the plain '.data' section).
2024-05-04 01:01:42 +02:00
fc477192e5 cleanup: drop unused code
Code was related to the long-obsolete vgconvert
for lvm1 to lvm2 conversion.
This code was likely missed when that support was removed.
2024-03-28 18:18:37 +01:00
27bdd038a8 thin: validate usable volume for external origin
When creating an external origin via 'lvcreate --type thin',
add validation that the LV is usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not need to do unnecessary operations.
2024-03-16 10:40:54 +01:00
55937f9c8e vdo: read VDO stats via dm message
As the sysfs interface is seen as somewhat obsolete, switch the code
to prefer using the DM 'stats' message to get the info we currently
need for the VDO pool target.
2024-02-19 14:20:39 +01:00
9d9e43b87f vdo: correct vdo header size
The previous patch that introduced support for thin-pool with vdo
did not correctly handle the header size - as this part is not fully
usable yet.  We are going to try to use 0, but the current state of the
code is not yet compliant with this logic, so keep vdo_header_size during
conversion and also correctly pass through virtual_extents to vdo formatting.
2024-01-17 17:30:10 +01:00
6ec2f1f44b vdo: refactor conversion to vdo lv
Introduce struct vdo_convert_params {} to pass-in all the parameters
needed for the conversion of an LV to a vdopool + vdo LV.

Function convert_vdo_lv() is also able to create a new LV and swap
segments, so the passed-in LV can later be used for further
conversion; this refactoring makes it ready for more enhanced
usage.
2024-01-10 14:02:22 +01:00
7544b9fc10 vdo: refactor vdo_params passing
Introduce vdo_convert_params and use vdo_params from this structure
also with lvcreate_params.

Later we will use this for conversion of the thin-pool data volume to VDO.
2024-01-10 14:02:22 +01:00
4ccedceaa8 thin_pool: introduce --pooldatavdo
Introducing new arg --pooldatavdo y|n
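
A possible use (a sketch; names and sizes are hypothetical):

  # create a thin-pool whose data volume is backed by VDO
  lvcreate --type thin-pool --pooldatavdo y -L100G -n pool0 vg0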
2024-01-10 14:02:22 +01:00
c496f80379 pool: code refactoring
Move pool related manipulation code to pool_manip.c.
2024-01-10 14:02:22 +01:00
a176184b7d thin_pool: code refactoring
Move common code into thin_pool_set_params()
(just like we use cache_set_params).
2024-01-10 14:02:22 +01:00
2135bbe558 vdo: flip return value to int
Change API to return just 0/1.
2024-01-10 14:02:22 +01:00
2f3d8659b1 commands: add lv_is_writable 2023-08-14 17:02:11 +02:00
39457234db lvconvert: support conversion to thin volume
Update the pool conversion function to also handle conversion of a
thick LV to a thin LV, by moving the thick LV into the thin pool data LV
and creating a fully provisioned thin LV on top of this volume.

Rework the existing conversion to use insert_layer_for_lv so
the uuid is now kept with the thin-pool - this should however not
really matter as we are doing a full deactivation & activation cycle.

With conversion to a thin LV the user can use the same set of arguments
to set the chunk-size.

TODO: add some smart code to decide the best values for chunk sizes.
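
A hedged sketch of such a conversion (the exact option set is an
assumption; names are hypothetical):

  # move a thick LV into a thin pool data LV and expose it as a thin LV
  lvconvert --type thin vg0/thicklv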
2023-07-10 17:13:32 +02:00
558890ad0e pool: support passing data_lv for recalculation
Support data_lv being passed as a parameter when it's not yet
attached to the pool.
2023-07-10 17:13:32 +02:00
d43a79eec9 segtype: add missing macros for error and zero segment
Add macros for checking error and zero segment as we do
with other segtypes.
2023-07-10 17:13:32 +02:00
e84b00964f pool: avoid using artificial name internally 2023-06-29 13:55:27 +02:00
631762d6db metadata-exported.h: correcting definition
Fixing minor mismatch between definition and declaration of
update_thin_pool_params().
2023-02-10 17:50:27 +01:00
2451bc568f vdo: fix and enhance vdo constraint checking
Enhance checking of vdo constraints so it also handles changes of
active VDO LVs, where now only the added difference is considered.

For this, the reported informational message about used memory
was also improved to list only the RAM-consuming blocks.
2023-01-16 12:37:40 +01:00
1bed2cafe8 vdo: read live vdo size configuration
Introduce struct vdo_pool_size_config, usable to calculate the
necessary memory size for an active VDO volume.

Function lv_vdo_pool_size_config() is able to read this
configuration out of the runtime DM table line.
2023-01-16 12:37:40 +01:00
c98617c593 devices: factor common list functions
which were duplicated in various places
2022-11-07 11:38:46 -06:00
264827cb98 lvresize: add new options and defaults for fs handling
The new option "--fs String" for lvresize/lvreduce/lvextend
controls the handling of file systems before/after resizing
the LV.  --resizefs is the same as --fs resize.

The new option "--fsmode String" can be used to control
mounting and unmounting of the fs during resizing.

Possible --fs values:

checksize
  Only applies to reducing size; does nothing for extend.
  Check the fs size and reduce the LV if the fs is not using
  the affected space, i.e. the fs does not need to be shrunk.
  Fail the command without reducing the fs or LV if the fs is
  using the affected space.

resize
  Resize the fs using the fs-specific resize command.
  This may include mounting, unmounting, or running fsck.
  See --fsmode to control mounting behavior, and --nofsck to
  disable fsck.

resize_fsadm
  Use the old method of calling fsadm to handle the fs
  (deprecated.) Warning: this option does not prevent lvreduce
  from destroying file systems that are unmounted (or mounted
  if prompts are skipped.)

ignore
  Resize the LV without checking for or handling a file system.
  Warning: using ignore when reducing the LV size may destroy the
  file system.

Possible --fsmode values:

manage
  Mount or unmount the fs as needed to resize the fs,
  and attempt to restore the original mount state at the end.

nochange
  Do not mount or unmount the fs. If mounting or unmounting
  is required to resize the fs, then do not resize the fs or
  the LV and fail the command.

offline
  Unmount the fs if it is mounted, and resize the fs while it
  is unmounted. If mounting is required to resize the fs,
  then do not resize the fs or the LV and fail the command.

Notes on lvreduce:

When no --fs or --resizefs option is specified:
. lvextend default behavior is fs ignore.
. lvreduce default behavior is fs checksize
  (includes activating the LV.)

With the exception of --fs resize_fsadm|ignore, lvreduce requires
the recent libblkid fields FSLASTBLOCK and FSBLOCKSIZE.
FSLASTBLOCK*FSBLOCKSIZE is the last byte used by the fs on the LV,
which determines if reducing the fs is necessary.
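
Putting the options together (a sketch; names and sizes are
hypothetical):

  # reduce the LV only if the fs provably does not use the affected
  # space (the lvreduce default)
  lvreduce --fs checksize -L-2G vg0/lv1
  # resize fs and LV, mounting/unmounting as needed
  lvreduce --fs resize --fsmode manage -L-2G vg0/lv1
  # extend fs and LV without touching the mount state
  lvextend --fs resize --fsmode nochange -L+2G vg0/lv1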
2022-09-13 15:15:05 -05:00
18722dfdf4 lvresize: restructure code
Rewrite top level resize function to prepare for adding
new fs resizing.
2022-09-09 16:18:55 -05:00
60ca2ce20f thin: rename internal function
Names matching internal code layout.
Functions in thin_manip.c use thin_pool in their names.
Keep 'pool' only for functions working for both cache and thin pools.

No change of functionality.
2022-08-30 13:54:19 +02:00
e2e31d9acf vdo: enhance lvcreate validation
When creating a VDO pool based on % values, lvm2 is now more clever
and avoids creating 'unsupportable' sizes of physical backend
volumes, as 16TiB is the maximum size supported by the VDO target
(and is also limited by the maximum number of supportable slabs
(8192) based on slab size).

If the requested virtual size is approaching the max supported size
of 4PiB, switch the header size to 0.
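
For example (a sketch; names are hypothetical):

  # a physical backend size derived from a % value is now capped at
  # what the VDO target supports
  lvcreate --type vdo -l 100%FREE -n vdolv vg0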
2022-07-11 01:18:24 +02:00
ebad057579 vdo: check vdo memory constraints
Add a function to check for available memory for a particular VDO
configuration - to avoid unnecessary machine swapping for configs
that will not fit into memory (possibly in a locked section).

The formula tries to estimate the RAM size the machine can use for
the kernel target, even with swapping - while still leaving some
amount of usable RAM.

Estimation is based on documented RAM usage of VDO target.

If /proc/meminfo is unavailable, try to use the 'sysinfo()'
function, however this gives only free RAM without knowing
how much RAM could eventually be swapped.

TODO: move _get_memory_info() into generic lvm2 API function used
by other targets with non-trivial memory requirements.
2022-07-11 01:18:24 +02:00
f83b3962c1 asan: fix some reports from libasan
When compiled and used with:

CFLAGS="-fsanitize=address -g -O0"
ASAN_OPTIONS=strict_string_checks=1:detect_stack_use_after_return=1:check_initialization_order=1:strict_init_order=1

we have a few reported issues - they were not normally spotted, since
we were still accessing our own memory - but out of buffer range.

TODO: there is still something to enhance with handling of #orphan vgids
2022-02-07 20:02:11 +01:00
e6f735d411 vdo: read new sysfs path
New versions of the kvdo module expose statistics at a new location:
/sys/block/dm-XXX/vdo/statistics/...

Enhance lvm2 to access this location first.
Also, if the statistics info is missing, make it 'debug' level info,
so it does not fail the 'lvs' command.
2021-09-09 15:24:15 +02:00
2c6a2b6e86 vdo: support vdo_pool_header_size
Add a profilable, configurable setting for the vdo pool header size,
which is used as 'extra' empty space at the front and end of the
vdo-pool device to avoid having a disk in the system that may carry
the same data as the real vdo LV.

For some conversion cases, however, we may need to allow using a '0' header size.

TODO: in this case we may eventually avoid adding the 'linear' mapping layer
in the future - but this requires further modification across the lvm code base.
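
A sketch of inspecting the setting (assuming it lives under the
allocation section of lvm.conf; the exact path is an assumption):

  # show the configured vdo pool header size
  lvmconfig allocation/vdo_pool_header_size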
2021-06-28 20:41:07 +02:00
4a746f7ffc lvremove: fix removing thin pool with writecache on data 2021-05-24 16:09:35 -05:00
ef1c57e68f lib: locking: Add new type "idm"
We can consider the drive firmware a server handling the locking
requests from nodes; this essentially is a client-server model.
DLM uses the kernel as a central place to manage locks, so it also
complies with the client-server model for locking operations.  This is
why IDM and DLM are similar to each other in their wrappers.

This patch largely works by generalizing the DLM code paths and then
providing degeneralized functions as wrappers for both IDM and DLM.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:01:05 -05:00
0a28e3c44b Add metadata-based autoactivation property for VG and LV
The autoactivation property can be specified in lvcreate
or vgcreate for new LVs/VGs, and the property can be changed
by lvchange or vgchange for existing LVs/VGs.

 --setautoactivation y|n
 enables|disables autoactivation of a VG or LV.

Autoactivation is enabled by default, which is consistent with
past behavior.  The disabled state is stored as a new flag
in the VG metadata, and the absence of the flag allows
autoactivation.

If autoactivation is disabled for the VG, then no LVs in the VG
will be autoactivated (the LV autoactivation property will have
no effect.)  When autoactivation is enabled for the VG, then
autoactivation can be controlled on individual LVs.

The state of this property can be reported for LVs/VGs using
the "-o autoactivation" option in lvs/vgs commands, which will
report "enabled", or "" for the disabled state.

Previous versions of lvm do not recognize this property.  Since
autoactivation is enabled by default, the disabled setting will
have no effect in older lvm versions.  If the VG is modified by
older lvm versions, the disabled state will also be dropped from
the metadata.

The autoactivation property is an alternative to using the lvm.conf
auto_activation_volume_list, which is still applied to VGs/LVs
in addition to the new property.

If VG or LV autoactivation is disabled either in metadata or in
auto_activation_volume_list, it will not be autoactivated.

An autoactivation command will silently skip activating an LV
when the autoactivation property is disabled.

To determine the effective autoactivation behavior for a specific
LV, multiple settings would need to be checked:
the VG autoactivation property, the LV autoactivation property,
the auto_activation_volume_list.  The "activation skip" property
would also be relevant, since it applies to both normal and auto
activation.
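
For example (a sketch; names are hypothetical):

  # disable autoactivation for the whole VG
  vgchange --setautoactivation n vg0
  # re-enable it for the VG, then disable it for a single LV
  vgchange --setautoactivation y vg0
  lvchange --setautoactivation n vg0/lv1
  # report the property
  vgs -o name,autoactivation vg0
  lvs -o name,autoactivation vg0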
2021-04-07 15:32:49 -05:00
a9b4acd511 dev_manager: add lv_raid_status
Just like with other segtypes, use this function to get the whole
raid status info available with a single ioctl call.

It also nicely simplifies reading the percentage info about the
in_sync portion of a raid volume.

TODO: drop use of calls other than the lv_raid_status call,
since all such calls could already use status - keeping them
just adds unnecessary duplication.
2021-03-18 18:34:57 +01:00
a915cd5a46 lvconvert: vdo may convert already formatted vdo
Users can use 'lvconvert -Zn --type vdo-pool' to convert an existing
vdo-formatted volume and skip lvm2's internal formatting.
This however requires the user to pass proper matching parameters.
For these, the user can use the --profile|--metadataprofile options,
whose support has also been enhanced.

TODO: add support to read values directly from the formatted volume.
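
A hedged sketch (the profile and LV names are hypothetical):

  # adopt an existing vdo-formatted LV without re-formatting it,
  # supplying matching parameters via a profile
  lvconvert -Zn --type vdo-pool --metadataprofile vdo_match vg0/existing_vdo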
2021-02-17 11:21:35 +01:00
abc9265a06 cache: reuse code for metadata min_max
Use update_pool_metadata_min_max() which is shared with
thin-pool metadata min-max updating.

Gives improved messages when converting volumes to metadata.
2021-02-01 12:06:13 +01:00
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pool used a slightly smaller max size of
15.81GiB for thin-pool metadata. However, the real limit later settled
at 15.88GiB (the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size, as it has been using hard
cropping of the loaded metadata device to avoid the kernel printing
warnings when the size was bigger (e.g. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.
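
A sketch of querying the setting (assuming lvmconfig is available):

  # 0 (the default) = do not crop metadata beyond 15.81GiB
  lvmconfig allocation/thin_pool_crop_metadata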

Without cropping, lvm2 now limits the metadata allocation size to
15.88GiB. Any space beyond that is currently not used by the thin-pool
target, even if e.g. a bigger LV is used for metadata via lvconvert,
or it is allocated bigger because of too large an extent size.

With cropping enabled (=1), lvm2 preserves the old 15.81GiB
limitation and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).

Thin-pool metadata with a size bigger than 15.81G now uses the
CROP_METADATA flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer
from various issues between thin_repair results and the allocated
metadata LV (the thin_repair limit is 15.88GiB).
Users should use cropping only when really needed!

The patch also better handles resizing of thin-pool metadata and
prevents resizing beyond the usable size of 15.88GiB. Resizing beyond
15.81GiB automatically switches the pool to the no-crop version; even
with existing bigger thin-pool metadata, the command
'lvextend -l+1 vg/pool_tmeta' makes the change.

The patch gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.

The patch set also moves the code for updating min/max into
pool_manip.c for better sharing with the cache_pool code.
2021-02-01 12:06:13 +01:00