mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
Commit Graph

562 Commits

Author SHA1 Message Date
Zdenek Kabelac
1bed2cafe8 vdo: read live vdo size configuration
Introduce struct vdo_pool_size_config, usable to calculate the
memory size necessary for an active VDO volume.

Function lv_vdo_pool_size_config() can read this configuration
out of the runtime DM table line.
2023-01-16 12:37:40 +01:00
Zdenek Kabelac
2e79b005c2 dev_manager: accept misaligned vdo pools.
Since lvm2 may create the VDO pool virtual size aligned only on the extent size
while VDO itself is just 4K aligned - we need to support such misalignment.
2022-11-02 13:59:34 +01:00
David Teigland
264827cb98 lvresize: add new options and defaults for fs handling
The new option "--fs String" for lvresize/lvreduce/lvextend
controls the handling of file systems before/after resizing
the LV.  --resizefs is the same as --fs resize.

The new option "--fsmode String" can be used to control
mounting and unmounting of the fs during resizing.

Possible --fs values:

checksize
  Only applies to reducing size; does nothing for extend.
  Check the fs size and reduce the LV if the fs is not using
  the affected space, i.e. the fs does not need to be shrunk.
  Fail the command without reducing the fs or LV if the fs is
  using the affected space.

resize
  Resize the fs using the fs-specific resize command.
  This may include mounting, unmounting, or running fsck.
  See --fsmode to control mounting behavior, and --nofsck to
  disable fsck.

resize_fsadm
  Use the old method of calling fsadm to handle the fs
  (deprecated.) Warning: this option does not prevent lvreduce
  from destroying file systems that are unmounted (or mounted
  if prompts are skipped.)

ignore
  Resize the LV without checking for or handling a file system.
  Warning: using ignore when reducing the LV size may destroy the
  file system.

Possible --fsmode values:

manage
  Mount or unmount the fs as needed to resize the fs,
  and attempt to restore the original mount state at the end.

nochange
  Do not mount or unmount the fs. If mounting or unmounting
  is required to resize the fs, then do not resize the fs or
  the LV and fail the command.

offline
  Unmount the fs if it is mounted, and resize the fs while it
  is unmounted. If mounting is required to resize the fs,
  then do not resize the fs or the LV and fail the command.

Notes on lvreduce:

When no --fs or --resizefs option is specified:
. lvextend default behavior is fs ignore.
. lvreduce default behavior is fs checksize
  (includes activating the LV.)

With the exception of --fs resize_fsadm|ignore, lvreduce requires
the recent libblkid fields FSLASTBLOCK and FSBLOCKSIZE.
FSLASTBLOCK*FSBLOCKSIZE is the last byte used by the fs on the LV,
which determines if reducing the fs is necessary.
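
For illustration, hypothetical invocations using the new options might be:

$ lvextend --fs resize -L+2G vg/lv
$ lvreduce --fs checksize -L-1G vg/lv
$ lvextend --fs resize --fsmode offline -L+2G vg/lv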
2022-09-13 15:15:05 -05:00
Zdenek Kabelac
e26c21cb8d vdo: extend volume and pool without flush
When the volume size is extended, there is no need to flush
IO operations (nothing can be targeting the new space yet).
The VDO target is supported as a target that can safely work
under this condition.

Such support is also needed when extending the VDOPOOL size
while the pool is reaching its capacity - since this allows
work to continue without hitting an 'out-of-space' condition
caused by flushing all in-flight IO.
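
For example (hypothetical names and sizes), extending a VDO pool that is
nearing its capacity could look like:

$ lvextend -L+10G vg/vdopool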
2022-08-19 14:56:55 +02:00
Zdenek Kabelac
4d2f9a4ff3 cov: restore disable_dm_devs also for error path
Keep the structure correct for the failing error path,
although likely this particular var will not be used.
2022-07-11 01:18:24 +02:00
David Teigland
4eb04c8c05 devices: fix dev_name assumptions
dev_name(dev) returns "[unknown]" if there are no names
on dev->aliases.  It's meant mainly for log messages.

Many places assume a valid path name is returned, and
use it directly.  A caller that wants to use the path
from dev_name() must first check if the dev has any
paths with dm_list_empty(&dev->aliases).
2022-02-24 17:22:04 -06:00
Zdenek Kabelac
d4a0816a58 dev_manager: use list info for preset devs
In some cases we can also use cached info obtained from DM_DEVICE_LIST
to avoid an extra ioctl check for present devices.
2022-02-16 01:00:36 +01:00
Zdenek Kabelac
fa7b67eeeb dev_manager: do not query for open_count
Open count is not used along this code path.
2022-02-16 01:00:36 +01:00
Zdenek Kabelac
c2679f76e5 dev_manager: failing status is not internal error
A different target type for an LV is not an internal error.
I.e. when the target type is replaced with the 'error' type - it should be
reported as a regular warning and not cause interruption of the command with
an internal error.
2022-02-16 01:00:36 +01:00
Zdenek Kabelac
6ffb150f30 dev_manager: fix dm_task_get_device_list
With very old versions of the DM target driver we have to avoid
trying to use the newuuid setting - otherwise we get an error
during the ioctl preparation phase.

This patch fixes a regression from commit:
988ea0e94c
2022-02-16 01:00:36 +01:00
Zdenek Kabelac
04fbffb116 label: cache dm device list
Since we check for present DM devices - cache the result for
further use when checking the presence of such devices.

lvm2 uses the cached result for label scan, but also when
it tries to activate or deactivate an LV - however only the simple
target 'striped' is reasonably supported.

Use disable_dm_devs to be able to control when lv_info()
gets cached or uncached results.

TODO: support more types, however this is getting very complicated.
2021-12-20 16:13:28 +01:00
Zdenek Kabelac
0d67bc96fd activate: add get_device_list
Add function get_device_list() to get the list of active DM devices.
Handled through the new dm_task_get_device_list().
2021-12-20 16:13:28 +01:00
Zdenek Kabelac
65236ee722 activate: device_is_usable
Move the check of a usable UUID into a separate function
and also pass in the cmd context.
2021-12-20 16:13:28 +01:00
Zdenek Kabelac
9971459d19 make: replace legacy rindex with strrchr
rindex seems to have already been dropped by some systems.

Reported-by: adamboardman of gemian
2021-09-27 18:56:14 +02:00
Zdenek Kabelac
e6f735d411 vdo: read new sysfs path
New versions of the kvdo module expose statistics at a new location:
/sys/block/dm-XXX/vdo/statistics/...

Enhance lvm2 to access this location first.
Also, if the statistics info is missing - make it 'debug' level info,
so it does not fail the 'lvs' command.
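
The presence of the new location can be checked, for instance, with a
hypothetical listing such as:

$ ls /sys/block/dm-*/vdo/statistics/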
2021-09-09 15:24:15 +02:00
David Teigland
96b777167c cov: clean up pvid and vgid usage
pvid and vgid are sometimes a null-terminated string, and
other times a 'struct id', and the two types were often
cast between each other.  When a struct id was cast to a char
pointer, the resulting string would not necessarily be null
terminated.  Casting a null-terminated string id to a
struct id is fine, but is still avoided when possible.

A struct id is:  int8_t uuid[ID_LEN]
A string id is:  char pvid[ID_LEN + 1]

A convention is introduced to help distinguish them:

- variables and struct fields named "pvid" or "vgid"
  should be null-terminated strings.

- variables and struct fields named "pv_id" or "vg_id"
  should be struct id's.

- examples:
  char pvid[ID_LEN + 1];
  char vgid[ID_LEN + 1];
  struct id pv_id;
  struct id vg_id;

Function names also attempt to follow this convention.

Avoid casting between the two types as much as possible,
with limited exceptions when known to be safe and clearly
commented.

Avoid using variations of strcpy and strcmp, and instead
use memcpy/memcmp with ID_LEN (with similar limited
exceptions possible.)
2021-08-16 11:31:15 -05:00
Zdenek Kabelac
ff8aaadec9 mirror_percent: support interruptible check
When checking the mirror percentage with WAITEVENT (i.e. during pvmove)
handle interruption (^C) of such a wait.
2021-04-06 22:02:31 +02:00
Zdenek Kabelac
adc238062d dev_manager: skip also zero targets
Devices made only from the 'error' target cannot be used;
the same rule applies when the device is also combined with the 'zero'
target, as such a device cannot be used either.
2021-03-18 18:34:57 +01:00
Zdenek Kabelac
a9b4acd511 dev_manager: add lv_raid_status
Just like with other segtypes, use this function to get the whole
raid status info available with a single ioctl call.

It also nicely simplifies reading the percentage info about
the in_sync portion of a raid volume.

TODO: drop the use of calls other than lv_raid_status,
since all such calls could already use status - so they just
add unnecessary duplication.
2021-03-18 18:34:57 +01:00
Zdenek Kabelac
e5a600860c dev_manager: status check with info check included
Reduce the ioctl count and avoid a separate info check
when we can get the same info from the status ioctl.

When dev_manager calls return 0, an exists value of 0
means the reason for failure is a missing device in the table.
In such a case we avoid a stack trace.

Swap the flush parameter of the vdo status function
to match thin pool status.
2021-03-18 18:34:57 +01:00
Zdenek Kabelac
94701b700b cleanup: use dm_strncpy
Use own function.
2021-03-17 00:50:22 +01:00
Zdenek Kabelac
4d75c4f597 device_is_usable: minor improve
Replace the allocated buffer with a local vg_name, which avoids
passing around a pointer to the allocation.

Join some conditions together.
2021-03-17 00:49:11 +01:00
David Teigland
e9d10f3711 filters: better message for excluding LV
Make the generic "device is not usable" message from filter-usable
more specific in case the device is not usable because it's an LV.
(i.e. when scan_lvs=0)
2021-03-03 12:07:57 -06:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pools used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size as it has been using hard cropping
of the loaded metadata device to avoid the kernel printing warnings
when the size was bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no crop of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.

Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if e.g. a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too large extent size.

With cropping enabled (=1) lvm2 preserves the old limitation of
15.81GiB and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).

Thin-pool metadata with a size bigger than 15.81G now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version as it does not suffer
from various issues between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!

The patch also better handles resize of thin-pool metadata and prevents resize
beyond the usable size of 15.88GiB. Resize beyond 15.81GiB automatically
switches the pool to the no-crop version. Even with existing bigger thin-pool
metadata, the command 'lvextend -l+1 vg/pool_tmeta' does the change.

The patch gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.

The patch set also moves the code for updating min/max into pool_manip.c
for better sharing with the cache_pool code.
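
A hypothetical lvm.conf snippet enabling the old cropping behavior
(the setting and its default of 0 are described above; the snippet
itself is only illustrative):

allocation {
    thin_pool_crop_metadata = 1
}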
2021-02-01 12:06:13 +01:00
David Teigland
9fe7aba251 cache: activation cache_check on cachevol
When using cache with a cachevol, the cache_check tool was
not being run on the cache metadata during activation.
cache_check clears the needs_check flag in the cache
metadata, so if the flag was set due to an unclean
shutdown, the activation would fail.
2020-12-09 17:36:09 -06:00
Zdenek Kabelac
4de6f58085 thin: use lv_status_thin and lv_status_thin_pool
Introduce structures lv_status_thin_pool and
lv_status_thin  (pair to lv_status_cache, lv_status_vdo)

Convert lv_thin_percent() -> lv_thin_status()
and  lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
lv_thin_pool_status().

This way a function user can see not only percentages, but also
other important status info about thin-pool.

TODO:
This patch tries not to change too many other things,
but pool_below_threshold() now uses the new thin-pool info to return
failure if the thin-pool cannot actually be modified.
This should be handled separately in a better way.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
af33a00847 Revert "raid: add _rimage and _rmeta as origin_only"
This reverts commit 3388e19489.
More thinking needed.
2020-09-09 00:58:52 +02:00
Zdenek Kabelac
3388e19489 raid: add _rimage and _rmeta as origin_only
Since we do not support rimage & rmeta for snapshots - we can
avoid querying for -cow devices and add them as origin_only -
since their snapshots (-cow) could never have existed.
This reduces several ioctl operations during table preloading.
2020-09-08 21:23:03 +02:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
Zdenek Kabelac
caff31df19 vdo: make vdopool wrapping device read-only
When a vdopool is activated standalone - we use a wrapping linear device
to keep the actual vdo device active - for this we can set up a read-only
device to ensure no writes can be made through this device to the
actual pool device.
2020-03-23 17:13:26 +01:00
David Teigland
8153c5f1e6 writecache: working real dm uuid suffix for wcorig lv 2020-02-20 17:13:43 -06:00
Zdenek Kabelac
892a182975 cachevol: stop dm errors when uncaching a cache with cachevol
Fix the annoying kernel message reported:
device-mapper: cache: 253:2: metadata operation 'dm_cache_commit' failed: error = -5
which has been reported while a cachevol has been removed.
It happened via a confusing variable - so switch the variable to the commonly used '_size'
which presents a value in sector units, and avoid 'scaling' this as extent length
by the vg extent size when placing the 'error' target on the removal path.

The patch shouldn't have an impact on actual user data, since at this moment
of removal all data should have already been flushed to the origin device.
2020-02-11 17:19:57 +01:00
Zdenek Kabelac
43db8f8d5d cov: ensure read_ahead is available
Make sure the read_ahead pointer is not NULL when querying for RA.
2019-11-11 22:44:25 +01:00
Zdenek Kabelac
8679d45917 gcc: avoid declaration shadowing
dev_name is global in device.h
2019-11-11 22:44:18 +01:00
Zdenek Kabelac
80b2de9e6a mirror: fix leg splitting
Enhance lv_info with lv_info_with_name_check.
This 'variant' not only checks the existence of the UUID in the DM table
but also compares its DM name against the expected LV name.
Otherwise activation may 'skip' the activation-with-rename in case the
DM UUID already exists but the device has a different name.

This change makes it considerably easier to manipulate e.g. a detached mirror
leg which ATM is using the same UUID - just the LV name has been changed.

The code used before was not able to run 'activation' (and do a rename) and just
skipped the call. So the code used to do a workaround and 'tried'
to deactivate such an LV first - this however worked only in the non-clvmd case,
as the cluster was not holding the lock for the deactivated LV.

With this extended lv_info the code will run 'activation' and will
synchronize the name to match the expected LV name.

The patch extends _lv_info() with the new parameter 'with_name_check',
which is later translated into the 'name_check' argument for
_info_run(), which in case of a name mismatch evaluates the
check as if the device does not exist.

Such a call is only used in one place, _lv_activate(), which then
lets activation run.  All other invocations of _info() calls
are left intact.

TODO: fix mirror table manipulation (and raid)....
2019-10-31 15:31:30 +01:00
Zdenek Kabelac
9968be55ed snapshot: correctly check device id of merged thin
When checking the device id of a thin device that is just being
merged - the snapshot actually could have already finished,
which means the '-real' suffix for the LV is already gone and just the LV
is there - so check explicitly for this condition and use the
correct UUID for this case.
2019-10-26 00:49:16 +02:00
Zdenek Kabelac
f61d828c86 gcc: older compiler is happier with this initializer 2019-10-21 15:32:35 +02:00
Zdenek Kabelac
2a08d6d1d4 cachevol: use CVOL UUID for cdata and cmeta layered devices
Since the code is using the -cdata and -cmeta UUID suffixes, it does not need
any new 'extra' ID to be generated and stored in metadata.

Since the introduction of the new 'segtype' cache+CACHE_USES_CACHEVOL we can
safely assume a 'new' cache with cachevol will now be created
without extra metadata_id and data_id in metadata.

For backward compatibility, the code still reads them in case an older
version of the metadata has them - so it should still be able
to activate such volumes.

A bonus is the lowered size of the lv structure used to store info about an LV
(noticeable with big volume groups).
2019-10-17 13:03:49 +02:00
Zdenek Kabelac
1cd308d640 cachevol: drop no longer needed functions
Code is no longer used/needed.
2019-10-14 15:20:25 +02:00
Zdenek Kabelac
2825ad9dd2 cachevol: improve manipulation with dm tree
Enhance activation of cached devices using cachevol.
Correctly instantiate cachevol -cdata & -cmeta devices with
'-' in the name (as they are only layered devices).
The code is also a bit more compact (although still not ideal,
as the usage of extra UUIDs stored in metadata is troublesome
and will be repaired later).

NOTE: this patch may bring a potentially minor incompatibility for a 'runtime' upgrade
2019-10-14 15:17:50 +02:00
Zdenek Kabelac
a454a1b4ea cachevol: put _cvol as protected suffix.
This revert "drop cvol dm uuid suffix for cachevol LVs"
commit 5191057d9d.
Start using -cvol for  DM UUID.
2019-10-14 15:16:05 +02:00
David Teigland
5191057d9d drop cvol dm uuid suffix for cachevol LVs
The "-cvol" suffix on the uuid is interfering with
activation code, so drop the suffix for now.
2019-09-23 14:13:31 -05:00
David Teigland
515e37b6dd cachevol: add dm uuid suffixes to hidden lvs
to indicate they are private lvm devs
2019-09-20 09:59:37 -05:00
Zdenek Kabelac
6612d8dd5e vdo: enhance activation with layer -vpool
Enhance the 'activation' experience for a VDO pool to more closely match
what happens for thin-pools, where we do use a 'fake' LV to keep the pool
running even when no thinLVs are active. This gives the user a choice
whether to keep the thin-pool running (without a possibly lengthy
activation/deactivation process).

As we do plan to support multiple VDO LVs to be mapped into a single VDO,
we want to give users the same experience and 'use-pattern' as with thin-pools.

This patch gives the option to activate the VDO pool only, without activating
the VDO LV.

Also, due to the 'fake' layering LV, we can protect usage of the VDO pool from
commands like 'mkfs' which do require exclusive access to the volume,
which is no longer possible.

Note: the VDO pool contains 1024 initial sectors as an 'empty' header - such
a header is also exposed in the layered LV (as a read-only LV).
For blkid we are identified as an LV with a UUID suffix - thus a private DM
device of lvm2 - so we do not need to store any extra info in this
header space (aka zero is good enough).
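
For example (hypothetical VG and LV names), activating just the pool
without any VDO LV might look like:

$ lvchange -ay vg/vdopool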
2019-09-17 13:17:19 +02:00
Zdenek Kabelac
66f69e766e thin: activate layer pool as read-only LV
When lvm2 is activating a layered pool LV (basically to keep the pool open,
the other function being to keep 'locking' in sync with the DM table),
use this LV in read-only mode - this prevents 'write' access into
the data volume content of the thin-pool.

Note: an EMPTY/unused thin-pool is created as a 'public LV' for generic
use by any user who e.g. wishes to maintain the thin-pool and thins himself.
At this moment, the thin-pool appears as a writable LV.  As soon as the 1st
thinLV is created, the layer volume will appear as a 'read-only' LV from that moment.
2019-09-17 13:16:50 +02:00
Zdenek Kabelac
693215716b devices: crypto skip
Devices with UUID signature CRYPT-SUBDEV are internal crypto devices.
2019-09-17 13:15:22 +02:00
Zdenek Kabelac
b2885b7103 activation: use cmd pending mem for pending_delete
Since we need to preserve allocated strings across 2 separate
activation calls of '_tree_action()' we need to use a mem pool
other than dm->mem - but since cmd->mem is released between
individual lvm2 locking calls, we rather introduce a new separate
mem pool just for pending deletes with an easy-to-see life-span.
(not using 'libmem' as it would basically keep allocations over
the whole lifetime of clvmd)

This patch fixes the previous commit where the memory was
improperly used after pool release.
2019-08-27 15:54:42 +02:00
Zdenek Kabelac
7833c45fbe activation: extend handling of pending_delete
With the previous patch 30a98e4d67 we
started to put devices on a pending_delete list instead
of directly scheduling their removal.

However we have operations like 'snapshot merge' where we are
resuming the device tree in 2 subsequent activation calls - so
the 1st such call will still have suspended devices and no chance
to push the 'remove' ioctl.

Since we currently cannot easily solve this by doing just a single
activation call (which would be the preferred solution) - we introduce
a preservation of pending_delete via the command structure and
then restore it on the next activation call.

This way we keep removing devices later - although it might not be
the best moment - this may need further tuning.

Also we don't keep the list of operations in 1 transaction
(unless we do verify udev symlinks) - this could probably
also be made more correct in terms of which 'remove' can
be combined with an already running 'resume'.
2019-08-26 15:16:38 +02:00
Zdenek Kabelac
30a98e4d67 activation: add synchronization point
Resuming an 'error' table entry followed by its direct removal
is now troublesome with the latest udev as it may skip processing of
udev rules for already 'dropped' device nodes.

As we cannot 'synchronize' with udev while we know we have devices
in a suspended state - rework 'cleanup' so it collects nodes
for removal into a pending_delete list and processes the list with
synchronization once we are without any suspended nodes.
2019-08-20 12:46:11 +02:00
Zdenek Kabelac
0451225c19 pvmove: correcting read_ahead setting
When pvmove is finished, we do a tricky operation since we try to
resume multiple different devices that were all joined into 1 big tree.

Currently we use the information from the existing live DM table,
where we can get the list of all holders of the pvmove device.
We look for these nodes (by uuid) in the new metadata, and we now do a full
regular device add into the dm tree structure.  All devices should be
already PRELOADED with the correct table before entering the suspend state,
however for correctly working readahead we need to put the correct info
also into the RESUME tree.  Since the tables are preloaded, the same table
is skipped on resume, but the correct read ahead is now set.
2019-08-20 12:37:32 +02:00