mirror of git://sourceware.org/git/lvm2.git
Commit Graph

48 Commits

Author SHA1 Message Date
Zdenek Kabelac
026344e8cc cleanup: local const arrays
No need for relocation entries for locally used arrays.
2024-05-13 12:58:37 +02:00
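A minimal sketch of the pattern behind this cleanup (names and contents illustrative, not the actual lvm2 code):

    /* A pointer array needs one relocation entry per element in
     * position-independent code: */
    static const char *_suffixes_ptrs[] = { "k", "m", "g" };

    /* A two-dimensional char array embeds the strings directly in
     * .rodata, so no relocation entries are required: */
    static const char _suffixes[][2] = { "k", "m", "g" };
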
Zdenek Kabelac
9b2f9d64c0 libdm: ensure suffixes list elements are const
This was rather an API mistake - the internals of libdm
already handle the suffixes list as const strings; only the API
was using 'const char **'.

So the user may now safely pass 'const char * const *'.
2024-05-04 00:57:52 +02:00
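A hedged illustration of the API change (the function name and shape are hypothetical, not the actual libdm prototype):

    #include <stdint.h>

    /* Before: a caller holding 'const char * const *' needed a cast. */
    uint64_t units_to_bytes_v1(const char *units, const char **suffixes);

    /* After: the parameter promises not to modify the pointers either,
     * so 'const char * const *' can be passed safely. */
    uint64_t units_to_bytes_v2(const char *units, const char *const *suffixes);
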
Zdenek Kabelac
d490572410 gcc: clear some complaints
Use dm_strncpy() where possible to get rid of gcc compile warnings.
2024-04-08 14:52:23 +02:00
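A minimal sketch of the replacement pattern, assuming libdm's dm_strncpy(), which always NUL-terminates and reports truncation via its return value:

    #include <libdevmapper.h>

    static int _copy_name(char *buf, size_t len, const char *name)
    {
            /* strncpy() may leave buf unterminated and draws gcc's
             * -Wstringop-truncation; dm_strncpy() does neither. */
            return dm_strncpy(buf, name, len);
    }
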
Zdenek Kabelac
93a633092a cov: store 64b flags
Although no target currently uses that many bits for its flags,
preserve the size for the loaded segment internally.
2024-03-29 01:36:48 +01:00
Zdenek Kabelac
68fdae1184 debug: enhance device_mapper debug log for dm tree 2024-03-17 13:17:53 +01:00
Zdenek Kabelac
3bd9f936da thin: delayed resume for LV conversion
When an existing LV is being converted to an external origin,
there is a short moment during the DM table manipulation when
the LV is 'resumed' as a 'read-only' volume while still being
live as an 'rw' volume - i.e. a single thin LV could have been
active twice.

To avoid such weird scenarios of dual access to the same volume,
just postpone the resume until the moment the existing volume
is already suspended, so no I/O can be in flight to such a device.

Note: there is a slight catch - we now basically have a different
'risk' case, where the resume of e.g. the new external origin LV
might fail while we are already in the suspended tree state -
resolving the error path in this situation is non-trivial as well...
2024-03-17 13:17:53 +01:00
Zdenek Kabelac
c8a5948a71 device_mapper: reactivate siblings for snapshot
When activating an origin and its thick snapshots, ensure the
origin LV's udev processing is finished first, and only then
reactivate its snapshots so udev can scan them afterwards.

This should fix problems for users who use the UUID of such a device
in their fstab and occasionally mounted a snapshot instead of the origin LV.
2023-02-01 11:47:47 +01:00
Zdenek Kabelac
85aa236946 device_mapper: reduce unnecessary looping passes
While looping through the list of nodes, check whether a higher
priority is present and another iteration is still needed.
2023-02-01 11:47:47 +01:00
David Teigland
fa7fe5cbbe writecache: support settings metadata_only and pause_writeback
Two new settings for tuning dm-writecache.
2022-12-08 16:53:36 -06:00
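A hedged usage sketch, assuming the new settings are applied like other dm-writecache tunables via --cachesettings (values illustrative):

    $ lvchange --cachesettings 'metadata_only=1 pause_writeback=3000' vg/main
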
Zdenek Kabelac
8e9410594b vdo: enhance detection of virtual size
Improve detection of the VDO virtual size - when the VDO device
is already active, do not read the VDO metadata and instead reuse
the existing table line to learn the existing size.

NOTE: if support for reducing the VDO virtual size is ever added,
this method will need to be reworked to allow a size difference
only within 'extent_size' alignment.
2022-11-08 11:10:21 +01:00
Zdenek Kabelac
36a923926c device_mapper: vdo V4 avoid messaging
With the V4 format, build the table line with compression and
deduplication settings and skip sending any messages to set up
these parameters.
2022-11-02 13:59:34 +01:00
Zdenek Kabelac
829ab01708 device_mapper: add parser for vdo metadata
Add a very simplistic parser of VDO metadata to obtain the
logical_blocks value stored within it - lvm2 may submit a smaller
value due to internal alignment rules.

To avoid creating a mismatching table line, use this number
instead of the one provided by lvm2.
2022-11-02 13:59:34 +01:00
Zdenek Kabelac
1c18ed3b4a vdo: support v4 kernel target line
Check for and use the new v4 table line, if the kernel supports it.
2022-07-11 01:18:24 +02:00
Zdenek Kabelac
9971459d19 make: replace legacy rindex with strrchr
rindex seems to have been dropped by some systems already.

Reported-by: adamboardman of gemian
2021-09-27 18:56:14 +02:00
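The replacement in a nutshell - strrchr() from <string.h> is the standard spelling of the legacy rindex() from <strings.h>:

    #include <string.h>

    static const char *_basename(const char *path)
    {
            const char *slash = strrchr(path, '/'); /* was: rindex(path, '/') */

            return slash ? slash + 1 : path;
    }
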
Zdenek Kabelac
a8ee82ed51 libdm: enhance tracking of activated LVs
The existing mechanism was not able to trace a root volume issue.
Simplify the functionality by simply using an 'activated' flag
and tracing the dtree in reverse order.
2021-09-13 12:34:41 +02:00
Zdenek Kabelac
6c4cd7b2f2 cache: fix parentheses for migration_threshold
When generating the table line for the cache target, the
estimated number of added arguments was calculated incorrectly,
since "?" is evaluated after "+".

However the result was 'masked' by the

Reported-by: Jian Cai jcai19
2021-09-13 12:34:41 +02:00
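The precedence pitfall described above, in a minimal sketch (names illustrative):

    static int _estimate_args(int argc, int threshold_set)
    {
            /* Buggy form: "?:" binds less tightly than "+", so
             *     argc + threshold_set ? 2 : 0
             * parses as (argc + threshold_set) ? 2 : 0 - yielding
             * 2 or 0, never argc + 2. */

            /* Correct - parentheses force the intended grouping: */
            return argc + (threshold_set ? 2 : 0);
    }
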
Zdenek Kabelac
79427151dc vdo: add support for auto-unsafe writePolicy
This vdoWritePolicy value was missing matching support in lvm2.
2021-09-06 15:19:51 +02:00
Zdenek Kabelac
f1e8437c59 device_mapper: remove unused lines
No need for versioning history in internal version.
2021-03-30 13:07:51 +02:00
Zdenek Kabelac
2d64ffaee5 hash: use individual hint sizes
Use a different 'hint' size for each dm_hash_create() call, so
when debug info about a hash table is printed we can recognize
which hash was in use.

This patch doesn't change the actual size used, since that is
always rounded to a power of 2 and >= 16 - so this is only a
help for developers.

We could eventually use a 'name' arg, but since that would have
changed the API, and this patchset will be routed to libdm &
stable, we just use this small trick.
2021-03-08 15:33:15 +01:00
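A minimal sketch of the trick (hint values illustrative; the real size is rounded to a power of 2 and >= 16 regardless):

    #include <libdevmapper.h>

    static int _create_tables(struct dm_hash_table **pvs,
                              struct dm_hash_table **lvs)
    {
            /* Distinct hints let hash debug output identify each table. */
            if (!(*pvs = dm_hash_create(9)))
                    return 0;

            if (!(*lvs = dm_hash_create(10))) {
                    dm_hash_destroy(*pvs);
                    return 0;
            }

            return 1;
    }
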
Zdenek Kabelac
0b7a4503e5 integrity: mark as user of secure_data
Use secure_data with the integrity target. Not a big difference,
since the secure feature of the integrity target is not used by lvm2.
2021-03-02 22:57:35 +01:00
Zdenek Kabelac
4b371246f5 device_mapper: simplify line emitter checking 2021-02-10 15:39:03 +01:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pool used a slightly smaller max size of
15.81GiB for thin-pool metadata. However the real limit later
settled at 15.88GiB (the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size, as it has been using hard
cropping of the loaded metadata device to avoid kernel warnings
when the size was bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.

Without cropping, lvm2 now limits the metadata allocation size to
15.88GiB. Any space beyond that is currently not used by the
thin-pool target, even if e.g. a bigger LV is used for metadata
via lvconvert, or a bigger one is allocated because of a too-large
extent size.

With cropping enabled (=1), lvm2 preserves the old 15.81GiB
limitation and should allow working in environments with
older lvm2 tools (i.e. older distributions).

Thin-pool metadata with a size bigger than 15.81GiB now uses the
CROP_METADATA flag within lvm2 metadata, so older lvm2 recognizes
an incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer from
various mismatches between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!

The patch also better handles resizing of thin-pool metadata and
prevents resizing beyond the usable size of 15.88GiB. Resizing
beyond 15.81GiB automatically switches the pool to the no-crop
version; even with existing bigger thin-pool metadata, the command
'lvextend -l+1 vg/pool_tmeta' makes the change.

The patch gives better control over the 'converted' metadata LV
and reports a less confusing message during conversion.

The patch set also moves the code for updating min/max into
pool_manip.c for better sharing with the cache_pool code.
2021-02-01 12:06:13 +01:00
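The setting named above, as it would appear in lvm.conf (a sketch; 0 is the default):

    allocation {
            # 1 = keep the old 15.81GiB hard cropping for compatibility
            #     with older lvm2 tools
            # 0 = no cropping; metadata may be allocated up to 15.88GiB
            thin_pool_crop_metadata = 0
    }
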
Zdenek Kabelac
a17ec7e0ba dm: remove created devices on error path
The DM tree keeps track of created devices while preloading a
device tree. When a failure occurs during such a preload, it will
now try to remove all created and preloaded devices. This makes
it easier to maintain device stacking, since we do not need to
check in depth for the existence of all possible created devices
during the failure.
2020-10-19 16:53:19 +02:00
Zdenek Kabelac
18c74666ee thin: validate thin-pool state before sending messages
Although lvm2 does validation on its side, ensure the DM code
is not sending messages to a failed thin pool.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
2bfa868f91 device_mapper: enhance error message 2020-09-25 22:59:35 +02:00
Zdenek Kabelac
50a37948b5 vdo: allow passing renamed vdopool name to kernel
Although the kernel does not allow loading a new dm table
with a renamed vdopool, at least make the lvm2 code ready
in case it ever gets supported.
2020-09-23 13:20:28 +02:00
David Teigland
a7b2fc8f57 writecache: add settings cleaner and max_age
available in dm-writecache 1.2
2020-06-10 12:15:50 -05:00
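A hedged usage sketch for these tunables (assuming they are set like other writecache settings; value illustrative):

    $ lvchange --cachesettings 'cleaner=1' vg/main
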
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
Zdenek Kabelac
9d8a028e8c vdo: keep minimum_io_size in sectors 2019-10-04 17:31:55 +02:00
Zdenek Kabelac
30a3dda9d6 dm: ensure migration_threshold is big enough
When using caches with a BIG pool size (>1TiB), a relatively huge
chunk size is required. Once the chunk size got over 1MiB, the
kernel cache target stopped writing such chunks back if the
migration_threshold remained at its default 1MiB (2048 sectors).

This patch ensures the DM layer will not pass a table line whose
migration threshold is not big enough to let through at least
8 chunks (independently of lvm2 metadata).
2019-01-21 12:48:50 +01:00
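A minimal sketch of the constraint (helper name hypothetical, not the actual libdm code):

    #include <stdint.h>

    /* Ensure the migration_threshold covers at least 8 cache chunks. */
    static uint32_t _min_migration_threshold(uint32_t chunk_sectors,
                                             uint32_t requested)
    {
            uint32_t min = 8 * chunk_sectors;

            return (requested < min) ? min : requested;
    }
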
Zdenek Kabelac
3320ab8334 lib: move towards v2 version of VDO format
Drop the very old original format of the VDO target and focus on
the V2 version, so some variables were renamed or replaced.
There is no compatibility preserved (on the assumption that this
is so far an experimental feature with no real users).

Note - VDO currently calls this version 6.2.
2018-12-20 13:26:55 +01:00
Zdenek Kabelac
db6d9e04af debug: drop extra tracing
Stack tracing after log_error() is not needed.
2018-12-14 15:14:48 +01:00
Zdenek Kabelac
836dc9876b devicemapper: retry mirror leg deactivation
This could be seen as a continuation of
6cee8f1b06.
Some test machines with an old udev system show a problem where
udev 'jumps on' a leg device after the mirror target releases its
legs - since udev (in this old case) does not skip such a device
from scanning, it opens the device, and this prevents the leg
device from being deactivated - effectively such a device stays
'leaked' in the DM table, invisible to the lvm2 command.

So to 'combat' this issue, if the device has '_mimage' in its
name, a retry of the deactivation is automatically assumed.

NOTE: no wider impact is expected, as this touches only the old
mirror target, which is nowadays replaced with 'raid'.

In case some problem is identified, probably both patches
should be reverted.
2018-11-12 15:30:40 +01:00
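A minimal sketch of the heuristic (helper name hypothetical):

    #include <string.h>

    /* Old-mirror legs carry '_mimage' in their name; udev may still be
     * holding them open, so their deactivation gets the retry flag. */
    static int _retry_deactivation(const char *dev_name)
    {
            return strstr(dev_name, "_mimage") != NULL;
    }
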
Zdenek Kabelac
6cee8f1b06 devicemapper: retry remove even for subLVs
With older systems and udevs we don't have control over the
scanning of lvm2-internal devices - so far we set retry-removal
only for top-level LVs, but in occasional cases udev can be 'fast
enough' to open a device for scanning and prevent the removal of
such a device from the DM table.

So to combat this case, try to pass the 'retry' flag also for the
removal of internal devices and see how many races go away with
this simple patch.

Note: the patch is applied only to the internal version of libdm,
so the external API keeps working in the old way for now.
2018-11-08 12:20:57 +01:00
David Teigland
3ae5569570 Add dm-writecache support
dm-writecache is used like dm-cache with a standard LV
as the cache.

$ lvcreate -n main -L 128M -an foo /dev/loop0

$ lvcreate -n fast -L 32M -an foo /dev/pmem0

$ lvconvert --type writecache --cachepool fast foo/main

$ lvs -a foo -o+devices
  LV            VG  Attr       LSize   Origin        Devices
  [fast]        foo -wi-------  32.00m               /dev/pmem0(0)
  main          foo Cwi------- 128.00m [main_wcorig] main_wcorig(0)
  [main_wcorig] foo -wi------- 128.00m               /dev/loop0(0)

$ lvchange -ay foo/main

$ dmsetup table
foo-main_wcorig: 0 262144 linear 7:0 2048
foo-main: 0 262144 writecache p 253:4 253:3 4096 0
foo-fast: 0 65536 linear 259:0 2048

$ lvchange -an foo/main

$ lvconvert --splitcache foo/main

$ lvs -a foo -o+devices
  LV   VG  Attr       LSize   Devices
  fast foo -wi-------  32.00m /dev/pmem0(0)
  main foo -wi------- 128.00m /dev/loop0(0)
2018-11-06 14:18:41 -06:00
David Teigland
cac4a9743a Allow dm-cache cache device to be standard LV
If a single, standard LV is specified as the cache, use
it directly instead of converting it into a cache-pool
object with two separate LVs (for data and metadata).

With a single LV as the cache, lvm will use blocks at the
beginning for metadata, and the rest for data.  Separate
dm linear devices are set up to point at the metadata and
data areas of the LV.  These dm devs are given to the
dm-cache target to use.

The single LV cache cannot be resized without recreating it.

If the --poolmetadata option is used to specify an LV for
metadata, then a cache pool will be created (with separate
LVs for data and metadata.)

Usage:

$ lvcreate -n main -L 128M vg /dev/loop0

$ lvcreate -n fast -L 64M vg /dev/loop1

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  main vg -wi-a----- 128.00m linear /dev/loop0(0)
  fast vg -wi-a-----  64.00m linear /dev/loop1(0)

$ lvconvert --type cache --cachepool fast vg/main

$ lvs -a vg
  LV           VG Attr       LSize   Origin       Pool  Type   Devices
  [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
  main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
  [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)

$ lvchange -ay vg/main

$ dmsetup ls
vg-fast_cdata   (253:4)
vg-fast_cmeta   (253:5)
vg-main_corig   (253:6)
vg-main (253:24)
vg-fast (253:3)

$ dmsetup table
vg-fast_cdata: 0 98304 linear 253:3 32768
vg-fast_cmeta: 0 32768 linear 253:3 0
vg-main_corig: 0 262144 linear 7:0 2048
vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
vg-fast: 0 131072 linear 7:1 2048

$ lvchange -an vg/main

$ lvconvert --splitcache vg/main

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  fast vg -wi-------  64.00m linear /dev/loop1(0)
  main vg -wi------- 128.00m linear /dev/loop0(0)
2018-11-06 13:44:54 -06:00
David Teigland
85b4b2f924 cache: clean up segment line creation 2018-11-06 11:36:28 -06:00
Zdenek Kabelac
aa8b2d6a0f cleanup: move cast to dev_t into MKDEV macro 2018-11-05 17:25:11 +01:00
Zdenek Kabelac
fdd76da33d cov: drop unneeded header files 2018-10-15 17:49:44 +02:00
Zdenek Kabelac
cbbdace006 cov: dm node message fix missing initialization
In 2 theoretical error paths, the 'r' value was not set to a
proper value before possible use in the error path.
2018-10-15 17:49:44 +02:00
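A minimal sketch of the fix pattern (the function shape is illustrative):

    static int _node_message_status(int fd)
    {
            int r = 0;      /* initialize: error paths must not return garbage */

            if (fd < 0)
                    goto out;       /* previously left 'r' unset on this path */

            r = 1;
    out:
            return r;
    }
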
Zdenek Kabelac
d8a41f22e9 device_mapper: basic support for vdo dm target 2018-07-09 15:28:35 +02:00
Zdenek Kabelac
c1a6b10d09 device_mapper: relocate code for sending messages
To be able to send messages for recently resumed devices,
move the code into the inner loop.
2018-07-02 10:25:35 +02:00
Zdenek Kabelac
d56e400d44 device_mapper: deactivate new nodes when load fails
When node loading fails, there is not much the caller can do,
since an 'unknown' set of devices has been preloaded.

Only the suspend during preload knows the future precommitted
'metadata', so it's non-trivial to drop 'preloaded' entries with
any later call.

However, the dm tree tracks newly loaded entries - so in this
case it can simplify the recovery path by dropping the preloaded
entries so they are not leaked in the DM table.
2018-07-02 10:25:35 +02:00
Zdenek Kabelac
4194fc9bbd device_mapper: add new _dm_task_create_device_status
Introduce the new function _dm_task_create_device_status for
grabbing the status of a device, for better code sharing.
2018-06-25 15:07:55 +02:00
Zdenek Kabelac
739a213d2e device_mapper: split code for sending message
Move message sending from _thin_pool_node_message to the
new _node_message for better code sharing.
2018-06-25 15:07:55 +02:00
Zdenek Kabelac
a1c81c009a device_mapper: split _node_send_message
For better code reuse, split _node_send_messages into a common
messaging part and a separate _thin_pool_node_send_messages.

The patch makes it possible to better reuse the common code for
messaging other targets.
2018-06-25 15:07:55 +02:00
Joe Thornber
d5da55ed85 device_mapper: remove dbg_malloc.
I wrote dbg_malloc before we had valgrind.  These days there's just
no need.
2018-06-08 13:40:53 +01:00
Joe Thornber
ccc35e2647 device-mapper: Fork libdm internally.
The device-mapper directory now holds a copy of libdm source.  At
the moment this code is identical to libdm.  Over time code will
migrate out to appropriate places (see doc/refactoring.txt).

The libdm directory still exists, and contains the source for the
libdevmapper shared library, which we will continue to ship (though
not necessarily update).

All code using libdm should now use the version in device-mapper.
2018-05-16 13:00:50 +01:00