mirror of git://sourceware.org/git/lvm2.git synced 2025-08-29 01:49:28 +03:00
Commit Graph

4382 Commits

Author SHA1 Message Date
6eb9eba59b bcache: support longer writes
When a larger write request was initiated, bcache could run
out of free chunks - fix the loop that is supposed to wait
until the next free chunk becomes available.
2020-06-24 15:01:03 +02:00
211eaa284c WHATS_NEW: integrity with raid 2020-04-15 12:10:39 -05:00
e10f20bc23 WHATS_NEW: update 2020-04-08 15:37:24 +02:00
06cbe3cfc6 post-release 2020-03-26 12:22:09 +01:00
e1c2b41265 pre-release 2020-03-26 12:21:16 +01:00
9532bb577a tests: validate vdo slab_size
The new vdoformat can print this size - so check that we pass a proper bit count
matching the preset value.
2020-02-26 13:29:21 +01:00
892a182975 cachevol: stop dm errors with uncaching cache with cachevol
Fix the annoying kernel message reported:
device-mapper: cache: 253:2: metadata operation 'dm_cache_commit' failed: error = -5
which was reported while a cachevol was being removed.
It happened via a confusingly named variable - so switch to the commonly
used '_size', which holds a value in sector units, and avoid 'scaling' it as
an extent length by the VG extent size when placing the 'error' target on the removal path.

The patch shouldn't impact actual user data, since at this point
of removal all data should already have been flushed to the origin device.
2020-02-11 17:19:57 +01:00
25b97e522d post-release 2020-02-11 10:53:01 +01:00
b9752d719c pre-release 2020-02-11 10:51:57 +01:00
9255c7148a WHATS_NEW: update 2020-02-04 17:22:06 +01:00
7404216241 WHATS_NEW: update 2020-01-23 10:32:15 +01:00
151bf52649 WHATS_NEW: update 2020-01-13 17:42:53 +01:00
dad2660a38 WHATS_NEW: VDO lvmdbusd adds 2020-01-09 13:11:41 -06:00
91f91b80f1 post-release 2019-11-30 14:46:56 +01:00
3d7f755674 pre-release 2019-11-30 14:45:51 +01:00
29db9c6325 lvcreate: ensure striped raid region size is at least stripe size
The kernel MD runtime requires the region size to be larger than the stripe size
on striped raid layouts, thus the dm-raid target's constructor rejects
such requests.

This causes e.g. an 'lvcreate --type raid10 -i3 -I4096 -R2048 -n lv vg' to fail.

Avoid failing late in the kernel by enforcing the region size to be
larger than or equal to the stripe size.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1698225
2019-11-26 22:31:58 +01:00
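A usage sketch of the enforced behavior (assuming an existing VG named 'vg'; sizes use KiB units):

  # Region size 2048k below the 4096k stripe size used to fail late
  # in the dm-raid constructor:
  lvcreate --type raid10 -i3 -I4096 -R2048 -n lv vg
  # lvm2 now enforces region size >= stripe size up front instead of
  # letting the kernel reject the table.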
9cad26be32 WHATS_NEW: update 2019-11-11 22:44:25 +01:00
569e328cc0 WHATS_NEW: update 2019-10-31 15:43:02 +01:00
8b3cf53e24 Experimental VDO lvmdbusd support 2019-10-30 10:55:06 -05:00
6163b733e1 WHATS_NEW 2019-10-26 00:50:23 +02:00
c8b01f33a6 post-release 2019-10-23 09:51:55 +02:00
b9391b1b9f pre-release 2019-10-23 09:51:55 +02:00
3e01ff2783 dm: fix compilation of dmsetup
Fix: 889c88e9da
Use the correct enum DM_DEVICE_GET_TARGET_VERSION.
2019-10-22 13:39:45 +02:00
dd7629ea09 cache: use _cpool for used cache-pools
When an LV gets cached using a cache-pool, the cache-pool
will now get the _cpool suffix automatically.

Thus the 'Pool' column for a cached LV will now show either a _cvol
or a _cpool LV.
2019-10-21 15:31:33 +02:00
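A usage sketch (vg/main and fast are illustrative names; the conversion syntax follows lvmcache(7)):

  # Cache vg/main using a cache-pool built from the fast LV:
  lvconvert --type cache --cachepool fast vg/main
  # The Pool column now shows the internal LV with its suffix:
  lvs -a -o name,pool_lv vg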
2a08d6d1d4 cachevol: use CVOL UUID for cdata and cmeta layered devices
Since the code is using -cdata and -cmeta UUID suffixes, it does not need
any new 'extra' ID to be generated and stored in the metadata.

Since the introduction of the new 'segtype' cache+CACHE_USES_CACHEVOL, we can
safely assume a 'new' cache with a cachevol will now be created
without the extra metadata_id and data_id in the metadata.

For backward compatibility, the code still reads them in case an older
version of the metadata has them - so it should still be able
to activate such volumes.

A bonus is the lowered size of the lv structure used to store info about an LV
(noticeable with big volume groups).
2019-10-17 13:03:49 +02:00
2825ad9dd2 cachevol: improve manipulation with dm tree
Enhance activation of cached devices using a cachevol.
Correctly instantiate cachevol -cdata & -cmeta devices with
'-' in the name (as they are only layered devices).
The code is also a bit more compact (although still not ideal,
as the usage of extra UUIDs stored in the metadata is troublesome
and will be repaired later).

NOTE: this patch may bring a potentially minor incompatibility for 'runtime' upgrades.
2019-10-14 15:17:50 +02:00
a454a1b4ea cachevol: put _cvol as protected suffix.
This reverts commit 5191057d9d,
"drop cvol dm uuid suffix for cachevol LVs".
Start using -cvol for the DM UUID.
2019-10-14 15:16:05 +02:00
8a8e6ebba2 cachevol: rename converted LV to _cvol
When converting an existing public LV to an internally used
'CacheVol' LV, rename the LV to LV_cvol.

When splitting the CacheVol, remove the _cvol suffix.
2019-10-14 15:15:12 +02:00
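A usage sketch of the renaming (names are illustrative; syntax per lvmcache(7)):

  # Attaching the fast LV as a cachevol renames it to fast_cvol:
  lvconvert --type cache --cachevol fast vg/main
  # Splitting the cache drops the _cvol suffix again:
  lvconvert --splitcache vg/main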
f6d171ffe3 cachevol: wipe 'normal' device
For wiping we activate and clear 'regular' devices, so that
in case the whole process is interrupted (e.g. kill -9)
we leave the metadata & DM table in a workable state at all times.
2019-10-14 15:14:46 +02:00
36944e1009 cache: reload only when switched to cleaner policy
Reload the cache target only when lvm2 reloads the table of a
cache with the cleaner policy.
2019-10-14 15:14:22 +02:00
76a9a86fd3 lvconvert: fix return value when zeroing fails
Use the correct error return code on the failure path.
2019-10-14 15:13:33 +02:00
414f903cdc WHATS_NEW: update 2019-10-04 17:31:55 +02:00
6612d8dd5e vdo: enhance activation with layer -vpool
Enhance the 'activation' experience for a VDO pool to more closely match
what happens for thin-pools, where we use a 'fake' LV to keep the pool
running even when no thinLVs are active. This gives the user a choice
whether they want to keep the thin-pool running (without a possibly lengthy
activation/deactivation process).

As we plan to support multiple VDO LVs to be mapped into a single VDO,
we want to give the user the same experience and 'use-pattern' as with thin-pools.

This patch gives the option to activate only the VDO pool without activating
the VDO LV.

Also, due to the 'fake' layering LV, we can protect usage of the VDO pool from
commands like 'mkfs' which require exclusive access to the volume,
which is no longer possible.

Note: the VDO pool contains 1024 initial sectors as an 'empty' header - such
a header is also exposed in the layered LV (as a read-only LV).
For blkid we are identified as an LV with a UUID suffix - thus a private DM
device of lvm2 - so we do not need to store any extra info in this
header space (aka zero is good enough).
2019-09-17 13:17:19 +02:00
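A sketch of the new activation choice, assuming a VG 'vg' with a VDO pool 'vpool' and a VDO LV 'vlv' (names are illustrative):

  # Activate only the VDO pool, leaving the VDO LV inactive:
  lvchange -ay vg/vpool
  # The VDO LV can then be activated separately when needed:
  lvchange -ay vg/vlv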
66f69e766e thin: activate layer pool as read-only LV
When lvm2 activates a layered pool LV (basically to keep the pool open;
the other function used to be 'locking' to stay in sync with the DM table),
use this LV in read-only mode - this prevents 'write' access into
the data volume content of the thin-pool.

Note: an EMPTY/unused thin-pool is created as a 'public LV' for generic
use by any user who e.g. wishes to maintain the thin-pool and thins themselves.
At that moment, the thin-pool appears as a writable LV.  As soon as the 1st
thinLV is created, the layer volume will appear as a 'read-only' LV from that moment on.
2019-09-17 13:16:50 +02:00
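To illustrate the visible effect (a sketch; names and sizes are arbitrary, syntax per lvmthin(7)):

  # An empty thin-pool is created as a public, writable LV:
  lvcreate -L1G --thinpool tp vg
  # Creating the first thinLV switches the layered pool LV to read-only:
  lvcreate -n thin1 -V100M --thinpool tp vg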
693215716b devices: crypto skip
Devices with UUID signature CRYPT-SUBDEV are internal crypto devices.
2019-09-17 13:15:22 +02:00
7612c21f55 lvconvert: improve validation of thin and cache pool conversion
Limit convertible LVs to thin-pools and cache-pools.
Also fix the return code on an internal error path to return ECMD_FAILED.
2019-09-17 13:13:49 +02:00
c98e34e4d0 cache: improve vgremove loop
Support internal removal of a 'cache origin' volume - which we
do not normally expose to a user - however internal processing
loops may hit this condition (depending on the order of listed LVs).

So when this operation is internally requested, we automatically
try to remove its 'holding' LV (the cache LV), which will also
remove the origin.
2019-08-26 15:32:12 +02:00
30a98e4d67 activation: add synchronization point
Resuming an 'error' table entry followed by its direct removal
is now troublesome with the latest udev, as it may skip processing of
udev rules for already 'dropped' device nodes.

As we cannot 'synchronize' with udev while we know we have devices
in a suspended state, rework 'cleanup' so it collects nodes
for removal into a pending_delete list and processes the list with
synchronization once there are no suspended nodes left.
2019-08-20 12:46:11 +02:00
0bdd6d6240 pvmove: add missing synchronization
Between 'resume' and 'remove' we need to wait for udev to synchronize,
otherwise udev may 'skip' resume event processing if the udev node
is already gone.
2019-08-20 12:44:39 +02:00
0451225c19 pvmove: correcting read_ahead setting
When pvmove is finished, we perform a tricky operation, since we try to
resume multiple different devices that were all joined into one big tree.

Currently we use the information from the existing live DM table,
where we can get the list of all holders of the pvmove device.
We look for these nodes (by uuid) in the new metadata, and we now do a full
regular device add into the dm tree structure.  All devices should
already be PRELOADed with the correct table before entering the suspend state;
however, for correctly working readahead we need to put the correct info
also into the RESUME tree.  Since the tables are preloaded, the same table
is skipped on resume, but the correct read ahead is now set.
2019-08-20 12:37:32 +02:00
125f27ac37 udev: remove unsupported OPTIONS+="event_timeout" rule
The OPTIONS+="event_timeout" rule has been unsupported since systemd/udev
version 216, i.e. for ~5 years.

Since systemd/udev version 243, a new message is printed if an unsupported
OPTIONS value is used:

  Invalid value for OPTIONS key, ignoring: 'event_timeout=180'

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1740666
2019-08-13 15:18:30 +02:00
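To check whether local rules still carry the unsupported option (usual rule locations assumed):

  grep -rn 'event_timeout' /etc/udev/rules.d /usr/lib/udev/rules.d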
3a0d493d91 WHATS_NEW: vgcreate/vgextend logical block size 2019-08-01 10:15:27 -05:00
c1996c78c1 WHATS_NEW: fix large physical block size 2019-07-30 16:14:28 -05:00
1d1741b23a post-release 2019-06-15 09:23:03 +02:00
60bd9e8406 pre-release 2019-06-15 09:21:47 +02:00
a4dbbefaff WHATS_NEW for recent changes 2019-06-13 17:44:14 -05:00
dbc5543cbb post-release 2019-06-10 17:04:30 +02:00
f1b4aeba66 pre-release 2019-06-10 16:59:49 +02:00
4d11bf8d50 post-release 2019-06-07 17:24:51 +02:00
cb6277aa8a pre-release 2019-06-07 17:24:51 +02:00