mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

3156 Commits

Author SHA1 Message Date
Zdenek Kabelac
f96b455506 pool: limit pmspare to 16GiB
There is not much point in allocating more than this size,
even when e.g. a converted LV is bigger than 16GiB (%extent_size).
ATM neither thin-pool nor cache-pool supports bigger metadata.
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pool used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However the real limit later settled at 15.88GiB
(the difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size, as it has been using hard cropping
of the loaded metadata device to avoid kernel warnings
when the size was bigger (e.g. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with metadata of these sizes are affected.

Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB.
Any space beyond that is currently not used by the thin-pool target,
even if e.g. a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too-large extent size.

With cropping enabled (=1) lvm2 preserves the old 15.81GiB
limitation and should allow working in environments with
older lvm2 tools (e.g. older distributions).

Thin-pool metadata with a size bigger than 15.81G now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer
from various issues between thin_repair results and the allocated
metadata LV (the thin_repair limit is 15.88GiB).
Users should use cropping only when really needed!

The patch also better handles resize of thin-pool metadata and prevents resize
beyond the usable size of 15.88GiB. Resize beyond 15.81GiB automatically
switches the pool to the no-crop version. Even with existing bigger thin-pool
metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes the change.

The patch gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.

Patch set also moves the code for updating min/max into pool_manip.c
for better sharing with cache_pool code.
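
A minimal lvm.conf sketch of the new setting described above (0 is the default,
1 restores the old cropping behaviour):

  # lvm.conf (sketch based on this commit's description)
  allocation {
          thin_pool_crop_metadata = 0
  }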
2021-02-01 12:06:13 +01:00
David Teigland
a690d16d29 writecache: use cleaner message instead of table reload
When detaching writecache, make the first stage send a message
to dm-writecache to set the cleaner option.  This is instead of
reloading the dm table with the cleaner option set.  Reloading
the table causes udev to process/probe the dm dev, which gets
stalled because of the writeback activity, and the stalled udev
in turn stalls the lvconvert command when it tries to sync with
udev events.

When getting writecache status we do not need to get
open_count or read_head info, which can cause extra steps.
2021-01-28 15:14:25 -06:00
Heinz Mauelshagen
f08ef23856 lvdisplay: enhance LV status output for raid(0)
In case legs of a raid0 LV are removed, the lvdisplay command still
reports 'available', though raid0 does not provide any resilience
compared to the other raid levels.

Also lvdisplay does not display '(partial)' in case of missing raid0
legs, as opposed to the lvs command.

Enhance lvdisplay to report "NOT available" for any RaidLV type in case
too many legs are inaccessible, hence causing data loss: i.e. any leg
for raid0, all for raid1, more than 1 for raid4/5, more than 2 for raid6,
and completely lost mirror groups for raid10.

Add test/shell/lvdisplay-raid.sh.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1872678
2021-01-27 16:56:22 +01:00
Zdenek Kabelac
8532b1ca97 vdo: support online rename
The new VDO target v6.2.3 corrects support for online rename of a VDO device.
If needed, it can be disabled via the new lvm.conf setting:

vdo_disabled_features = [ "online_rename" ]
2021-01-22 15:30:37 +01:00
Zdenek Kabelac
4b8e5ad595 pools: fix removal of spare volume
When removing a pool LV from a stacked LV setup, it has been possible
to leak _pmspare, and such a hidden LV then required manual
removal by the user.

Fix it by moving automatic removal into _lv_reduce().
2021-01-22 15:30:37 +01:00
David Teigland
0534723a2d integrity: fix segfault on error path when replacing images
When adding replacement raid+integrity images (lvconvert --repair
after a raid image is lost), various errors can cause the function
to exit with an error.  On this exit path, the function attempts
to revert new images that had been created but not yet used.  The
cleanup failed to account for the fact that not all images needed
to be reverted.
2021-01-13 13:39:33 -06:00
Zdenek Kabelac
0b6ee6a912 alloc: enhance estimation of sufficient_pes_free
Since commit 77fdc17d70 the log_len size is always included
in the needed extents - however we may now sometimes need
more extents than necessary - mainly when multiple PVs are involved
in the allocation.

Add logs_still_needed into calculation of sufficient_pes_free()
2021-01-13 12:54:45 +01:00
David Teigland
b84a9927b7 partial flag for writecache and integrity
When a writecache sublv or an integrity metadata sublv
is partial (missing a dev), set the partial flag on
the upper level LV also, as is done for other sublvs.
2020-12-11 16:25:25 -06:00
David Teigland
9fe7aba251 cache: activation cache_check on cachevol
When using cache with a cachevol, the cache_check tool was
not being run on the cache metadata during activation.
cache_check clears the needs_check flag in the cache
metadata, so if the flag was set due to an unclean
shutdown, the activation would fail.
2020-12-09 17:36:09 -06:00
David Teigland
5fef89361d integrity: display total mismatches at raid LV level
Each integrity image in a raid LV reports its own number
of integrity mismatches, e.g.

lvs -o integritymismatches vg/lv_rimage_0
lvs -o integritymismatches vg/lv_rimage_1

In addition to this, allow the total number of integrity
mismatches from all images to be displayed for the raid LV.

lvs -o integritymismatches vg/lv

shows the number of mismatches from both lv_rimage_0 and
lv_rimage_1.
2020-11-11 15:10:15 -06:00
Zdenek Kabelac
7bafae48bb gcc: cleanup warns from older gcc 2020-10-26 13:06:53 +01:00
Zdenek Kabelac
9740e98cbd lv_manip: add space into message
Just add space between %s(.
2020-10-24 01:42:16 +02:00
David Teigland
6226512ad2 get dev size when setting pv device
In some cases the dev size may not have been read yet
in set_pv_devices().  In this case get the dev size
before comparing the dev size with the pv size.
2020-10-22 13:19:17 -05:00
Zdenek Kabelac
b75c2dfe1b debug: shorten error message
Just check for sigint during log_error().
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
e7fff97b8d wipe_lv: use BLKZEROOUT when possible
Since the BLKZEROOUT ioctl is supposedly the fastest
way to clear a block device, start using this ioctl
for zeroing a device. Commonly we zero only a typically
small portion of a device (8KiB) - however since we now
also started to zero metadata devices, in the case
of e.g. thin-pool metadata this can go up to ~16GiB
and here the performance starts to be noticeable.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
c65d3a6b8a wipe_lv: interruptible wiping
Since we now block signals and wiping may take an unexpectedly long
time, support breaking the command while the wipe is in progress.
2020-10-02 21:03:19 +02:00
Zdenek Kabelac
7396f1cfee wipe_lv: drop label_scan_invalidate on error path
Since dev_set_bytes() now closes the dev on its error path itself,
remove this unneeded call (introduced a few commits back
in history, thus also removing the comment from WHATS_NEW).
2020-10-02 21:02:04 +02:00
David Teigland
c32d7fed4f writecache: use two step detach
When detaching a writecache, use the cleaner setting
by default to writeback data prior to suspending the
lv to detach the writecache.  This avoids potentially
blocking for a long period with the device suspended.

Detaching a writecache first sets the cleaner option, waits
for a short period of time (less than a second), and checks
if the writecache has quickly become clean.  If so, the
writecache is detached immediately.  This optimizes the case
where little writeback is needed.

If the writecache does not quickly become clean, then the
detach command leaves the writecache attached with the
cleaner option set.  This leaves the LV in the same state
as if the user had set the cleaner option directly with
lvchange --cachesettings cleaner=1 LV.

After leaving the LV with the cleaner option set, the
detach command will wait and watch the writeback progress,
and will finally detach the writecache when the writeback
is finished.  The detach command does not need to wait
during the writeback phase, and can be canceled, in which
case the LV will remain with the writecache attached and
the cleaner option set.  When the user runs the detach
command again it will complete the detach.

To detach a writecache directly, without using the cleaner
step (which has been the approach previously), add the
option --cachesettings cleaner=0 to the detach command.
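
A hedged sketch of the two invocations, assuming the detach is done with
lvconvert --splitcache (LV names are placeholders):

  # default two-step detach: set cleaner, watch writeback, then detach
  $ lvconvert --splitcache vg/main

  # direct detach without the cleaner step (the previous behaviour)
  $ lvconvert --splitcache --cachesettings cleaner=0 vg/main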
2020-10-01 11:33:02 -05:00
David Teigland
2272a32e6f lvmlockd vdo: add support
lvmlockd handling for vdo lv and vdo pool is like
thin lv and thin pool.
2020-09-29 14:43:27 -05:00
Zdenek Kabelac
bd0d4de4e2 active: fix compilation without devmapper
Better support for compilation without device-mapper.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
4de6f58085 thin: use lv_status_thin and lv_status_thin_pool
Introduce structures lv_status_thin_pool and
lv_status_thin  (pair to lv_status_cache, lv_status_vdo)

Convert lv_thin_percent() -> lv_thin_status()
and  lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
lv_thin_pool_status().

This way a function user can see not only percentages, but also
other important status info about thin-pool.

TODO:
This patch tries not to change too many other things,
but pool_below_threshold() now uses the new thin-pool info to return
failure if the thin-pool cannot actually be modified.
This should be handled separately in a better way.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
92c0e8c17f writecache: archive before modification of metadata
Archive before we start to modify metadata.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
08e838f488 cleanup: avoid unneeded check
Since creation of a thin snapshot already makes sure
the message list is empty, there is no need to check
this again.
2020-09-29 10:43:56 +02:00
Heinz Mauelshagen
8952dcbff0 Revert "lvconvert: display warning if raid1 LV image count does not change"
This reverts superfluous commit 3c9177fdc0 as
_lv_raid_change_image_count() already checks for non-changed image count.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1872130
2020-09-28 17:14:03 +02:00
Zdenek Kabelac
e414ebef6e thin: pass through whole code
Instead of early 'return 0' let the whole code finish
in case of an error with syncing.
2020-09-25 22:59:35 +02:00
Zdenek Kabelac
ef59c83f2d thin: enhance lvcreate error paths
Improve error response and reporting when creating thin snapshots.
If the thin-pool kernel metadata already has a device with the ID lvm2
tries to create, give a more meaningful error message and also properly
restore the transaction id to the value known to the thin-pool in this case.

Before, it was possible to diverge by one from the kernel TID value,
and lvm2 stacked a delete message for such a thin device.
2020-09-25 22:56:40 +02:00
Zdenek Kabelac
e2eb1dc501 thin: no delete message for device_id 0
Since we always use device_id > 0, we could use
device_id == 0 to actually mark thinLV as an
LV we want to remove without delete message.
2020-09-25 22:54:07 +02:00
Zdenek Kabelac
7c19186271 vdo: disable support for online rename of vdopool LV
Since ATM the kernel does not support this operation,
disable 'lvrename' of an active vdopool.

As a workaround, the user may simply deactivate, rename and activate.
2020-09-23 13:18:23 +02:00
Zdenek Kabelac
3a3307c0d8 vdo: enhance vdo pool extension
When a user tries to extend a vdo pool, it always needs to grow
by at least 1 full VDO slab (defined as vdo_slab_size_mb).

To avoid all the trouble of finding a 'workable' size, lvm2 automatically
increases the passed (or --use-policies calculated) extension size
(and informs the user about the sometimes possibly large increase, as the
slab size can go up to 32GiB).

With VDO, users need to always 'think big' anyway and expect such
an operation to be in the GiB range.
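
A hedged usage sketch (names are placeholders); lvm2 rounds the requested
increase up to a whole slab multiple:

  # ask for +1G; lvm2 rounds the extension up to the next whole slab
  $ lvextend -L+1G vg/vpool0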
2020-09-22 23:28:43 +02:00
Zdenek Kabelac
f38b7afd62 vdo: extend vdo segment validation
Try to catch all suspicious VDO segments in metadata early.
2020-09-22 23:25:16 +02:00
Zdenek Kabelac
642ef54399 vdo: correct message about policy extend support
Policy extend is already supported for vdo pools as well,
so correct the error message.
2020-09-22 23:25:16 +02:00
Zdenek Kabelac
5bc66532c7 activation: use revert_lv on tree suspend failure
When the table reload fails during suspend() we were only calling
plain resume() - and this reloads only those devices
which were left suspended, but does not try to restore
the metadata state according to the lvm2 reverted metadata.
So if we were reloading a device tree, we restored
only the top-level LV and the rest of the reverted device manipulation
was left alone, possibly mismatching what is in the committed
metadata.

FIXME: There are several cases where such a revert will likely not work
properly anyway, as some operations are currently handled in a single commit
while they need multiple commits, but it's a step towards better correctness.
At least we now catch these errors earlier.
2020-09-22 21:02:14 +02:00
David Teigland
1404e5ee61 metadata: open rw fd before closing ro fd
lvm opens devices readonly to scan them, but
needs to open them readwrite to update the metadata.
Previously, the ro fd was closed before the rw fd
was opened, leaving a small gap where the dev was
not held open, and during which the dev could
possibly change which storage it referred to.

With the bcache_change_fd() interface, lvm opens a
rw fd on a device to be written, tells bcache to
change to the new rw fd, and closes the ro fd.

. open dev ro
. read dev with the ro fd (label_scan)
. lock vg (ex for writing)
. open dev rw
. close ro fd
. rescan dev to check if the metadata changed
  between the scan and the lock
. if the metadata did change, reread in full
. write the metadata
2020-09-18 15:10:11 -05:00
David Teigland
1570e76233 bcache: use indirection table for fd
Add a "device index" (di) for each device, and use this
in the bcache api to the rest of lvm.  This replaces the
file descriptor (fd) in the api.  The rest of lvm uses
new functions bcache_set_fd(), bcache_clear_fd(), and
bcache_change_fd() to control which fd bcache uses for
io to a particular device.

. lvm opens a dev and gets an fd.
  fd = open(dev);

. lvm passes fd to the bcache layer and gets a di
  to use in the bcache api for the dev.
  di = bcache_set_fd(fd);

. lvm uses bcache functions, passing di for the dev.
  bcache_write_bytes(di, ...), etc.

. bcache translates di to fd to do io.

. lvm closes the device and clears the di/fd bcache state.
  close(fd);
  bcache_clear_fd(di);

In the bcache layer, a di-to-fd translation table
(int *_fd_table) is added.  When bcache needs to
perform io on a di, it uses _fd_table[di].

In the following commit, lvm will make use of the new
bcache_change_fd() function to change the fd that
bcache uses for the dev, without dropping cached blocks.
2020-09-18 15:10:11 -05:00
Zdenek Kabelac
2b36542f41 wipe: dev_set_bytes resolves zeroing
Since dev_write_zeros() is just a subset of dev_set_bytes(),
use it directly and simplify the code.
2020-09-15 23:07:06 +02:00
Zdenek Kabelac
d588de77aa wipe: convert zero_value to uint8_t
We always write this value as byte.
2020-09-15 22:52:25 +02:00
Zdenek Kabelac
ec4e8b5c0e wipe: zeroing of 8 sectors is granted
With do_zero min is always 8 sectors, so use 0 as default.
2020-09-15 22:52:25 +02:00
Zdenek Kabelac
187cc8d344 lvcreate: change error message
Provide more useful error message.
2020-09-15 22:52:25 +02:00
Zdenek Kabelac
39198eb2ce lvcreate: add extra synchronization at error path
Put explicit udev synchronization before we try to deactivate devices.
2020-09-15 22:52:25 +02:00
Zdenek Kabelac
b2978efbff cache: simplier signal handling
Use just a single sigint_allow()/restore() within the flushing loop
and avoid one extra signal manipulation.
2020-09-14 00:15:14 +02:00
Zdenek Kabelac
77fdc17d70 alloc: improve estimation of sufficient_pes_free
Metadata size was calculated correctly only for raids.

Fixes a crash during lvcreate when a thin-pool was created
on a VG where the remaining free space was only large enough to fit a single
metadata LV and not also its _pmspare.

Lvcreate crashed with this assert message:

lvcreate: metadata/pv_map.c:198: consume_pv_area: Assertion `to_go <= pva->count' failed.
Aborted (core dumped)

TODO: there is probably too much overloading of several alloc_handle
variables.

Reported-by: Wu Guanghao<wuguanghao3@huawei.com>
Reported-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
2020-09-11 21:51:24 +02:00
Zdenek Kabelac
9f78acfee9 thin: compensate metadata size by extra percent
When using --use-policy for automatic extension of a thin-pool,
the extension of the thin-pool's metadata itself can actually take
some extra space.
Since I'm not aware of an exact compensation formula, add just
1% extra to the calculated amount and hope it fits.

The wanted target is to always have a usable thin-pool that fits
below pool_metadata_min_threshold().
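
For reference, a minimal sketch of the standard autoextend policy settings this
code path reacts to (not introduced by this commit; values are illustrative):

  # lvm.conf (illustrative values)
  activation {
          thin_pool_autoextend_threshold = 70
          thin_pool_autoextend_percent = 20
  }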
2020-09-11 21:42:37 +02:00
Zdenek Kabelac
b798554a20 lv_manip: even better rounding 2020-09-11 13:37:04 +02:00
Zdenek Kabelac
678951f635 cleanup: comment typo 2020-09-10 23:55:03 +02:00
Zdenek Kabelac
e7bd3ba22d debug: drop debug trace from regular path
Since on regular code paths we query these:
  lv_raid_has_integrity()
  lv_has_integrity_recalculate_metadata()
without prior checking for lv_is_raid(), these 'return 0' cases should
not use <stacktrace> as they are expected.
2020-09-10 23:55:03 +02:00
Zdenek Kabelac
bc09803628 lv_manip: relocate check to proper function 2020-09-10 23:54:33 +02:00
Zdenek Kabelac
e7f5acdfa6 lvextend: improve percentage estimation
Correct the rounding rules for percentage evaluation.

Validate the supported range of the percentage
(although ranges are already validated earlier on the code path).
2020-09-10 23:54:31 +02:00
Zdenek Kabelac
3e6bb77228 lv_manip: add synchronization points 2020-09-08 21:23:03 +02:00
David Teigland
d1019a6434 integrity: improve lv type checks 2020-09-02 12:40:45 -05:00
David Teigland
9a7b81fb72 integrity: fix segfault for lv with no seg
in lv_raid_has_integrity
2020-09-02 09:15:58 -05:00
David Teigland
ed249a2c53 integrity: report mismatches
with lvs -o integritymismatches

reported for integrity images, which may report
different values
2020-09-01 17:13:21 -05:00
David Teigland
f2c1de783c integrity: always default to journal mode
lvconvert was defaulting to bitmap mode,
and lvcreate was defaulting to journal mode.
2020-09-01 17:12:28 -05:00
Zdenek Kabelac
672d5ad98b gcc: hide warn about possible uninitialized use of dev_ret
Older gcc reports this false-positive problem.
2020-09-01 23:40:24 +02:00
Zdenek Kabelac
56c41b7522 cov: avoid duplicated assign 2020-09-01 17:57:50 +02:00
Zdenek Kabelac
fd96f1014b gcc: zero-sized array to fexlible array C99
Switch the remaining zero-sized struct arrays to flexible arrays to be C99
compliant.

These simple rules should apply:

- The incomplete array type must be the last element within the structure.
- There cannot be an array of structures that contain a flexible array member.
- Structures that contain a flexible array member cannot be used as a member of another structure.
- The structure must contain at least one named member in addition to the flexible array member.

Although some of the code pieces should still be improved.
2020-09-01 17:57:50 +02:00
Zdenek Kabelac
b722ce2f10 gcc: drop bogus ; 2020-08-28 21:43:03 +02:00
Zdenek Kabelac
ee0cb17608 gcc: use apropriate type for reading and printing values 2020-08-28 21:43:03 +02:00
Zdenek Kabelac
ff4827ffb1 lv_manip: get_default_region_size return uint32_t 2020-08-28 21:43:02 +02:00
Zdenek Kabelac
03f9cd95b4 writecache: correct usage of const struct 2020-08-28 21:43:02 +02:00
David Teigland
9a88a9c4ce Revert "lvdisplay: dispaly correct status when underlying devs missing"
This reverts commit 1d0dc74f91.

We should avoid adding anything new to lvdisplay and report
new information via lvs reporting fields.
2020-08-28 13:28:15 -05:00
Zhao Heming
1d0dc74f91 lvdisplay: dispaly correct status when underlying devs missing
Reproducible steps:
1. vgcreate vg1 /dev/sda /dev/sdb
2. lvcreate --type raid0 -l 100%FREE -n raid0lv vg1
3. remove /dev/sdb
4. lvdisplay shows the wrong 'LV Status'

After removing a raid0-type LV's underlying dev, lvdisplay still displays
'available'. This is the wrong status for raid0.

This patch adds a new function raid_is_available(), which will handle
all raid cases.

With this patch, lvdisplay will change
from:
  LV Status              available
to:
  LV Status              NOT available (partial)

Reviewed-by: Enzo Matsumiya <ematsumiya@suse.com>
Signed-off-by: Zhao Heming <heming.zhao@suse.com>
2020-08-24 09:47:04 -05:00
Zdenek Kabelac
46d15b5e4d wipe_lv: close devices on error path
The device was kept open, preventing it from being deactivated and removed
on the error path.
2020-08-19 15:09:09 +02:00
Heinz Mauelshagen
3c9177fdc0 lvconvert: display warning if raid1 LV image count does not change
Fix "lvconvert -mN $RaidLV" to display a warning in
case the same number of images is being requested.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1774696
2020-07-20 15:42:15 +02:00
Heinz Mauelshagen
286a793c12 lvconvert: fix conversion to 'mirrored' mirror log with larger regionsize
merge.c:_check_lv_segment() was checking regionsize vs. mirrored LV size on
any 'mirror/raid1/raid10' segment type including type 'mirrored' mirror logs.

Avoid the check only for 'mirrored' mirror logs to allow conversion from log
type 'disk' with regionsize > mirror log SubLV size.

As we disabled support for 'mirrored' mirror logs with
commit e82303fd6a, which still conditionally
allows enabling it via global/support_mirrored_mirror_logs=1,
the patch is mandatory for all distributions.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1712983
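
For reference, a sketch of the lvm.conf override mentioned above (the feature
stays disabled by default):

  # lvm.conf (sketch; 'mirrored' mirror logs remain disabled by default)
  global {
          support_mirrored_mirror_logs = 1
  }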
2020-07-09 14:39:50 +02:00
Zdenek Kabelac
9b9bf8786f raid: no wiping when zeroing raid metadata device
Currently lvm2 does not wipe signatures when creating 'metadata' volumes
and raid _rmeta was the only exception - so make the behavior consistent
with other metadata devices and drop the wiping ATM.
Also drop some extra debug messages, since the messages in the wipe_lv()
function are now more explanatory.
Also note - although lvm2 now does not wipe signatures - the error
from such wiping used to be actually 'ignored' before wipe_lv()
started to return an error (with a recent commit), and raid creation
continued with an 'unzeroed' metadata device.

TODO: Several issues to resolve:

1. We may want to flip to wiping with all LVs (in that case we need to
support passing --yes & --force).

2. We may also want to clear the whole metadata device - however the current
function is also used for wiping e.g. a snapshot COW device which
is likely not a good candidate for full device zeroing.
We may also need to think about better logic when the extent size
enforces very large LVs, while only a small portion of the LV is ever
being used.

3. Using TRIM instead of zeroing the metadata device might be worth
implementing.
2020-07-08 11:40:55 +02:00
Zdenek Kabelac
fe78cd4082 wipe_lv: always zero at least 4K
When zero_sectors passed a value like 1, we could zero only 1 sector.
Reinstate that we always zero at least a 4K block.
2020-07-08 11:12:54 +02:00
David Teigland
ad773511c5 integrity: add initial size to metadata size
The metadata device size needs to include space for
the dm-integrity "initial_sectors" which hold journals.
2020-06-30 16:43:05 -05:00
Zdenek Kabelac
eb06832b37 cov: remove unused header 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid pollution of metadata with some 'garbage' content, or eventually
some leak of stale data in case a user wants to upload metadata somewhere,
ensure the metadata device is fully zeroed upon allocation.

This behaviour may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0

TODO: add zeroing for extension of metadata volume.
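
A minimal lvm.conf sketch of the opt-out mentioned above:

  # lvm.conf (sketch; 1 is the new default, 0 restores the old behaviour)
  allocation {
          zero_metadata = 0
  }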
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
edbc5a62b2 wipe_lv: make error a fatal event
A failure in wiping/zeroing stops the command.
If the user wants to avoid aborting the command, they should use -Zn or -Wn
to avoid wiping.

Note: there is no easy way to distinguish which kind of failure
happened - so it's safest not to proceed any further.
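
A hedged usage sketch of the opt-out (VG/LV names are placeholders):

  # skip zeroing (-Zn) and signature wiping (-Wn) so a wipe failure cannot abort creation
  $ lvcreate -L1G -n lv1 -Zn -Wn vg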
2020-06-24 15:01:03 +02:00
Heinz Mauelshagen
04bba5ea42 lv{resize,extend,reduce}: also check for 2-legged raid4
Users can also convert a 2-legged raid1 to raid4, thus causing a 'Bus error'
on resize requests.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 14:02:31 +02:00
Heinz Mauelshagen
2cf0f90780 lv{resize,extend,reduce}: reject size change on 2-legged raid5*
Reject the size-changing request to avoid a 'Bus error' and
display a hint to convert to more stripes.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 13:52:56 +02:00
David Teigland
2aed2a41f7 lvcreate: new cache or writecache lv with single command
To create a new cache or writecache LV with a single command:

lvcreate --type cache|writecache
    -n Name -L Size --cachedevice PVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, a new cachevol LV is created internally, using PVfast
  specified by the cachedevice option.
- Then, the cachevol is attached to the main LV, converting the
  main LV to type cache|writecache.

Include --cachesize Size to specify the size of cache|writecache
to create from the specified --cachedevice PVs, otherwise the
entire cachedevice PV is used.  The --cachedevice option can be
repeated to create the cache from multiple devices, or the
cachedevice option can contain a tag name specifying a set of PVs
to allocate the cache from.

To create a new cache or writecache LV with a single command
using an existing cachevol LV:

lvcreate --type cache|writecache
    -n Name -L Size --cachevol LVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, the cachevol LVfast is attached to the main LV, converting
  the main LV to type cache|writecache.

In cases where more advanced types (for the main LV or cachevol LV)
are needed, they should be created independently and then combined
with lvconvert.

Example
-------

user creates a new VG with one slow device and one fast device:

$ vgcreate vg /dev/slow1 /dev/fast1

user creates a new 8G main LV on /dev/slow1 that uses all of
/dev/fast1 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1
    -n main -L 8G vg /dev/slow1

Example
-------

user creates a new VG with two slow devs and two fast devs:

$ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

user creates a new 8G main LV on /dev/slow1 and /dev/slow2
that uses all of /dev/fast1 and /dev/fast2 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2
    -n main -L 8G vg /dev/slow1 /dev/slow2

Example
-------

A user has several slow devices and several fast devices in their VG,
the slow devs have tag @slow, the fast devs have tag @fast.

user creates a new 8G main LV on the slow devs with a
2G writecache on the fast devs:

$ lvcreate --type writecache -n main -L 8G
    --cachedevice @fast --cachesize 2G vg @slow
2020-06-16 13:46:51 -05:00
David Teigland
48872b0369 integrity: avoid increasing logical block size of active LV
When adding integrity to an active LV, avoid choosing an
integrity block size that would result in increasing the
logical block size of the LV.
2020-06-16 12:27:22 -05:00
David Teigland
b528a9ce90 integrity: fix block size check when inactive
Checking fs block size requires the LV to be active.
2020-06-11 12:43:52 -05:00
David Teigland
38eaa1035b writecache: allow snapshot of LV with writecache 2020-06-10 12:18:00 -05:00
David Teigland
712c9efbf6 fix bad result from _cache_min_metadata_size
fixes regression from switching to use _cache_min_metadata_size
(commit c08704cee7) which returns
a bogus value when the cachevol size is 8MB.
2020-06-10 12:17:34 -05:00
David Teigland
a7b2fc8f57 writecache: add settings cleaner and max_age
available in dm-writecache 1.2
2020-06-10 12:15:50 -05:00
David Teigland
1ee42f1391 writecache: cachesettings in lvchange and lvs
lvchange --cachesettings
lvs -o+cache_settings
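
A hedged usage sketch (the LV name is a placeholder; 'cleaner' is one of the
supported settings):

  $ lvchange --cachesettings cleaner=1 vg/lv
  $ lvs -o+cache_settings vg/lv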
2020-06-10 12:14:00 -05:00
David Teigland
ce772bfab9 writecache: show error in lv_health_status and lv_attr
lv_attr is 'E' and lv_health_status is 'error'
when dm-writecache status reports error.
2020-06-10 12:13:48 -05:00
David Teigland
240062a183 writecache: remove from an active lv 2020-06-10 12:13:31 -05:00
David Teigland
fa9eb76a5d improve info about vgck updatemetadata
Add man page info about this option, and add log messages
pointing to this option.
2020-06-03 12:38:27 -05:00
David Teigland
d945b53ff7 remove vg_read_error
It once converted results to error numbers but is now just a null check.
2020-04-24 11:14:29 -05:00
David Teigland
d79afd4084 lvmcache: rework handling of VGs with duplicate vgnames
The previous method of managing duplicate vgnames prevented
vgreduce from working if a foreign vg with the same name
existed.
2020-04-21 14:40:34 -05:00
David Teigland
cc4051eec0 pass cmd struct through more functions
no functional change
2020-04-21 10:58:05 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
David Teigland
b6b4ad8e28 move pv_list code into lib 2020-04-13 10:04:14 -05:00
Zdenek Kabelac
d02d7bc560 vdo: fix slab size bits calculation
When formatting a VDO volume, the calculated amount of bits
for the 'vdoformat --slab-bits' parameter was shifted by 2 bits
(the calculated size made a 2MiB vdo_slab_size_mb value appear as if
the user had specified only 512KiB).

Fixed by properly converting the internal size_mb value to KiB.
2020-02-25 17:43:16 +01:00
David Teigland
81d0333067 writecache: allow removing wcorig lv
like removing corig
2020-02-21 12:41:52 -06:00
David Teigland
8153c5f1e6 writecache: working real dm uuid suffix for wcorig lv 2020-02-20 17:13:43 -06:00
Zdenek Kabelac
3716aa848e vdo: fix vdoformat when -V is specified
The previous patch improved reading of the pipe when lvm2 was looking
for the default logical size, but we clearly must read the pipe also
for the -V case, when the logical size is already defined.
2020-02-10 15:41:30 +01:00
Zdenek Kabelac
96985b1373 raid: better place for blocking reshapes
Still, the place can be improved to block only the particular reshape
operations which ATM cause kernel problems.

We check if the new number of images is higher - and prevent taking the
conversion if the volume is in use (e.g. a thin-pool's data LV).
2020-02-07 16:48:48 +01:00
David Teigland
ffea7daec3 writecache: prevent snapshots
there appear to be problems with taking a snapshot
of an LV with a writecache, so block it until that
is understood or fixed.
2020-02-06 11:27:33 -06:00
David Teigland
2a6078f961 writecache: fix splitcache when origin is raid 2020-02-04 16:12:09 -06:00
Zdenek Kabelac
336361b2f2 lv_manip: add extra check for existin origin_lv
clang: it's a supposedly impossible path to hit, as we should always
have origin_lv defined when running this path, but adding protection
isn't a big issue and makes this obvious to the analyzer.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
67f627c8fb raid: add internal error for no segment
clang: capture internal error when data_seg would not be defined.
(invalid LV with no areas)
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
409362c127 lv_manip: add error handling for _reserve_area
Since _reserve_area() may fail due to an allocation failure,
add support to propagate this already reported failure upward.

FIXME: it's a log_error() without causing direct command failure.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
d6ac039b65 cov: widen before calculating min_chunk_size
Although we expect min_chunk_size to be a 32bit value, for
large cache sizes it might be useful to do the calculations in 64bit.
So to avoid doing the shift as signed 32bit, use unsigned 64bit
from the start.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
de43527f94 cov: unused header file removal
cov: unused header removed
Also ensure library header file with config settings goes first.
Move inclusion of format-text.h into layout.h
2020-02-04 17:22:06 +01:00
David Teigland
bddbbcb98c writecache: report status fields
reporting fields (-o) directly from kernel:
writecache_total_blocks
writecache_free_blocks
writecache_writeback_blocks
writecache_error

The data_percent field shows used cache blocks / total cache blocks.
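
A hedged usage sketch combining the fields listed above (the LV name is a
placeholder):

  $ lvs -o+writecache_total_blocks,writecache_free_blocks,writecache_writeback_blocks,writecache_error vg/lv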
2020-01-31 11:52:49 -06:00
Zdenek Kabelac
cf844941d4 vdo: adapt for multi line vdo_format output
Do not close the pipeline after the 1st line is parsed from vdo_format.
Also reprint the output for the user so new messages from vdo_format
can be seen.
2020-01-23 10:32:15 +01:00
Zdenek Kabelac
d7bf7091c3 raid: more limitted prohibition of stacked raid usage
We actually need to prohibit only reshaping cases which are
running over multiple commands.
2020-01-23 10:32:15 +01:00
Zdenek Kabelac
7737ffb11c raid: disallow reshape of stacked LVs
Until we resolve reshape for 'stacked' devices, we need to disable it.
So users can no longer reshape e.g. thin-pool data volumes, which ATM
causes bad thin-pool problems.
2020-01-13 17:42:31 +01:00
David Teigland
2173bdb821 drop warnings about missing pvs in foreign vgs
When a foreign VG is ignored, don't print warnings that
it is missing PVs.
2019-12-11 12:56:15 -06:00
Zdenek Kabelac
89d839e541 clenaup: simpler form 2019-12-10 15:44:16 +01:00
Zdenek Kabelac
abc0a8faba vg_read: use else for 3 case
Make it visible we check for ==, >, <  of same var.
2019-12-10 15:44:16 +01:00
Zdenek Kabelac
5555765cfc debug: enhance messages
Drop 'extra' stack trace where errors are already logged from function.
Add some missing dots in messages.
2019-12-10 15:44:16 +01:00
Nikhil Kshirsagar
e70d5d470c debug: print VG name in log messages for segment errors
Signed-off-by: Nikhil Kshirsagar <nkshirsa@redhat.com>
2019-12-10 15:44:06 +01:00
David Teigland
74ad2cd76f metadata: add vg_from_config_tree
Add cmd/fmt args to import functions so that
they can be used without the fid arg.
2019-11-27 11:13:47 -06:00
David Teigland
98a8099da9 scanning: use bool type for _scan_text_mismatch 2019-11-27 09:26:49 -06:00
David Teigland
0c1316cda8 scanning: optimize by checking text offset and checksum
After the VG lock is taken for vg_read, reread the mda_header
and compare the metadata text offset and checksum to what was
seen during label scan.  If it is unchanged, then the metadata
has not changed since the label scan, and the metadata does not
need to be reread under the lock for command processing.

For commands that do not make changes (e.g. reporting), the
mda_header is reread and checked on one mda to decide if the
full metadata rereading can be skipped.  For other commands
(e.g. modifying the vg) the mda_header is reread and checked
from all PVs.  (These could probably just check one mda also.)
2019-11-26 16:52:28 -06:00
Zdenek Kabelac
33c1d2e921 cov: add explicit ret value ignoring
We don't need to check for any error result codes here.
2019-11-14 18:06:42 +01:00
Zdenek Kabelac
ad0343d8cb cov: remove unused headers 2019-11-14 18:06:42 +01:00
Heming Zhao
13c254fc05 fix dev_unset_last_byte after write error
dev_unset_last_byte() must be called while the fd is still valid.
After a write error, dev_unset_last_byte() must be called before
closing the dev and resetting the fd.

In the write error path, dev_unset_last_byte() was being called
after label_scan_invalidate() which meant that it would not unset
the last_byte values.

After a write error, dev_unset_last_byte() is now called in
dev_write_bytes() before label_scan_invalidate(), instead of by
the caller of dev_write_bytes().

In the common case of a successful write, the sequence is still:
dev_set_last_byte(); dev_write_bytes(); dev_unset_last_byte();

Signed-off-by: Zhao Heming <heming.zhao@suse.com>
2019-11-13 09:36:58 -06:00
Zdenek Kabelac
08f36dd093 lvextend: fix resizing volumes of different segtype
When resizing 2 volumes like a thin-pool and its metadata, and they
are of different types, the command was actually expecting
both LVs to be of the same segtype - and would throw an error in
case they were different.

This patch fixes it by setting the new segtype from the last segment of
the 2nd extended device.

It also fixes the possible 'percentage' extension setup that
might have been used for the 'primary' volume - while the 'secondary'
LV always goes with a direct size - as we do not support a 'percentage'
setup for them.

This mainly affects usage of thin-pool, where the extension of
the thin-pool data size may also lead to extension of the metadata size.
2019-11-11 22:44:25 +01:00
Zdenek Kabelac
8689b4ed82 raid: drop internal error
Fix some internal error reports and debug trace returns
2019-10-31 15:31:30 +01:00
Zdenek Kabelac
3d9fc7d6f3 manip: optimize lvs_using_lv
Instead of checking all LVs in a VG - do just a direct copy of LVs
from the existing list ->segs_using_thin_lv.

TODO: maybe it could be better to expose seg_list to /tools...
2019-10-31 15:31:30 +01:00
Zdenek Kabelac
c21440536d mirror: remove unused code 2019-10-31 15:31:30 +01:00
Zdenek Kabelac
ab315e7a81 mirror: directly activate updated mirror 2019-10-31 15:31:30 +01:00
Zdenek Kabelac
0e5f39a5ac snapshot: use single merging sequence
The resume of the 'released' 'COW' should precede the resume of the origin.
The fact that we needed to do the sequence differently for merge was
caused by bugs fixed in the 2 previous commits - so we no longer need
to recognize 'merging' and we should always go with the single
sequence.

The importance of this order is to properly remove the '-real' device
from the origin LV. When the COW is activated 2nd, the '-real' device is
kept in the table as it cannot be removed during the 1st resume of the origin,
and later activation of the COW LV no longer builds the tree associated
with the origin LV.
2019-10-26 00:49:16 +02:00
David Teigland
6a8bd0c509 lvmlockd: fix cachevol locking
When a cachevol LV is attached, have the LV keep its lock
allocated.  The lock on the cachevol won't be used while
it's attached.  When the cachevol is split a new lock does
not need to be allocated.  (Applies to cachevol usage by
both dm-cache and dm-writecache.)
2019-10-25 14:08:59 -05:00
David Teigland
c08704cee7 cachevol: use cachepool code for metadata size
Based on a more detailed calculation, but because of
extent size rounding, the final result is about the
same.
2019-10-21 12:13:33 -05:00
Zdenek Kabelac
0c01a4c2a6 gcc: avoid warning: declaration of xxx shadows a global declaration
Fix some gcc complaints about shadowing global declarations.
2019-10-21 15:32:35 +02:00
Zdenek Kabelac
dd7629ea09 cache: use _cpool for used cache-pools
When an LV gets cached and uses a cache-pool, such a cache-pool
will now get the _cpool suffix automatically.

Thus the 'Pool' column for a cached LV will now show either a _cvol
or a _cpool LV.
2019-10-21 15:31:33 +02:00
Zdenek Kabelac
2266a1863f lv_manip: add lv_uniq_rename_update
Add a function to rename an LV to either the passed name or, if
the name is already in use, generate a new lvol% name.
2019-10-21 12:14:15 +02:00
Zdenek Kabelac
ec85dfe0f8 cachevol: support removal of cachevol
Removal of a cachevol is equivalent to lvconvert --uncache
and works the same way as with a cachepool.
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
5938cde11b cache: single code for removal of cached volume
Use same routine for dropping cached LV for cachevol and cachepool.
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
9969361b51 debug: missing trace 2019-10-17 13:03:50 +02:00
Zdenek Kabelac
dab4a2c893 cachevol: move flag setting after taking archive
Before 'archive()' is called, lvm2 must not touch/modify metadata.
So move setting  CACHE_VOL related flags past this point.

Also make sure reading of cache segtype always restores this
flag properly (even if compatible flag would be lost).
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
f63e20ebcc cache: drop validation check
Since we can now cache either with a cache-pool LV or
any other LV (used as a cachevol LV), drop the
validation condition.
2019-10-17 13:03:49 +02:00
Zdenek Kabelac
af8cfa90d9 cache: add more comments for min meta size
Enhance the source code with a better explanation of how the minimal
metadata size is evaluated from the data size and chunk size.
2019-10-17 13:03:49 +02:00
Zdenek Kabelac
2a08d6d1d4 cachevol: use CVOL UUID for cdata and cmeta layered devices
Since the code is using -cdata and -cmeta UUID suffixes, it does not need
any new 'extra' ID to be generated and stored in metadata.

Since the introduction of the new 'segtype' cache+CACHE_USES_CACHEVOL we can
safely assume a 'new' cache with cachevol will now be created
without extra metadata_id and data_id in metadata.

For backward compatibility, the code still reads them in case an older
version of metadata has them - so it should still be able
to activate such volumes.

A bonus is the lowered size of the lv structure used to store info about an LV
(noticeable with big volume groups).
2019-10-17 13:03:49 +02:00
David Teigland
81fe045714 cache: change default cachevol metadata sizes
The first part of a cachevol LV is used for metadata,
and the rest of the space is used for data.  The
division of space between metadata and data depends
on the total size of the cachevol.

The previous division gave more space than needed to
metadata, it was:

cachevol size 8M to 128M -> metadata size 16M *
cachevol size 128M to 1G -> metadata size 32M
cachevol size 1G and up  -> metadata size 64M

(* if this resulted in over half the LV used as
metadata, then half the cachevol would be used
for metadata, and the other half for data.)

The division of space now gives less space to
metadata, it is:

cachevol size 8M to 16M  -> metadata size 4M
cachevol size 16M to 4G  -> metadata size 8M
cachevol size 4G to 16G  -> metadata size 16M
cachevol size 16G to 32G -> metadata size 32M
cachevol size 32G and up -> metadata size 64M
2019-10-15 14:36:03 -05:00
David Teigland
0443d00ff1 allow activating known LVs when other LVs have unknown segtypes
When a VG contains some LVs with unknown segtypes, the user
should still be allowed to activate other LVs in the VG that
are understood.

$ lvs foo
  WARNING: Unrecognised flag CACHE_USES_CACHEVOL in segment type cache+CACHE_USES_CACHEVOL.
  WARNING: Unrecognised segment type cache+CACHE_USES_CACHEVOL
  LV    VG  Attr       LSize
  lvol0 foo -wi-------  4.00m
  other foo vwi---u--- 48.00m

$ lvcreate -l1 foo
  WARNING: Unrecognised flag CACHE_USES_CACHEVOL in segment type cache+CACHE_USES_CACHEVOL.
  WARNING: Unrecognised segment type cache+CACHE_USES_CACHEVOL
  Cannot change VG foo with unknown segments in it!
  Cannot process volume group foo

$ lvchange -ay foo/lvol0
  WARNING: Unrecognised flag CACHE_USES_CACHEVOL in segment type cache+CACHE_USES_CACHEVOL.
  WARNING: Unrecognised segment type cache+CACHE_USES_CACHEVOL

$ lvchange -ay foo/other
  WARNING: Unrecognised flag CACHE_USES_CACHEVOL in segment type cache+CACHE_USES_CACHEVOL.
  WARNING: Unrecognised segment type cache+CACHE_USES_CACHEVOL
  Refusing activation of LV foo/other containing an unrecognised segment.

$ lvs foo
  WARNING: Unrecognised flag CACHE_USES_CACHEVOL in segment type cache+CACHE_USES_CACHEVOL.
  WARNING: Unrecognised segment type cache+CACHE_USES_CACHEVOL
  LV    VG  Attr       LSize
  lvol0 foo -wi-a-----  4.00m
  other foo vwi---u--- 48.00m
2019-10-15 14:34:53 -05:00
David Teigland
91ee025d5b cache: change cachevol flags for backward compat
A cachevol LV had the CACHE_VOL status flag in metadata,
and the cache LV using it had no new flag.  This caused
problems if the new metadata was used by an old version
of lvm.  An old version of lvm would have two problems
processing the new metadata:

. The old lvm would return an error when reading the VG
  metadata when it saw the unknown CACHE_VOL status flag.

. The old lvm would return an error when reading the VG
  metadata because it would not find an expected cache pool
  attached to the cache LV (since the cache LV had a
  cachevol attached instead.)

Change the use of flags:

. Change the CACHE_VOL flag to be a COMPATIBLE flag (instead
  of a STATUS flag) so that old versions will not fail when
  they see it.

. When a cache LV is using a cachevol, the cache LV gets
  a new SEGTYPE flag CACHE_USES_CACHEVOL.  This flag is
  appended to the segtype name, so that old lvm versions
  will fail to use the LV because of an unknown segtype,
  as opposed to failing to read the VG.
2019-10-15 09:05:52 -05:00
Zdenek Kabelac
1cd308d640 cachevol: drop no longer needed functions
Code is no longer used/needed.
2019-10-14 15:20:25 +02:00
Zdenek Kabelac
201ffbd04a cachevol: use lv_cache_remove
Use same routine for dropping cache.
2019-10-14 15:20:25 +02:00
Zdenek Kabelac
77deadd3af cachevol: drop LV_CACHE_VOL on detach automatically
Move dropping of cachevol flag into detach function.
TODO: this flag should be internal to lvm2.
2019-10-14 15:15:14 +02:00
Zdenek Kabelac
615e18f5b2 cache: enhance removal function to work with cvol
To keep things simple, use same code for all cache removal functions,
not just for cachepools but also cachevols.
2019-10-14 15:14:25 +02:00
Zdenek Kabelac
6ee83f699b cache: correct condition 2019-10-14 15:14:25 +02:00
Zdenek Kabelac
bc35ccd174 cache: recognize cachevol with lv_cache_remove 2019-10-14 15:14:25 +02:00
Zdenek Kabelac
36944e1009 cache: reload only when switched to cleaner policy
Reload the cache target only when lvm2 reloads the table for the
cache with the cleaner policy.
2019-10-14 15:14:22 +02:00
David Teigland
bd21736e8b vgck: let updatemetadata repair mismatched metadata
Let vgck --updatemetadata repair cases where different mdas
hold independently valid but mismatched copies of the metadata,
i.e. different text metadata checksums or text metadata sizes.
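
A hedged usage sketch (the VG name is a placeholder):

  $ vgck --updatemetadata vg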
2019-10-11 12:57:39 -05:00
David Teigland
fe16d296b0 pvmove: remove some cmirror related code
which is no longer used
2019-10-11 11:31:42 -05:00
Zdenek Kabelac
cf8aee096f vdo: introduce get_vdo_write_policy_name 2019-10-04 17:31:55 +02:00
Zdenek Kabelac
c756f76802 vdo: correct internal API for set_vdo_write_policy
This is 'setting' function.
2019-10-04 17:31:55 +02:00
Zdenek Kabelac
9d8a028e8c vdo: keep minimum_io_size in sectors 2019-10-04 17:31:55 +02:00
Zdenek Kabelac
6a9a4b4534 resize: continue change for getting vdo status before resize
Continues commit a98b77c164.
An error needs to be reported when the status can't be obtained.
2019-10-04 17:31:55 +02:00
David Teigland
a68258339d lvmlockd: set failure flag for test mode
Set a failure flag when vg_read returns an error
for test mode.  The caller can segfault if there's
an error with no flag set.
2019-10-04 10:09:49 -05:00
David Teigland
3a8e41a67b metadata: import device name hint from metadata
Start by using it in a comment for a missing PV.
2019-09-30 11:38:10 -05:00
Zdenek Kabelac
a98b77c164 vdo: properly check percentage for resize
Avoid checking 'lv_is_active()', since special LV types do this
validation anyway when calling the _percent() function, and call it
ONLY when none of the special types is queried.

This restores support for VDO resize (as with support for
separate VDO pool activation, a plain query for lv_is_active()
does not work in this case).
2019-09-30 13:34:34 +02:00
David Teigland
26596ce7fa writecache: allow removing LV with attached writecache 2019-09-24 15:51:05 -05:00
David Teigland
76dd9b2b51 writecache: move code into new file
put writecache specific code in writecache_manip.c

should be no functional change
2019-09-24 15:51:05 -05:00
David Teigland
56aadd7fe2 lvremove: remove attached cachevol with removed LV
When an LV is removed that has an attached cachevol,
also remove the cachevol LV.
2019-09-24 15:51:05 -05:00
David Teigland
27c3c1d7c8 writecache: display layout and role fields 2019-09-20 14:55:11 -05:00
David Teigland
6f7d7089b4 writecache: use dm suffixes and lv attributes
- use internal CACHE_VOL flag on cachevol LV
- add suffixes to dm uuids for internal LVs
- display appropriate letters in the LV attr field
- display writecache's cachevol in lvs output
2019-09-20 14:08:51 -05:00
David Teigland
5d3bced5ea lvconvert: detaching cachevol with missing PVs
. For dm-cache in writethrough, always allow splitcache,
  whether the cache is missing PVs or not.

. For dm-cache in writeback, if the cache is missing PVs,
  allow splitcache with force and yes.

. For dm-writecache, if the cache is missing PVs,
  allow splitcache with force and yes.
2019-09-20 09:59:37 -05:00
David Teigland
d2c065789c lvconvert: cachevol LV can have multiple segments 2019-09-20 09:59:37 -05:00
Zdenek Kabelac
6612d8dd5e vdo: enhance activation with layer -vpool
Enhance the 'activation' experience for a VDO pool to more closely match
what happens for thin-pools, where we use a 'fake' LV to keep the pool
running even when no thinLVs are active. This gives the user a choice
whether to keep the thin-pool running (without a possibly lengthy
activation/deactivation process).

As we plan to support multiple VDO LVs mapped into a single VDO,
we want to give the user the same experience and 'use-pattern' as with
thin-pools.

This patch gives the option to activate only the VDO pool without activating
the VDO LV.

Also, due to the 'fake' layering LV we can protect usage of the VDO pool from
commands like 'mkfs' which require exclusive access to the volume,
which is no longer possible.

Note: the VDO pool contains 1024 initial sectors as an 'empty' header - such a
header is also exposed in the layered LV (as a read-only LV).
For blkid we are identified as an LV with a UUID suffix - thus a private DM
device of lvm2 - so we do not need to store any extra info in this
header space (aka zero is good enough).
2019-09-17 13:17:19 +02:00
David Teigland
25b58310e3 pvscan: avoid full scan for activation
When an online PV completed a VG, the standard
activation functions were used to activate the VG.
These functions use a full scan of all devs.
When many pvscans are run during startup and need
to activate many VGs, scanning all devs from all
the pvscans can take a long time.

Optimize VG activation in pvscan to scan only the
devs in the VG being activated.  This makes use of
the online file info that was used to determine
the VG was complete.

The downside of this approach is that pvscan activation
will not detect duplicate PVs and block activation,
where a normal activation command (which scans all
devices) would.
2019-09-03 10:11:16 -05:00
David Teigland
98d420200e vgextend: check missing device during block size check
Checking the block size when a device is missing could
trigger a segfault.
2019-09-03 10:07:56 -05:00
David Teigland
7cfbf3a394 fix segfault for invalid characters in vg name
Fixes a regression from commit ba7ff96faf
"improve reading and repairing vg metadata"

where the error path for a vg name with invalid
characters was missing an error flag, which led
to the caller not recognizing an error occurred.
Previously, an error flag was hidden in the old
_vg_make_handle function.
2019-08-29 11:35:46 -05:00
Zdenek Kabelac
4b1dcc2eeb lv_manip: add synchronizations
New udev in rawhide seems to be 'dropping' udev rule operations for devices
that no longer exist - while this is 'probably' a bug - it reveals
moments in lvm2 that likely should not run in a single
transaction, where we should wait for a cookie before submitting more work.

TODO: it seems more 'error' paths should always include synchronization
before starting to deactivate 'just activated' devices.
We should probably figure out some 'automatic' solution for this instead
of placing sync_local_dev_name() all over the place...
2019-08-26 15:32:19 +02:00
Zdenek Kabelac
c98e34e4d0 cache: improve vgremove loop
Support internal removal of the 'cache origin' volume - which we
do not normally expose to a user - however internal processing
loops may hit this condition (depending on the order of the LV list).

So when this operation is internally requested, we automatically
try to remove its 'holding' LV (the cache LV) - which will also
remove the origin.
2019-08-26 15:32:12 +02:00
Zdenek Kabelac
af0b84ccc8 snapshot: always activate
Drop the 'cluster-only' optimization so we resume ALL devices
before we try to wait on the cookie before the 'removal' operation.

It's a more correct order of operations - although possibly slightly
less efficient - but until we have a correct list of operations
'in-progress' we can't do anything better.
2019-08-26 15:23:44 +02:00
David Teigland
677833ce6f lvmcache: renaming functions and variables
related to duplicates, no functional changes.
2019-08-16 13:26:11 -05:00
David Teigland
65bcd16be2 md component detection addition in vg_read
Usually md components are eliminated in label scan and/or
duplicate resolution, but they could sometimes get into
the vg_read stage, where set_pv_devices compares the
device to the PV.

If set_pv_devices runs an md component check and finds
one, vg_read should eliminate the components.

In set_pv_devices, run an md component check always
if the PV is smaller than the device (this is not
very common.)  If the PV is larger than the device,
(more common), do the component check when the config
setting is "auto" (the default).
2019-08-16 13:24:34 -05:00
David Teigland
09bc2d0fd1 devices: clean up block size functions
Replace calls to the old dev_get_block_size function
with calls to the new dev_get_direct_block_size function,
and remove the old function.
2019-08-07 11:48:10 -05:00
David Teigland
0404539edb vgcreate/vgextend: restrict PVs with mixed block sizes
Avoid having PVs with different logical block sizes in the same VG.
This prevents LVs from having mixed block sizes, which can produce
file system errors.

The new config setting devices/allow_mixed_block_sizes (default 0)
can be changed to 1 to return to the unrestricted mode.
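
A minimal lvm.conf sketch of the opt-out mentioned above:

  # lvm.conf (sketch; 0 is the default, 1 returns to the unrestricted mode)
  devices {
          allow_mixed_block_sizes = 1
  }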
2019-08-01 10:06:47 -05:00
David Teigland
f17353e3e6 md component detection for differing PV and device sizes
This check was mistakenly removed when shifting code in commit
"separate code for setting devices from metadata parsing".

Put it back with some new conditions.
2019-07-09 13:40:41 -05:00
David Teigland
b4402bd821 exported vg handling
The exported VG checking/enforcement was scattered and
inconsistent.  This centralizes it and makes it consistent,
following the existing approach for foreign and shared
VGs/PVs, which are very similar to exported VGs/PVs.

The access policy that now applies to foreign/shared/exported
VGs/PVs, is that if a foreign/shared/exported VG/PV is named
on the command line (i.e. explicitly requested by the user),
and the command is not permitted to operate on it because it
is foreign/shared/exported, then an access error is reported
and the command exits with an error.  But, if the command is
processing all VGs/PVs, and happens to come across a
foreign/shared/exported VG/PV (that is not explicitly named on
the command line), then the command silently skips it and does
not produce an error.

A command using tags or --select handles inaccessible VGs/PVs
the same way as a command processing all VGs/PVs, and will
not report/return errors if these inaccessible VGs/PVs exist.

The new policy fixes the exit codes on a somewhat random set of
commands that previously exited with an error if they were
looking at all VGs/PVs and an exported VG existed on the system.

There should be no change to which commands are allowed/disallowed
on exported VGs/PVs.

Certain LV commands (lvs/lvdisplay/lvscan) would previously not
display LVs from an exported VG (for unknown reasons).  This has
not changed.  The lvm fullreport command would previously report
info about an exported VG but not about the LVs in it.  This
has changed to include all info from the exported VG.
2019-06-25 15:39:08 -05:00
David Teigland
d16142f90f scanning: open devs rw when rescanning for write
When vg_read rescans devices with the intention of
writing the VG, the label rescan can open the devs
RW so they do not need to be closed and reopened
RW in dev_write_bytes.
2019-06-21 10:57:49 -05:00
David Teigland
8fecd9c14e metadata: include description with command in metadata areas
Previously the VG metadata description field (which contains
the command line) was only included in backup/archive copies
of the metadata.  Now also include it in the metadata written
to the metadata areas.
2019-06-20 16:09:05 -05:00
David Teigland
4bb7d3da0e lvmcache: remove wrapper around lvmcache_get_vgnameids
This was left over from when there was an lvmetad
version of the function.
2019-06-11 14:10:14 -05:00
David Teigland
550536474f vgsplit: simplify vg creation
The way that this command now uses the global lock
followed by a label scan, it can simply check if the
new VG name exists, and if not lock it and create it.
2019-06-10 10:38:32 -05:00
David Teigland
a3a676e0e7 metadata.c: removed unused code
if 0 was placed around old vg_read code by
the previous commit.
2019-06-07 15:54:04 -05:00
David Teigland
ba7ff96faf improve reading and repairing vg metadata
The fact that vg repair is implemented as a part of vg read
has led to a messy and complicated implementation of vg_read,
and limited and uncontrolled repair capability.  This splits
read and repair apart.

Summary
-------

- take all kinds of various repairs out of vg_read
- vg_read no longer writes anything
- vg_read now simply reads and returns vg metadata
- vg_read ignores bad or old copies of metadata
- vg_read proceeds with a single good copy of metadata
- improve error checks and handling when reading
- keep track of bad (corrupt) copies of metadata in lvmcache
- keep track of old (seqno) copies of metadata in lvmcache
- keep track of outdated PVs in lvmcache
- vg_write will do basic repairs
- new command vgck --updatemetadata will do all repairs

Details
-------

- In scan, do not delete dev from lvmcache if reading/processing fails;
  the dev is still present, and removing it makes it look like the dev
  is not there.  Records are now kept about the problems with each PV
  so they can be fixed/repaired in the appropriate places.

- In scan, record a bad mda on failure, and delete the mda from
  the in-use mda list so it will not be used by vg_read or vg_write,
  only by repair.

- In scan, succeed if any good mda on a device is found, instead of
  failing if any is bad.  The bad/old copies of metadata should not
  interfere with normal usage while good copies can be used.

- In scan, add a record of old mdas in lvmcache for later, do not repair
  them while reading, and do not let them prevent us from finding and
  using a good copy of metadata from elsewhere.  One result is that
  "inconsistent metadata" is no longer a read error, but instead a
  record in lvmcache that can be addressed separate from the read.

- Treat a dev with no good mdas like a dev with no mdas, which is an
  existing case we already handle.

- Don't use a fake vg "handle" for returning an error from vg_read,
  or the vg_read_error function for getting that error number;
  just return null if the vg cannot be read or used, and an error_flags
  arg with flags set for the specific kind of error (which can be used
  later for determining the kind of repair.)

- Saving an original copy of the vg metadata, for purposes of reverting
  a write, is now done explicitly in vg_read instead of being hidden in
  the vg_make_handle function.

- When a vg is not accessible due to "access restrictions" but is
  otherwise fine, return the vg through the new error_vg arg so that
  process_each_pv can skip the PVs in the VG while processing.
  (This is a temporary accommodation for the way process_each_pv
  tracks which devs have been looked at, and can be dropped later
  when the process_each_pv dev tracking implementation is changed.)

- vg_read does not try to fix or recover a vg, but now just reads the
  metadata, checks access restrictions and returns it.
  (Checking access restrictions might be better done outside of vg_read,
   but this is a later improvement.)

- _vg_read now simply makes one attempt to read metadata from
  each mda, and uses the most recent copy to return to the caller
  in the form of a 'vg' struct.
  (bad mdas were excluded during the scan and are not retried)
  (old mdas were not excluded during scan and are retried here)

- vg_read uses _vg_read to get the latest copy of metadata from mdas,
  and then makes various checks against it to produce warnings,
  and to check if VG access is allowed (access restrictions include:
  writable, foreign, shared, clustered, missing pvs).

- Things that were previously silently/automatically written by vg_read
  that are now done by vg_write, based on the records made in lvmcache
  during the scan and read:
  . clearing the missing flag
  . updating old copies of metadata
  . clearing outdated pvs
  . updating pv header flags

- Bad/corrupt metadata are now repaired; they were not before.

Test changes
------------

- A read command no longer writes the VG to repair it, so add a write
  command to do a repair.
  (inconsistent-metadata, unlost-pv)

- When a missing PV is removed from a VG, and then the device is
  enabled again, vgck --updatemetadata is needed to clear the
  outdated PV before it can be used again, where it wasn't before.
  (lvconvert-repair-policy, lvconvert-repair-raid, lvconvert-repair,
   mirror-vgreduce-removemissing, pv-ext-flags, unlost-pv)

Reading bad/old metadata
------------------------

- "bad metadata": the mda_header or metadata text has invalid fields
  or can't be parsed by lvm.  This is a form of corruption that would
  not be caused by known failure scenarios.  A checksum error is
  typically included among the errors reported.

- "old metadata": a valid copy of the metadata that has a smaller seqno
  than other copies of the metadata.  This can happen if the device
  failed, or io failed, or lvm failed while committing new metadata
  to all the metadata areas.  Old metadata on a PV that has been
  removed from the VG is the "outdated" case below.

When a VG has some PVs with bad/old metadata, lvm can simply ignore
the bad/old copies, and use a good copy.  This is why there are
multiple copies of the metadata -- so it's available even when some
of the copies cannot be used.  The bad/old copies do not have to be
repaired before the VG can be used (the repair can happen later.)

A PV with no good copies of the metadata simply falls back to being
treated like a PV with no mdas; a common and harmless configuration.

When bad/old metadata exists, lvm warns the user about it, and
suggests repairing it using a new metadata repair command.
Bad metadata in particular is something that users will want to
investigate and repair themselves, since it should not happen and
may indicate some other problem that needs to be fixed.

PVs with bad/old metadata are not the same as missing devices.
Missing devices will block various kinds of VG modification or
activation, but bad/old metadata will not.

Previously, lvm would attempt to repair bad/old metadata whenever
it was read.  This was unnecessary since lvm does not require every
copy of the metadata to be used.  It would also hide potential
problems that should be investigated by the user.  It was also
dangerous in cases where the VG was on shared storage.  The user
is now allowed to investigate potential problems and decide how
and when to repair them.

Repairing bad/old metadata
--------------------------

When label scan sees bad metadata in an mda, that mda is removed
from the lvmcache info->mdas list.  This means that vg_read will
skip it, and not attempt to read/process it again.  If it was
the only in-use mda on a PV, that PV is treated like a PV with
no mdas.  It also means that vg_write will also skip the bad mda,
and not attempt to write new metadata to it.  The only way to
repair bad metadata is with the metadata repair command.

When label scan sees old metadata in an mda, that mda is kept
in the lvmcache info->mdas list.  This means that vg_read will
read/process it again, and likely see the same mismatch with
the other copies of the metadata.  Like the label_scan, the
vg_read will simply ignore the old copy of the metadata and
use the latest copy.  If the command is modifying the vg
(e.g. lvcreate), then vg_write, which writes new metadata to
every mda on info->mdas, will write the new metadata to the
mda that had the old version.  If successful, this will resolve
the old metadata problem (without needing to run a metadata
repair command.)

Outdated PVs
------------

An outdated PV is a PV that has an old copy of VG metadata
that shows it is a member of the VG, but the latest copy of
the VG metadata does not include this PV.  This happens if
the PV is disconnected, vgreduce --removemissing is run to
remove the PV from the VG, then the PV is reconnected.
In this case, the outdated PV needs to have its outdated metadata
removed and the PV used flag needs to be cleared.  This repair
will be done by the subsequent repair command.  It is also done
if vgremove is run on the VG.

MISSING PVs
-----------

When a device is missing, most commands will refuse to modify
the VG.  This is the simple case.  More complicated is when
a command is allowed to modify the VG while it is missing a
device.

When a VG is written while a device is missing for one of its PVs,
the VG metadata is written to disk with the MISSING flag on the PV
with the missing device.  When the VG is next used, it is treated
as if the PV with the MISSING flag still has a missing device, even
if that device has reappeared.

If all LVs that were using a PV with the MISSING flag are removed
or repaired so that the MISSING PV is no longer used, then the
next time the VG metadata is written, the MISSING flag will be
dropped.

Alternative methods of clearing the MISSING flag are:

vgreduce --removemissing will remove PVs with missing devices,
or PVs with the MISSING flag where the device has reappeared.

vgextend --restoremissing will clear the MISSING flag on PVs
where the device has reappeared, allowing the VG to be used
normally.  This must be done with caution since the reappeared
device may have old data that is inconsistent with data on other PVs.
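
As a hedged illustration of the two methods (the VG name and /dev/sdb
are hypothetical):

$ vgreduce --removemissing vg
$ vgextend --restoremissing vg /dev/sdb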

Bad mda repair
--------------

The new command:
vgck --updatemetadata VG

first uses vg_write to repair old metadata, and other basic
issues mentioned above (old metadata, outdated PVs, pv_header
flags, MISSING_PV flags).  It will also go further and repair
bad metadata:

. text metadata that has a bad checksum
. text metadata that is not parsable
. corrupt mda_header checksum and version fields

(To keep a clean diff, #if 0 is added around functions that
are replaced by new code.  These commented functions are
removed by the following commit.)
2019-06-07 15:54:04 -05:00
David Teigland
015b906069 add a warning message when updating old metadata
in an mda that had previously not been updated
2019-06-07 15:54:04 -05:00
David Teigland
47effdc025 vgck --updatemetadata is a new command
uses vg_write to correct more common or less severe issues,
and also adds the ability to repair some metadata corruption
that couldn't be handled previously.
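
Example usage (the VG name is hypothetical):

$ vgck --updatemetadata vg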
2019-06-07 15:54:04 -05:00
David Teigland
de3d3b11f4 move pv header repairs to vg_write
Correct PV header in-use or version fields
from vg_write instead of vg_read.
2019-06-07 15:54:04 -05:00
David Teigland
ab61a6d85d move wipe_outdated_pvs to vg_write
and implement it based on a device, not based
on a pv struct (which is not available when the
device is not a part of the vg.)

currently only the vgremove command wipes outdated
pvs until more advanced recovery is added in a
subsequent commit
2019-06-07 15:54:04 -05:00
David Teigland
45b164f62c create separate lvmcache update functions for read and write
The vg read and vg write cases need to update lvmcache
differently, so create separate functions for them.

The read case now handles checking for outdated mdas
and moves them aside into a new list to be repaired in
a subsequent commit.
2019-06-07 15:54:04 -05:00
David Teigland
027e0e92e6 fix vg_commit return value
The existing comment was describing the correct behavior,
but the code didn't match.  The commit is successful if
one mda was committed.  Making it depend on the result of
the internal lvmcache update was wrong.
2019-06-07 15:54:04 -05:00
David Teigland
650524b955 ability to keep track of bad mdas in lvmcache
mdas that cannot be processed by lvm because of
some corruption can be kept on a separate list.
These will be used for more advanced repair in a
subsequent commit.
2019-06-07 15:54:04 -05:00
David Teigland
aeafdc1f45 add flags to keep track of bad metadata
When reading metadata headers and text, use a new set
of flags to identify specific errors that are seen.
These will be used for more advanced repair in a
subsequent commit.
2019-06-07 15:54:04 -05:00
David Teigland
2b241eb1f6 pvck: use new dump routines for old output
Use the recently added dump routines to produce the
old/traditional pvck output, and remove the code that
had been used for that.

The validation/checking done by the new routines means
that new lines prefixed with CHECK are printed for
incorrect values.
2019-06-05 16:28:52 -05:00
Zdenek Kabelac
e3c4ab0cc7 cache: support no_discard_passdown
A recent kernel version, from kernel commit
de7180ff908b2bc0342e832dbdaa9a5f1ecaa33a,
started to report a new flag in the cache status line:
no_discard_passdown

Whenever lvm spots unknown status it reports:
Unknown feature in status:

So add recognition of this feature flag and also report it with

'lvs -o+kernel_discards'

When no_discard_passdown is found in the status, 'nopassdown' gets reported
for this field (roughly matching what we report for thin-pools).
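
A hedged usage sketch (the LV name is hypothetical):

$ lvs -o+kernel_discards vg/cachedlv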
2019-06-05 15:48:41 +02:00
David Teigland
645dd27604 separate code for setting devices from metadata parsing
Pull the code that sets devs for PVs out of the metadata
parsing code and call it separately.
2019-05-23 11:57:38 -05:00
Zdenek Kabelac
d60d59a5f3 cleanup: use unsigned type 2019-05-03 13:17:22 +02:00
David Teigland
8c87dda195 locking: unify global lock for flock and lockd
There have been two file locks used to protect lvm
"global state": "ORPHANS" and "GLOBAL".

Commands that used the ORPHAN flock in exclusive mode:
  pvcreate, pvremove, vgcreate, vgextend, vgremove,
  vgcfgrestore

Commands that used the ORPHAN flock in shared mode:
  vgimportclone, pvs, pvscan, pvresize, pvmove,
  pvdisplay, pvchange, fullreport

Commands that used the GLOBAL flock in exclusive mode:
  pvchange, pvscan, vgimportclone, vgscan

Commands that used the GLOBAL flock in shared mode:
  pvscan --cache, pvs

The ORPHAN lock covers the important cases of serializing
the use of orphan PVs.  It also partially covers the
reporting of orphan PVs (although not correctly as
explained below.)

The GLOBAL lock doesn't seem to have a clear purpose
(it may have eroded over time.)

Neither lock correctly protects the VG namespace, or
orphan PV properties.

To simplify and correct these issues, the two separate
flocks are combined into the one GLOBAL flock, and this flock
is used from the locking sites that are in place for the
lvmlockd global lock.

The logic behind the lvmlockd (distributed) global lock is
that any command that changes "global state" needs to take
the global lock in ex mode.  Global state in lvm is: the list
of VG names, the set of orphan PVs, and any properties of
orphan PVs.  Reading this global state can use the global lock
in sh mode to ensure it doesn't change while being reported.

The locking of global state now looks like:

lockd_global()
  previously named lockd_gl(), acquires the distributed
  global lock through lvmlockd.  This is unchanged.
  It serializes distributed lvm commands that are changing
  global state.  This is a no-op when lvmlockd is not in use.

lockf_global()
  acquires an flock on a local file.  It serializes local lvm
  commands that are changing global state.

lock_global()
  first calls lockf_global() to acquire the local flock for
  global state, and if this succeeds, it calls lockd_global()
  to acquire the distributed lock for global state.

Replace instances of lockd_gl() with lock_global(), so that the
existing sites for lvmlockd global state locking are now also
used for local file locking of global state.  Remove the previous
file locking calls lock_vol(GLOBAL) and lock_vol(ORPHAN).

The following commands which change global state are now
serialized with the exclusive global flock:

pvchange (of orphan), pvresize (of orphan), pvcreate, pvremove,
vgcreate, vgextend, vgremove, vgreduce, vgrename,
vgcfgrestore, vgimportclone, vgmerge, vgsplit

Commands that use a shared flock to read global state (and will
be serialized against the prior list) are those that use
process_each functions that are based on processing a list of
all VG names, or all PVs.  The list of all VGs or all PVs is
global state and the shared lock prevents those lists from
changing while the command is processing them.

The ORPHAN lock previously attempted to produce an accurate
listing of orphan PVs, but it was only acquired at the end of
the command during the fake vg_read of the fake orphan vg.
This is not when orphan PVs were determined; they were
determined by elimination beforehand by processing all real
VGs, and subtracting the PVs in the real VGs from the list
of all PVs that had been identified during the initial scan.
This is fixed by holding the single global lock in shared mode
while processing all VGs to determine the list of orphan PVs.
2019-04-29 13:01:05 -05:00
David Teigland
ccd1386070 wipe_lv: initially open LV in writable mode
wipe_lv knows it's going to write the device, so it
can open rw from the start.  It was opening readonly,
and then dev_write needed to reopen it readwrite.
2019-04-26 14:49:27 -05:00
David Teigland
c33770c02d lvmlockd: do not allow mirror LV to be activated shared
This reverts 518a8e8cfb
  "lvmlockd: activate mirror LVs in shared mode with cmirrord"

because while activating a mirror LV with cmirrord worked,
changes to the active cmirror did not work.
2019-04-04 13:21:38 -05:00
Zdenek Kabelac
fcec6691f0 thin: fix maintenance of _pmspare
When metadata grows, lvm2 may need to also extend the _pmspare volume.
2019-04-03 13:28:54 +02:00
Zdenek Kabelac
e27d027155 thin: resize metadata with data
When data are growing, adapt also size of metadata.
As we get way too many reports from users doing huge growth of
the data portion while keeping the metadata small and avoiding monitoring.

So to enhance the user experience, in case the user requests growth of
the thin-pool (without passing a PV list for growth), lvm2 will automatically
grow also the metadata part of the thin-pool (if possible).
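
A hedged usage sketch, assuming a thin-pool named vg/pool and a
hypothetical size:

$ lvextend -L+100G vg/pool
# the pool metadata LV is grown along with the data when possible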
2019-04-03 13:28:22 +02:00
Zdenek Kabelac
7c3de2fd93 thin: introduce estimate_thin_pool_metadata_size
Add a function for estimating the thin-pool metadata size for a given size of
data. The function uses the already existing internal API so it can
be reused for resizing thin-pool data.
2019-04-03 13:27:17 +02:00
David Teigland
85e68a8333 lvextend: refresh shared LV remotely using dlm/corosync
When lvextend extends an LV that is active with a shared
lock, use this as a signal that other hosts may also have
the LV active, with gfs2 mounted, and should have the LV
refreshed to reflect the new size.  Use the libdlmcontrol
run api, which uses dlm_controld/corosync to run an
lvchange --refresh command on other cluster nodes.
2019-03-21 12:38:20 -05:00
David Teigland
d369de8399 lvextend: allow on LV active with a shared lock
Detect when a shared lock exists, don't require the
normal exclusive lock, and allow the lvextend.
2019-03-21 12:38:20 -05:00
Zdenek Kabelac
677aa84be3 vdo: enable caching for vdopool LV and vdo LV
Allow using caching with VDO.
The user can cache either a single vdopool or
a vdo LV - where the caching is placed depends on the use-case
and it's up to the user to decide which kind of speed is expected.
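
A hedged usage sketch (all LV names are hypothetical):

# cache the VDO pool:
$ lvconvert --type cache --cachevol fast vg/vdopool
# or cache the VDO LV itself:
$ lvconvert --type cache --cachevol fast vg/vdolv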
2019-03-20 14:38:31 +01:00
Zdenek Kabelac
0db22c5f81 lv_manip: insert remove layer skips pools
Fix renaming of subLVs when removing and inserting layers - this
became visible when using stacked VDO pools.
2019-03-20 14:38:05 +01:00
Zdenek Kabelac
1cc690e911 thin: max thin 2019-03-20 14:37:44 +01:00
David Teigland
4e20ebd6a1 pvscan: ignore online for shared and foreign PVs
Activation would not be allowed anyway, but we can
check for these cases early and avoid wasted time in
pvscan managing online files and attempting activation.
2019-03-05 15:19:05 -06:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
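
A hedged sketch of the two forms (LV names are hypothetical):

# cache pool approach:
$ lvconvert --type cache --cachepool fastpool vg/main
# cache vol approach:
$ lvconvert --type cache --cachevol fast vg/main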
2019-02-27 08:52:34 -06:00
Zdenek Kabelac
d19e372795 cleanup: indent 2019-01-28 22:39:10 +01:00
Zdenek Kabelac
78dd9d820d thin: select chunk size as power of 2
Whenever thin-pool chunk size is unspecified and left for lvm calculation
try to select the size as the nearest higher power-of-2 instead of
just a multiple of 64KiB.
2019-01-28 22:17:25 +01:00
Zdenek Kabelac
58ad831c72 cache: select chunk size as power of 2
When cache chunk size is not configured, and left for lvm deduction,
select the value which is power-of-2.
2019-01-28 22:17:14 +01:00
Zdenek Kabelac
105a8edea1 lv_manip: better work with PERCENT_VG modifier with lvresize
Fixing recent commit 022ebb0cfe.
A resize already has a size that needs to be counted with,
otherwise an upsizing operation could turn into a size reduction.
2019-01-21 15:39:24 +01:00
Zdenek Kabelac
f3c52a515b vdo: enable dmeventd resize 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
a16d914d34 cleanup: better naming 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
08cabe9b83 vdo: allow resize of VDO and VDO pool volumes
Now with the newer VDO kvdo target we can start to use the standard
mechanism to enable resize of VDO volumes.

VDO pool can be grown.

The virtual volume grows on top of the VDO pool when it is not big enough.
A reduced VDO LV calls discard for the reduced areas - this can
take a long time!

TODO: implement some pollable mechanism for out-of-lock TRIM.
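
A hedged usage sketch (LV names and sizes are hypothetical):

$ lvextend -L+10G vg/vdopool
$ lvextend -L+50G vg/vdolv
$ lvreduce -L-20G vg/vdolv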
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
bd6709cec6 vdo: size reduction requires VDO to be active
To be able to send discard to reduced areas - the VDO LV needs to
be active.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
f1ad4b0679 vdo: discard reduced area
Implement sending discard to reduced LV area.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
ca72d19691 vdo: estimate virtual size after resize 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
ab031d673d vdo: introduce function for estimation of virtual size 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
022ebb0cfe lv_manip: better work with PERCENT_VG modifier
When using 'lvcreate -l100%VG' and there is a big disproportion between
the real available space and the requested setting - automatically fall back
to 100%FREE.

The difference can be seen when the VG is big and most space was already
allocated, so the requested 100%VG can end up (and by the spec for the %
modifier it's correct) as an LV with a size of 1%VG.  Usually this is not
a big problem - but in some cases - like cache-pool allocation - this
can result in a big difference for chunk size selection.

With this patch it more closely matches common-sense logic without
the need for reiteration of too big changes in the lvm2 core ATM.

TODO: in the future there should be allocator solving all allocations
in a single call.
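
A hedged example of the affected case (the VG and LV names are
hypothetical):

$ lvcreate -l100%VG -n lv vg
# may now fall back to 100%FREE when most of the VG is already allocated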
2019-01-21 12:53:15 +01:00
Zdenek Kabelac
26ead4bf45 cov: extent_size cannot be 0
Make this obvious to coverity.
2018-12-21 21:45:08 +01:00
Zdenek Kabelac
9dfb1a11b7 cov: drop unneeded header file
MAX macro no longer needed in pe_align.
2018-12-21 21:45:08 +01:00
Zdenek Kabelac
3320ab8334 lib: move towards v2 version of VDO format
Drop very old original format of VDO target and focus on V2 version.
So some variables were renamed or replaced.
No compatibility is preserved (on the assumption that so far this is an
experimental feature with no real users).

Note: VDO currently calls this version 6.2.
2018-12-20 13:26:55 +01:00
Heinz Mauelshagen
e82303fd6a lvcreate/lvconvert: optionally reenable mirrored mirror log for testing purposes only
This is a followup patch to commit edb72cb70c
to support related lvm2 test suite tests.

A 'global/support_mirrored_mirror_log' bool configuration variable gets
introduced allowing the creation of, or conversion to mirrored 'mirror'
logs if set.  The capability to create these in turn allows the rest of
the tests to perform activation of such existing LVs and their conversions
to disk/core 'mirror' logs.

Display a disclaimer warning if enabled that this is not for regular use.

Add definition of the enabled config option to respective test scripts.
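
A hedged example of enabling it for a single test command (names and
sizes are hypothetical):

$ lvcreate --config 'global { support_mirrored_mirror_log = 1 }' \
    --type mirror -m 1 --mirrorlog mirrored -L 100M -n lv vg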

Related: rhbz1643562
2018-12-17 19:28:54 +01:00
Ming-Hung Tsai
859feb81e5 lvmanip: uninitialized members in struct pv_list (#10)
Scenario: Given an existing LV `lvol0`, I want to create another LV
on the PVs used by `lvol0`.

I use `build_parallel_areas_from_lv()` to obtain the `pv_list` of each segment.
However, the returned `pv_list` is not properly initialized, which causes
a segfault in subsequent operations.
2018-12-14 15:23:18 +01:00
Heinz Mauelshagen
dd5716ddf2 raid: fix (de)activation of RaidLVs with visible SubLVs
There's a small window during creation of a new RaidLV when
rmeta SubLVs are made visible to wipe them in order to prevent
erroneous discovery of stale RAID metadata.  In case a crash
prevents the SubLVs from being committed hidden after such
wiping, the RaidLV can still be activated with the SubLVs visible.
During deactivation though, a deadlock occurs because the visible
SubLVs are deactivated before the RaidLV.

The patch adds _check_raid_sublvs to the raid validation in merge.c,
an activation check to activate.c (paranoid, because the merge.c check
will prevent activation in case of visible SubLVs) and shares the
existing wiping function _clear_lvs in raid_manip.c moved to lv_manip.c
and renamed to activate_and_wipe_lvlist to remove code duplication.
Whilst on it, introduce activate_and_wipe_lv to share with
(lvconvert|lvchange).c.

Resolves: rhbz1633167
2018-12-11 16:35:34 +01:00
Heinz Mauelshagen
edb72cb70c lvcreate/lvconvert: prohibit creation of/conversion to mirrored mirror logs
In RHEL7 we marked mirrored mirror logs as deprecated and
added a related message.  This patch prohibits creating new
'mirror' LVs with that log type or converting existing LVs
to have one.

Existing LVs with mirrored mirror log can be activated
and converted to disk/core logs.

Avoid double deprecation message when running lvconvert.

Resolves: rhbz1643562
2018-12-08 02:52:50 +01:00
David Teigland
904e1e3d26 Place the first PE at 1 MiB for all defaults
. When using default settings, this commit should change
  nothing.  The first PE continues to be placed at 1 MiB
  resulting in a metadata area size of 1020 KiB (for
  4K page sizes; slightly smaller for larger page sizes.)

. When default_data_alignment is disabled in lvm.conf,
  align pe_start at 1 MiB, based on a default metadata area
  size that adapts to the page size.  Previously, disabling
  this option would result in mda_size that was too small
  for common use, and produced a 64 KiB aligned pe_start.

. Customized pe_start and mda_size values continue to be
  set as before in lvm.conf and command line.

. Remove the configure option for setting default_data_alignment
  at build time.

. Improve alignment related option descriptions.

. Add section about alignment to pvcreate man page.

Previously, DEFAULT_PVMETADATASIZE was 255 sectors.
However, the fact that the config setting named
"default_data_alignment" has a default value of 1 (MiB)
meant that DEFAULT_PVMETADATASIZE was having no effect.

The metadata area size is the space between the start of
the metadata area (page size offset from the start of the
device) and the first PE (1 MiB by default due to
default_data_alignment 1.)  The result is a 1020 KiB metadata
area on machines with 4KiB page size (1024 KiB - 4 KiB),
and smaller on machines with larger page size.

If default_data_alignment was set to 0 (disabled), then
DEFAULT_PVMETADATASIZE 255 would take effect, and produce a
metadata area that was 188 KiB and pe_start of 192 KiB.
This was too small for common use.

This is fixed by making the default metadata area size a
computed value that matches the value produced by
default_data_alignment.
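
A hedged example of a customized layout (the device name and values are
hypothetical):

$ pvcreate --dataalignment 1m --metadatasize 1020k /dev/sdb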
2018-11-26 16:36:50 -06:00
David Teigland
3ae5569570 Add dm-writecache support
dm-writecache is used like dm-cache with a standard LV
as the cache.

$ lvcreate -n main -L 128M -an foo /dev/loop0

$ lvcreate -n fast -L 32M -an foo /dev/pmem0

$ lvconvert --type writecache --cachepool fast foo/main

$ lvs -a foo -o+devices
  LV            VG  Attr       LSize   Origin        Devices
  [fast]        foo -wi-------  32.00m               /dev/pmem0(0)
  main          foo Cwi------- 128.00m [main_wcorig] main_wcorig(0)
  [main_wcorig] foo -wi------- 128.00m               /dev/loop0(0)

$ lvchange -ay foo/main

$ dmsetup table
foo-main_wcorig: 0 262144 linear 7:0 2048
foo-main: 0 262144 writecache p 253:4 253:3 4096 0
foo-fast: 0 65536 linear 259:0 2048

$ lvchange -an foo/main

$ lvconvert --splitcache foo/main

$ lvs -a foo -o+devices
  LV   VG  Attr       LSize   Devices
  fast foo -wi-------  32.00m /dev/pmem0(0)
  main foo -wi------- 128.00m /dev/loop0(0)
2018-11-06 14:18:41 -06:00
David Teigland
cac4a9743a Allow dm-cache cache device to be standard LV
If a single, standard LV is specified as the cache, use
it directly instead of converting it into a cache-pool
object with two separate LVs (for data and metadata).

With a single LV as the cache, lvm will use blocks at the
beginning for metadata, and the rest for data.  Separate
dm linear devices are set up to point at the metadata and
data areas of the LV.  These dm devs are given to the
dm-cache target to use.

The single LV cache cannot be resized without recreating it.

If the --poolmetadata option is used to specify an LV for
metadata, then a cache pool will be created (with separate
LVs for data and metadata.)

Usage:

$ lvcreate -n main -L 128M vg /dev/loop0

$ lvcreate -n fast -L 64M vg /dev/loop1

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  main vg -wi-a----- 128.00m linear /dev/loop0(0)
  fast vg -wi-a-----  64.00m linear /dev/loop1(0)

$ lvconvert --type cache --cachepool fast vg/main

$ lvs -a vg
  LV           VG Attr       LSize   Origin       Pool  Type   Devices
  [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
  main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
  [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)

$ lvchange -ay vg/main

$ dmsetup ls
vg-fast_cdata   (253:4)
vg-fast_cmeta   (253:5)
vg-main_corig   (253:6)
vg-main (253:24)
vg-fast (253:3)

$ dmsetup table
vg-fast_cdata: 0 98304 linear 253:3 32768
vg-fast_cmeta: 0 32768 linear 253:3 0
vg-main_corig: 0 262144 linear 7:0 2048
vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
vg-fast: 0 131072 linear 7:1 2048

$ lvchange -an vg/main

$ lvconvert --splitcache vg/main

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  fast vg -wi-------  64.00m linear /dev/loop1(0)
  main vg -wi------- 128.00m linear /dev/loop0(0)
2018-11-06 13:44:54 -06:00
David Teigland
a686391eca cache: reorganize cache_set_policy
to prepare for future addition
2018-11-06 11:36:29 -06:00
David Teigland
23948e99b3 cache: improve error message about flush 2018-11-06 11:36:29 -06:00
David Teigland
3e547fa952 cache: improve warning message about cached thin data 2018-11-06 11:36:28 -06:00
David Teigland
e26dacf30a cache: factor getting cache mode
so part can be called separately
2018-11-06 11:36:28 -06:00
David Teigland
8d7075528f cache: add cache_mode_num_to_str
Requires only string and number, no specific lv/seg type.
2018-11-06 11:36:28 -06:00
Zdenek Kabelac
70e3d0a613 cov: remove unused assigns 2018-11-05 17:25:11 +01:00
David Teigland
aecf542126 metadata: prevent writing beyond metadata area
lvm uses a bcache block size of 128K.  A bcache block
at the end of the metadata area will overlap the PEs
from which LVs are allocated.  How much depends on
alignments.  When lvm reads and writes one of these
bcache blocks to update VG metadata, it can also be
reading and writing PEs that belong to an LV.

If these overlapping PEs are being written to by the
LV user (e.g. filesystem) at the same time that lvm
is modifying VG metadata in the overlapping bcache
block, then the user's updates to the PEs can be lost.

This patch is a quick hack to prevent lvm from writing
past the end of the metadata area.
2018-10-29 16:53:17 -05:00
Heinz Mauelshagen
8df2dd66ce Revert "raid: fix left behind SubLVs"
This reverts commit 16ae968d24.

We need to come up with a better fix, because we fall short
wiping all known signatures when not using the wipe_lv API.
2018-10-25 14:35:56 +02:00
Heinz Mauelshagen
16ae968d24 raid: fix left behind SubLVs
lvm metadata writes, commits and activations are performed
for (newly) allocated RAID metadata SubLVs to wipe any preexisting
data, thus avoiding false raid superblock positives on RaidLV activation.

This process can be interrupted by command or system crashes,
thus leaving stale SubLVs in the lvm metadata as a problem.

Because we hold an exclusive lock in this metadata SubLV wiping
process, we can address this problem by avoiding the aforementioned
commits/writes/activations altogether, wiping the respective first
sector of the first physical extent allocated to any metadata SubLV
directly via the existing dev_set() API.

Succeeds all LVM RAID tests.

Related: rhbz1633167
2018-10-24 16:35:30 +02:00
Zdenek Kabelac
fdd76da33d cov: drop unneeded header files 2018-10-15 17:49:44 +02:00
Zdenek Kabelac
253989ecd9 cov: fix error path
Avoid calling the 'bad:' section since we have not set 'fd' yet,
and instead directly return the failing 0 value.
2018-10-15 17:49:44 +02:00
Zdenek Kabelac
fbfbbf6d6a cov: drop check for pointer
The pointer must always be set, and it has already been dereferenced.
2018-10-15 14:24:28 +02:00
Heinz Mauelshagen
989626926c lvconvert: allow raid4 -> linear conversion request
Allow "lvconvert --type linear RaidLV" on a raid4 LV
providing convenient interim steps to convert to linear.

Add respective new test
   lvconvert-raid-takeover-raid4_to_linear.sh
and
   lvconvert-raid-takeover-linear_to_raid4.sh
for linear to raid4 once on it.
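
A hedged usage sketch (the LV name is hypothetical):

$ lvconvert --type linear vg/raid4lv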
2018-09-10 18:43:21 +02:00
Heinz Mauelshagen
e2e30a64ab lvconvert: fix interim segtype regression on raid6 conversions
When converting from striped/raid0/raid0_meta
to raid6 with > 2 stripes, allow possible
direct conversion (to raid6_n_6).

In case of 2 stripes, first convert to raid5_n to restripe
to at least 3 data stripes (the raid6 minimum in lvm2) in
a second conversion before finally converting to raid6_n_6.

As before, raid6_n_6 then can be converted
to any other raid6 layout.

Enhance lvconvert-raid-takeover.sh to test the
2 stripes conversions to raid6.

Resolves: rhbz1624038
2018-09-07 13:48:19 +02:00
Heinz Mauelshagen
22a1304368 lvconvert: avoid superfluous interim raid type
When converting striped/raid0*/raid6_n_6 <-> raid4,
avoid superfluous interim raid5_n layout.

Related: rhbz1447809
2018-08-31 19:04:19 +02:00
Heinz Mauelshagen
e83c4f07ca lvconvert: fix conversion attempts to linear
"lvconvert --type linear RaidLV" on striped and raid4/5/6/10
have to provide the convenient interim layouts.  Fix involves
a cleanup to the convenience type function.

As a result of testing, add missing sync waits to
lvconvert-raid-reshape-linear_to_raid6-single-type.sh.

Resolves: rhbz1447809
2018-08-22 17:12:43 +02:00
Heinz Mauelshagen
4578411633 lvconvert: fix regression preventing direct striped conversion
Conversion to striped from raid0/raid0_meta is directly possible.

Fix a regression setting superfluous interim raid5_n conversion type
introduced by commit bd7cdd0b09.

Add new test script lvconvert-raid0-striped.sh.

Resolves: rhbz1608067
2018-08-21 17:28:56 +02:00
Zdenek Kabelac
acab591378 mirror: fix splitmirrors for mirror type
With the improved mirror activation code a --splitmirrors issue popped up
since proper preload code and deactivation
for the split mirror leg were missing.
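
A hedged usage sketch (LV names are hypothetical):

$ lvconvert --splitmirrors 1 --name split vg/mirrorlv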
2018-08-07 17:58:30 +02:00
Zdenek Kabelac
c34291e3bf cache: drop metadata_format validation
Allow using any combination of cache metadata format and policy.
2018-08-07 17:57:00 +02:00
David Teigland
778ce8d808 lvconvert: improve text about splitmirrors
in messages and man page.
2018-07-23 12:28:48 -05:00
David Teigland
117160b27e Remove lvmetad
Native disk scanning is now both reduced and
async/parallel, which makes it comparable in
performance (and often faster) when compared
to lvm using lvmetad.

Autoactivation now uses local temp files to record
online PVs, and no longer requires lvmetad.

There should be no apparent command-level change
in behavior.
2018-07-11 11:26:42 -05:00
Zdenek Kabelac
12213445b5 vgchange: vdo support
Support vgchange usage with VDO segtype.
Also changing the extent size needs a small update for the vdo virtual extent.

TODO: API needs enhancements so it's not about adding ifs() everywhere.
2018-07-09 15:29:16 +02:00
Zdenek Kabelac
c58733ca15 lvcreate: vdo support
Supports basic:  'lvcreate --vdo -LXXXG -VYYYG vg/vdoname -n lvname'
Allows creating a basic VDO pool volume and a virtual VDO volume.
2018-07-09 15:29:12 +02:00
Zdenek Kabelac
6945bbdbc6 lvresize: vdo support
Unsupported ATM.

Wait till VDO kernel target starts to use updated resize sequence,
LOAD, SUSPEND, RESUME.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
44c99a8822 vdo: data percentage
Display percentage of used virtual size of vdo-pool volume.
2018-07-09 15:28:35 +02:00