Commit Graph

948 Commits

Zdenek Kabelac
77fdc17d70 alloc: improve estimation of sufficient_pes_free
Metadata size was calculated correctly only for raids.

Fixes a crash during lvcreate when a thin-pool was created on a VG
whose remaining free space was only large enough to fit a single
metadata LV, but not also its _pmspare.

Lvcreate crashed with this assert message:

lvcreate: metadata/pv_map.c:198: consume_pv_area: Assertion `to_go <= pva->count' failed.
Aborted (core dumped)

TODO: there is probably too much overloading of several alloc_handle
variables.

Reported-by: Wu Guanghao <wuguanghao3@huawei.com>
Reported-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
2020-09-11 21:51:24 +02:00
Zdenek Kabelac
9f78acfee9 thin: compensate metadata size by extra percent
When using --use-policy for automatic extension of a thin-pool,
the extension of the thin-pool's metadata itself can actually take
some extra space.
Since I'm not aware of an exact compensation formula, add just
1% extra to the calculated amount and hope it fits.

The goal is to always have a usable thin-pool that stays
below pool_metadata_min_threshold().
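
For illustration, a minimal sketch of the 1% compensation described
above (hypothetical helper name, not the actual lvm2 code):

#include <stdint.h>

/* Add 1% extra on top of the estimated metadata size, rounding up
 * so even small estimates grow by at least one unit. */
static uint64_t compensate_metadata_size(uint64_t estimated_sectors)
{
        return estimated_sectors + (estimated_sectors + 99) / 100;
}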
2020-09-11 21:42:37 +02:00
Zdenek Kabelac
b798554a20 lv_manip: even better rounding 2020-09-11 13:37:04 +02:00
Zdenek Kabelac
bc09803628 lv_manip: relocate check to proper function 2020-09-10 23:54:33 +02:00
Zdenek Kabelac
e7f5acdfa6 lvextend: improve percentage estimation
Correct the rounding rules for percentage evaluation.

Validate the supported range of the percentage
(although ranges are already validated earlier on the code path).
2020-09-10 23:54:31 +02:00
Zdenek Kabelac
3e6bb77228 lv_manip: add synchronization points 2020-09-08 21:23:03 +02:00
Zdenek Kabelac
56c41b7522 cov: avoid duplicated assign 2020-09-01 17:57:50 +02:00
Zdenek Kabelac
fd96f1014b gcc: zero-sized array to flexible array C99
Switch the remaining zero-sized structs to flexible arrays to be C99
compliant.

These simple rules should apply:

- The incomplete array type must be the last element within the structure.
- There cannot be an array of structures that contain a flexible array member.
- Structures that contain a flexible array member cannot be used as a member of another structure.
- The structure must contain at least one named member in addition to the flexible array member.

Some pieces of the code should still be improved, though.
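
For illustration, a generic example of the change (not a specific
lvm2 struct):

struct name_list {
        unsigned count;    /* at least one other named member is required */
        char *names[];     /* C99 flexible array member; was: names[0] */
};

/* The trailing array is sized explicitly at allocation time, e.g.:
 *   struct name_list *nl = malloc(sizeof(*nl) + n * sizeof(char *));
 */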
2020-09-01 17:57:50 +02:00
Zdenek Kabelac
ff4827ffb1 lv_manip: get_default_region_size return uint32_t 2020-08-28 21:43:02 +02:00
Zdenek Kabelac
46d15b5e4d wipe_lv: close devices on error path
The device was kept open, preventing it from being deactivated
and removed on the error path.
2020-08-19 15:09:09 +02:00
Zdenek Kabelac
9b9bf8786f raid: no wiping when zeroing raid metadata device
Currently lvm2 does not wipe signatures when creating 'metadata' volumes,
and raid _rmeta was the only exception - so make the behavior consistent
with the other metadata devices and drop the wiping ATM.
Also drop some extra debug messages, since wipe_lv() is now more
explanatory on its own.
Also note - although lvm2 now does not wipe signatures - the error from
such wiping used to be effectively 'ignored' before wipe_lv() started
to return an error (with a recent commit), and raid creation continued
with an 'unzeroed' metadata device.

TODO: Several issues to resolve:

1. We may want to flip to wiping for all LVs (in that case we need to
support passing --yes & --force).

2. We may also want to clear the whole metadata device - however the
current function is also used for wiping e.g. a snapshot COW device,
which is likely not a good candidate for full device zeroing.
We may also need to think about better logic for when the extent size
enforces very large LVs, while only a small portion of the LV is ever
being used.

3. Using TRIM instead of zeroing the metadata device might be worth
implementing.

2020-07-08 11:40:55 +02:00
Zdenek Kabelac
fe78cd4082 wipe_lv: always zero at least 4K
When zero_sectors was passed a value like 1, we could zero only 1 sector.
Reinstate the behavior of always zeroing at least a 4K block.
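
A minimal sketch of the clamping, assuming 512-byte sectors
(hypothetical names, not the exact lvm2 code):

#include <stdint.h>

#define SECTOR_SIZE 512u
#define MIN_ZERO_SECTORS (4096u / SECTOR_SIZE)  /* always zero >= 4K */

static uint64_t zero_sectors_to_use(uint64_t requested)
{
        return (requested < MIN_ZERO_SECTORS) ? MIN_ZERO_SECTORS : requested;
}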
2020-07-08 11:12:54 +02:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid polluting metadata with 'garbage' content, or possibly
leaking stale data in case a user wants to upload metadata somewhere,
ensure the metadata device is fully zeroed upon allocation.

This behaviour may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0
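
For reference, the corresponding lvm.conf fragment (section syntax
assumed from the allocation/zero_metadata path above):

allocation {
        # Restore the old behaviour: do not zero new metadata devices.
        zero_metadata = 0
}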

TODO: add zeroing for extension of metadata volume.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
edbc5a62b2 wipe_lv: make error a fatal event
A failure in wiping/zeroing stops the command.
If the user wants to avoid aborting the command, they should use -Zn or -Wn
to skip the wiping.

Note: there is no easy way to distinguish which kind of failure has
happened - so it's safest to not proceed any further.
2020-06-24 15:01:03 +02:00
Heinz Mauelshagen
04bba5ea42 lv{resize,extend,reduce}: also check for 2-legged raid4
Users can also convert 2-legged raid1 to raid4, thus causing a 'Bus error'
on resize requests.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 14:02:31 +02:00
Heinz Mauelshagen
2cf0f90780 lv{resize,extend,reduce}: reject size change on 2-legged raid5*
Reject size change requests to avoid a 'Bus error' and
display a hint to convert to more stripes.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 13:52:56 +02:00
David Teigland
240062a183 writecache: remove from an active lv 2020-06-10 12:13:31 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync column in lvs).

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
David Teigland
81d0333067 writecache: allow removing wcorig lv
like removing corig
2020-02-21 12:41:52 -06:00
David Teigland
2a6078f961 writecache: fix splitcache when origin is raid 2020-02-04 16:12:09 -06:00
Zdenek Kabelac
336361b2f2 lv_manip: add extra check for existing origin_lv
clang: this is supposedly an impossible path to hit, as we should always
have origin_lv defined when running this path, but adding protection
isn't a big issue and makes this obvious to the analyzer.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
409362c127 lv_manip: add error handling for _reserve_area
Since _reserve_area() may fail due to an allocation failure,
add support for propagating this already-reported failure upward.

FIXME: it's log_error() without causing direct command failure.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
08f36dd093 lvextend: fix resizing volumes of different segtype
When resizing 2 volumes such as a thin-pool and its metadata, which may
be of different types, the command was actually expecting
both LVs to be of the same segtype - and would throw an error
when they differed.

This patch fixes it by setting the new segtype from the last segment of
the 2nd extended device.

It also fixes the possible 'percentage' extension setup that
might have been used for the 'primary' volume - while the 'secondary'
LV always goes with a direct size - as we do not support a 'percentage'
setup for it.

This mainly affects usage of thin-pools, where extending the
thin-pool data size may also lead to extending the metadata size.
2019-11-11 22:44:25 +01:00
Zdenek Kabelac
2266a1863f lv_manip: add lv_uniq_rename_update
Add a function to rename an LV either to the passed name or, if
that name is already in use, to a newly generated lvol% name.
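
A toy behavior sketch of that fallback (hypothetical names; the real
lv_uniq_rename_update() works against the VG's LV list):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in: the real code would check the VG's list of LVs. */
static bool lv_name_in_use(const char *name)
{
        return !strcmp(name, "lvol0") || !strcmp(name, "data");
}

/* Try the requested name first; if it is taken, generate a fresh
 * "lvol%u" name, as described above. */
static const char *pick_lv_name(const char *wanted, char *buf, size_t len)
{
        unsigned i = 0;

        if (!lv_name_in_use(wanted))
                return wanted;

        do
                snprintf(buf, len, "lvol%u", i++);
        while (lv_name_in_use(buf));

        return buf;
}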
2019-10-21 12:14:15 +02:00
Zdenek Kabelac
ec85dfe0f8 cachevol: support removal of cachevol
Removal of a cachevol is equivalent to lvconvert --uncache
and works the same way as with a cachepool.
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
5938cde11b cache: single code for removal of cached volume
Use the same routine for dropping a cached LV for both cachevol and cachepool.
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
9969361b51 debug: missing trace 2019-10-17 13:03:50 +02:00
Zdenek Kabelac
201ffbd04a cachevol: use lv_cache_remove
Use the same routine for dropping cache.
2019-10-14 15:20:25 +02:00
Zdenek Kabelac
6a9a4b4534 resize: continue change for getting vdo status before resize
Continues commit a98b77c164.
An error needs to be reported when the status can't be obtained.
2019-10-04 17:31:55 +02:00
Zdenek Kabelac
a98b77c164 vdo: properly check percentage for resize
Avoid checking 'lv_is_active()', since special LV types do this
validation anyway when calling the _percent() function; call it
ONLY when none of the special types is queried.

This restores support for VDO resize (as with support for
separate VDO pool activation, a plain query of lv_is_active()
does not work in this case).
2019-09-30 13:34:34 +02:00
David Teigland
26596ce7fa writecache: allow removing LV with attached writecache 2019-09-24 15:51:05 -05:00
David Teigland
56aadd7fe2 lvremove: remove attached cachevol with removed LV
When an LV is removed that has an attached cachevol,
also remove the cachevol LV.
2019-09-24 15:51:05 -05:00
David Teigland
27c3c1d7c8 writecache: display layout and role fields 2019-09-20 14:55:11 -05:00
David Teigland
5d3bced5ea lvconvert: detaching cachevol with missing PVs
. For dm-cache in writethrough, always allow splitcache,
  whether the cache is missing PVs or not.

. For dm-cache in writeback, if the cache is missing PVs,
  allow splitcache with force and yes.

. For dm-writecache, if the cache is missing PVs,
  allow splitcache with force and yes.
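
A sketch of these rules as a decision function (hypothetical names;
not lvm2's actual API):

#include <stdbool.h>

enum cache_kind { DM_CACHE, DM_WRITECACHE };

static bool splitcache_allowed(enum cache_kind kind, bool writeback,
                               bool cache_missing_pvs, bool force_and_yes)
{
        if (!cache_missing_pvs)
                return true;                  /* nothing is at risk */

        if (kind == DM_CACHE && !writeback)
                return true;                  /* writethrough: always allow */

        return force_and_yes;                 /* writeback data may be lost */
}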
2019-09-20 09:59:37 -05:00
Zdenek Kabelac
4b1dcc2eeb lv_manip: add synchronizations
New udev in rawhide seems to be 'dropping' udev rule operations for devices
that no longer exist - while this is 'probably' a bug - it reveals
moments in lvm2 that likely should not run in a single
transaction, where we should wait for a cookie before submitting more work.

TODO: it seems more 'error' paths should always include synchronization
before starting to deactivate 'just activated' devices.
We should probably figure out some 'automatic' solution for this instead
of placing sync_local_dev_name() all over the place...
2019-08-26 15:32:19 +02:00
Zdenek Kabelac
c98e34e4d0 cache: improve vgremove loop
Support internal removal of a 'cache origin' volume - which we
do not normally expose to a user - however internal processing
loops may hit this condition (depending on the order of the listed LVs).

So when this operation is internally requested - we automatically
try to remove its 'holding' LV (the cache LV) - which will also
remove the origin.
2019-08-26 15:32:12 +02:00
David Teigland
ccd1386070 wipe_lv: initially open LV in writable mode
wipe_lv knows it's going to write the device, so it
can open rw from the start.  It was opening readonly,
and then dev_write needed to reopen it readwrite.
2019-04-26 14:49:27 -05:00
Zdenek Kabelac
fcec6691f0 thin: fix maintenance of _pmspare
When metadata grows, lvm2 may need to also extend the _pmspare volume.
2019-04-03 13:28:54 +02:00
Zdenek Kabelac
e27d027155 thin: resize metadata with data
When the data is growing, also adapt the size of the metadata.
We get way too many reports from users doing huge growth of the
data portion while keeping the metadata small and avoiding monitoring.

So to enhance the user experience, when a user requests growth of a
thin-pool (without passing a PV list for the growth), lvm2 will
automatically grow also the metadata part of the thin-pool (if possible).
2019-04-03 13:28:22 +02:00
David Teigland
85e68a8333 lvextend: refresh shared LV remotely using dlm/corosync
When lvextend extends an LV that is active with a shared
lock, use this as a signal that other hosts may also have
the LV active, with gfs2 mounted, and should have the LV
refreshed to reflect the new size.  Use the libdlmcontrol
run api, which uses dlm_controld/corosync to run an
lvchange --refresh command on other cluster nodes.
2019-03-21 12:38:20 -05:00
David Teigland
d369de8399 lvextend: allow on LV active with a shared lock
Detect when a shared lock exists, don't require the
normal exclusive lock, and allow the lvextend.
2019-03-21 12:38:20 -05:00
Zdenek Kabelac
677aa84be3 vdo: enable caching for vdopool LV and vdo LV
Allow using caching with VDO.
A user can cache either a single vdopool or a vdo LV - where the caching
is placed differs per use-case, and it's up to the user to decide which
kind of speed is expected.
2019-03-20 14:38:31 +01:00
Zdenek Kabelac
0db22c5f81 lv_manip: insert remove layer skips pools
Fix renaming of subLVs when removing and inserting layers - this
became visible when using stacked VDO pools.
2019-03-20 14:38:05 +01:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
2019-02-27 08:52:34 -06:00
Zdenek Kabelac
d19e372795 cleanup: indent 2019-01-28 22:39:10 +01:00
Zdenek Kabelac
105a8edea1 lv_manip: better work with PERCENT_VG modifier with lvresize
Fixes recent commit 022ebb0cfe.
The resize already has a size that needs to be counted in,
otherwise an upsizing operation could turn into a size reduction.
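
A hedged sketch of the issue (hypothetical names; sizes in extents):
when computing a %VG-based target, the LV's current size must be
counted in, or "extend to N% of the VG" can yield fewer extents than
the LV already has:

#include <stdint.h>

static uint32_t upsize_to_vg_percent(uint32_t current_le, uint32_t vg_le,
                                     uint32_t percent)
{
        uint32_t wanted = (uint64_t)vg_le * percent / 100;

        /* an upsize request must never shrink the LV */
        return (wanted > current_le) ? wanted : current_le;
}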
2019-01-21 15:39:24 +01:00
Zdenek Kabelac
f3c52a515b vdo: enable dmeventd resize 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
a16d914d34 cleanup: better naming 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
08cabe9b83 vdo: allow resize of VDO and VDO pool volumes
Now, with the newer VDO kvdo target, we can start to use the standard
mechanism to enable resize of VDO volumes.

A VDO pool can be grown.

The virtual volume grows on top of the VDO pool when it is not big
enough. A reduced VDO LV issues discards for the reduced areas - this
can take a long time!

TODO: implement some pollable mechanism for out-of-lock TRIM.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
bd6709cec6 vdo: size reduction requires VDO to be active
To be able to send discards to the reduced areas, the VDO LV needs to
be active.
2019-01-21 12:53:16 +01:00