mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

1067 Commits

Author SHA1 Message Date
Zdenek Kabelac
0b6ee6a912 alloc: enhance estimation of sufficient_pes_free
Since commit 77fdc17d70 we always include
the log_len size in the needed extents - however we may now sometimes need
more extents than necessary - mainly when multiple PVs are involved
in the allocation.

Add logs_still_needed into the calculation of sufficient_pes_free().
2021-01-13 12:54:45 +01:00
Zdenek Kabelac
7bafae48bb gcc: cleanup warns from older gcc 2020-10-26 13:06:53 +01:00
Zdenek Kabelac
9740e98cbd lv_manip: add space into message
Just add space between %s(.
2020-10-24 01:42:16 +02:00
Zdenek Kabelac
b75c2dfe1b debug: shorten error message
Just check for sigint during log_error().
2020-10-19 16:53:18 +02:00
Zdenek Kabelac
e7fff97b8d wipe_lv: use BLKZEROOUT when possible
Since the BLKZEROOUT ioctl is supposedly the fastest
way to clear a block device, start using this ioctl
for zeroing a device. Commonly we zero only a typically
small portion of a device (8KiB) - however, since we now
also started to zero metadata devices, in the case
of e.g. thin-pool metadata this can go up to ~16GiB
and there the performance starts to be noticeable.
2020-10-02 21:04:16 +02:00
Zdenek Kabelac
c65d3a6b8a wipe_lv: interruptible wiping
Since we now block signals and wiping may take an unexpectedly long
time - support breaking the command while a wipe is in progress.
2020-10-02 21:03:19 +02:00
Zdenek Kabelac
7396f1cfee wipe_lv: drop label_scan_invalidate on error path
Since dev_set_bytes() now closes the dev on the error path itself,
remove this now unneeded call (it was introduced a few commits back
in history, thus also removing the comment from WHATS_NEW).
2020-10-02 21:02:04 +02:00
David Teigland
2272a32e6f lvmlockd vdo: add support
lvmlockd handling for a vdo lv and vdo pool is the same as for a
thin lv and thin pool.
2020-09-29 14:43:27 -05:00
Zdenek Kabelac
bd0d4de4e2 active: fix compilation without devmapper
Better support for compilation without device-mapper.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
4de6f58085 thin: use lv_status_thin and lv_status_thin_pool
Introduce structures lv_status_thin_pool and
lv_status_thin (a pair to lv_status_cache, lv_status_vdo).

Convert lv_thin_percent() -> lv_thin_status()
and lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
lv_thin_pool_status().

This way a function user can see not only percentages, but also
other important status info about the thin-pool.

TODO:
This patch tries not to change too many other things,
but pool_below_threshold() now uses the new thin-pool info to return
failure if the thin-pool cannot actually be modified.
This should be handled separately in a better way.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
92c0e8c17f writecache: archive before modification of metadata
Archive before we start to modify metadata.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
08e838f488 cleanup: avoid unneeded check
Since the creation of a thin snapshot already makes sure
the message list is empty, there is no need to check
this again.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
ef59c83f2d thin: enhance lvcreate error paths
Improve error response and reporting when creating thin snapshots.
If the thin-pool kernel metadata already has a device with the ID lvm2
tries to create, give a more meaningful error message and also properly
restore the transaction id to the value known to the thin-pool in this case.

Before, it was possible to diverge by one from the kernel TID value,
and lvm2 stacked a delete message for such a thin device.
2020-09-25 22:56:40 +02:00
Zdenek Kabelac
7c19186271 vdo: disable support for online rename of vdopool LV
Since ATM the kernel does not support this operation,
disable 'lvrename' of an active vdopool.

As a workaround, the user may simply deactivate, rename and activate.
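For illustration, a minimal sketch of the workaround (the VG/LV names here are hypothetical):

$ lvchange -an vg/vdo0       # deactivate the VDO LV using vpool0
$ lvrename vg vpool0 vpool1  # rename the now-inactive vdopool
$ lvchange -ay vg/vdo0       # reactivate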
2020-09-23 13:18:23 +02:00
Zdenek Kabelac
3a3307c0d8 vdo: enhance vdo pool extension
When a user tries to extend a vdo pool - it always needs to grow
by at least 1 full VDO slab (defined as vdo_slab_size_mb).

To avoid all the trouble around finding a 'workable' size - lvm2 automatically
increases the passed (or by --use-policies calculated) extension size
(and informs the user about the sometimes possibly large increase, as the slab
size can go up to 32GiB).

With VDO users need to always 'think big' anyway and expect such an
operation to be in the GiB range.
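For illustration, a sketch of such an extension (names and sizes are hypothetical); lvm2 may round the increase up to whole slabs:

$ lvextend -L+100M vg/vpool0        # may be increased up to one full slab
$ lvextend --use-policies vg/vpool0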
2020-09-22 23:28:43 +02:00
Zdenek Kabelac
642ef54399 vdo: correct message about policy extend support
Policy extend is already supported for vdo pools as well,
so correct the error message.
2020-09-22 23:25:16 +02:00
Zdenek Kabelac
5bc66532c7 activation: use revert_lv on tree suspend failure
When the table reload fails during suspend() - we were only calling
plain resume() - and this will reload only those devices
which were left suspended, but will not try to restore
the metadata state according to lvm2's reverted metadata.
So if we were reloading a device tree - we restored
only the top-level LV, and the rest of the reverted device manipulation
was left alone and possibly mismatched what is in the committed
metadata.

FIXME: There are several cases where such a revert will likely not work
properly anyway, as some operations are currently handled in a single commit
while they need multiple commits, but it's a step towards better correctness.
At least we now catch these errors earlier.
2020-09-22 21:02:14 +02:00
Zdenek Kabelac
2b36542f41 wipe: dev_set_bytes resolves zeroing
Since dev_write_zeros() is just a subset of dev_set_bytes(),
use it directly and simplify the code.
2020-09-15 23:07:06 +02:00
Zdenek Kabelac
39198eb2ce lvcreate: add extra synchronization at error path
Put explicit udev synchronization before we try to deactivate devices.
2020-09-15 22:52:25 +02:00
Zdenek Kabelac
77fdc17d70 alloc: improve estimation of sufficient_pes_free
Metadata size was calculated correctly only for raids.

Fixes a crash during lvcreate when a thin-pool was created
on a VG where the remaining free space was only big enough to fit a single
metadata LV and not also its _pmspare.

Lvcreate crashed with this assert message:

lvcreate: metadata/pv_map.c:198: consume_pv_area: Assertion `to_go <= pva->count' failed.
Aborted (core dumped)

TODO: there is probably too large an overload of several alloc_handle
variables.

Reported-by: Wu Guanghao <wuguanghao3@huawei.com>
Reported-by: Zhiqiang Liu <liuzhiqiang26@huawei.com>
2020-09-11 21:51:24 +02:00
Zdenek Kabelac
9f78acfee9 thin: compensate metadata size by extra percent
When using --use-policy for automatic extension of a thin-pool,
the extension of the thin-pool's metadata itself can actually take
some extra space.
Since I'm not aware of an exact compensation formula, add just
1% extra to the calculated amount and hope it fits.

The wanted target is to always have a usable thin-pool that fits
below pool_metadata_min_threshold().
2020-09-11 21:42:37 +02:00
Zdenek Kabelac
b798554a20 lv_manip: even better rounding 2020-09-11 13:37:04 +02:00
Zdenek Kabelac
bc09803628 lv_manip: relocate check to proper function 2020-09-10 23:54:33 +02:00
Zdenek Kabelac
e7f5acdfa6 lvextend: improve percentage estimation
Correcting rounding rules for percentage evaluation.

Validate the supported range of percentages
(although ranges are already validated earlier on the code path).
2020-09-10 23:54:31 +02:00
Zdenek Kabelac
3e6bb77228 lv_manip: add synchronization points 2020-09-08 21:23:03 +02:00
Zdenek Kabelac
56c41b7522 cov: avoid duplicated assign 2020-09-01 17:57:50 +02:00
Zdenek Kabelac
fd96f1014b gcc: zero-sized array to flexible array C99
Switch the remaining zero-sized structs to flexible arrays to be C99
compliant.

These simple rules should apply:

- The incomplete array type must be the last element within the structure.
- There cannot be an array of structures that contain a flexible array member.
- Structures that contain a flexible array member cannot be used as a member of another structure.
- The structure must contain at least one named member in addition to the flexible array member.

Although some of the code pieces should still be improved.
2020-09-01 17:57:50 +02:00
Zdenek Kabelac
ff4827ffb1 lv_manip: get_default_region_size return uint32_t 2020-08-28 21:43:02 +02:00
Zdenek Kabelac
46d15b5e4d wipe_lv: close devices on error path
The device was kept open, preventing it from being deactivated and removed
on the error path.
2020-08-19 15:09:09 +02:00
Zdenek Kabelac
9b9bf8786f raid: no wiping when zeroing raid metadata device
Currently lvm2 is not wiping signatures when creating 'metadata' volumes
and raid _rmeta was the only exception - so make the behavior consistent
with other metadata devices and drop the wiping ATM.
Also drop some extra debug messages, since they are now more explanatory in the
wipe_lv() function.
Also note - although lvm2 now does not wipe signatures - the error
from such wiping used to actually be 'ignored' before wipe_lv()
started to return an error (with a recent commit), and raid creation
continued with an 'unzeroed' metadata device.

TODO: Several issues to resolve:

1. We may want to flip to wiping with all LVs (in that case we need to
support passing --yes & --force).

2. We may also want to clear the whole metadata device - however the current
function is also used for wiping e.g. the snapshot COW device, which
is likely not a good candidate for full device zeroing.
We may also need to think about better logic when the extent size
enforces very large LVs, while only a small portion of the LV is ever
being used.

3. Using TRIM instead of zeroing the metadata device might be worth
implementing.

2020-07-08 11:40:55 +02:00
Zdenek Kabelac
fe78cd4082 wipe_lv: always zero at least 4K
When zero_sectors was passed a value like 1 - we could zero only 1 sector.
Reinstate that we always zero at least a 4K block.
2020-07-08 11:12:54 +02:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid pollution of metadata with some 'garbage' content, or eventually
a leak of stale data in case a user wants to upload metadata somewhere,
ensure upon allocation that the metadata device is fully zeroed.

This behaviour may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0

TODO: add zeroing for extension of metadata volume.
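For illustration, a sketch of restoring the old behaviour for a single command via --config (the pool name is hypothetical):

$ lvcreate --config 'allocation { zero_metadata = 0 }' --type thin-pool -L1G -n pool0 vg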
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
edbc5a62b2 wipe_lv: make error a fatal event
A failure in wiping/zeroing stops the command.
If the user wants to avoid command abortion, they should use -Zn or -Wn
to avoid wiping.

Note: there is no easy way to distinguish which kind of failure has
happened - so it's safe to not proceed any further.
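For illustration, a sketch of skipping zeroing and wiping so a wipe failure cannot abort the command (names are hypothetical):

$ lvcreate -L1G -n lv0 -Zn -Wn vg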
2020-06-24 15:01:03 +02:00
Heinz Mauelshagen
04bba5ea42 lv{resize,extend,reduce}: also check for 2-legged raid4
Users can also convert 2-legged raid1 to raid4 thus causing 'Bus error'
on resize requests.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 14:02:31 +02:00
Heinz Mauelshagen
2cf0f90780 lv{resize,extend,reduce}: reject size change on 2-legged raid5*
Reject the size changing request to avoid a 'Bus error' and
display a hint to convert to more stripes.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 13:52:56 +02:00
David Teigland
240062a183 writecache: remove from an active lv 2020-06-10 12:13:31 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
David Teigland
81d0333067 writecache: allow removing wcorig lv
like removing corig
2020-02-21 12:41:52 -06:00
David Teigland
2a6078f961 writecache: fix splitcache when origin is raid 2020-02-04 16:12:09 -06:00
Zdenek Kabelac
336361b2f2 lv_manip: add extra check for existing origin_lv
clang: it's a supposedly impossible path to hit, as we should always
have origin_lv defined when running this path, but adding protection
isn't a big issue and makes this obvious to the analyzer.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
409362c127 lv_manip: add error handling for _reserve_area
Since _reserve_area() may fail due to an allocation failure,
add support to report this already reported failure upward.

FIXME: it's log_error() without causing direct command failure.
2020-02-04 17:22:06 +01:00
Zdenek Kabelac
08f36dd093 lvextend: fix resizing volumes of different segtype
When resizing 2 volumes, like a thin-pool and its metadata, and they
are of different types - the command was actually expecting
both LVs to be of the same segtype - and would throw an error in
case they are different.

This patch fixes this by setting a new segtype from the last segment of
the 2nd extended device.

It also fixes the possible 'percentage' extension setup that
might have been used for the 'primary' volume - while the 'secondary'
LV always goes with a direct size - as we do not support a 'percentage'
setup for them.

This mainly affects usage of thin-pool, where the extension of the
thin-pool data size may also lead to extension of the metadata size.
2019-11-11 22:44:25 +01:00
Zdenek Kabelac
2266a1863f lv_manip: add lv_uniq_rename_update
Add a function to rename an LV to either the passed name or, if
the name is already in use, generate a new lvol% name.
2019-10-21 12:14:15 +02:00
Zdenek Kabelac
ec85dfe0f8 cachevol: support removal of cachevol
Removal of a cachevol is equivalent to lvconvert --uncache
and works the same way as with a cachepool.
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
5938cde11b cache: single code for removal of cached volume
Use same routine for dropping cached LV for cachevol and cachepool.
2019-10-17 13:03:50 +02:00
Zdenek Kabelac
9969361b51 debug: missing trace 2019-10-17 13:03:50 +02:00
Zdenek Kabelac
201ffbd04a cachevol: use lv_cache_remove
Use same routine for dropping cache.
2019-10-14 15:20:25 +02:00
Zdenek Kabelac
6a9a4b4534 resize: continue change for getting vdo status before resize
Continue commit a98b77c164.
There needs to be an error reported when the status can't be obtained.
2019-10-04 17:31:55 +02:00
Zdenek Kabelac
a98b77c164 vdo: properly check percentage for resize
Avoid checking 'lv_is_active()', since special LV types do this
validation anyway when calling the _percent() function, and call it
ONLY when none of the special types is queried.

This restores support for VDO resize (as with support for
separate VDO pool activation, a plain query for lv_is_active()
is not working in this case).
2019-09-30 13:34:34 +02:00
David Teigland
26596ce7fa writecache: allow removing LV with attached writecache 2019-09-24 15:51:05 -05:00
David Teigland
56aadd7fe2 lvremove: remove attached cachevol with removed LV
When an LV is removed that has an attached cachevol,
also remove the cachevol LV.
2019-09-24 15:51:05 -05:00
David Teigland
27c3c1d7c8 writecache: display layout and role fields 2019-09-20 14:55:11 -05:00
David Teigland
5d3bced5ea lvconvert: detaching cachevol with missing PVs
. For dm-cache in writethrough, always allow splitcache,
  whether the cache is missing PVs or not.

. For dm-cache in writeback, if the cache is missing PVs,
  allow splitcache with force and yes.

. For dm-writecache, if the cache is missing PVs,
  allow splitcache with force and yes.
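For illustration, a sketch of the forced split when the cache is missing PVs (the LV name is hypothetical):

$ lvconvert --splitcache --force --yes vg/main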
2019-09-20 09:59:37 -05:00
Zdenek Kabelac
4b1dcc2eeb lv_manip: add synchronizations
New udev in rawhide seems to be 'dropping' udev rule operations for devices
that no longer exist - while this is 'probably' a bug - it's
revealing moments in lvm2 that likely should not run in a single
transaction, and we should wait for a cookie before submitting more work.

TODO: it seems more 'error' paths should always include synchronization
before starting to deactivate 'just activated' devices.
We should probably figure out some 'automatic' solution for this instead
of placing sync_local_dev_name() all over the place...
2019-08-26 15:32:19 +02:00
Zdenek Kabelac
c98e34e4d0 cache: improve vgremove loop
Support internal removal of the 'cache origin' volume - which we
do not normally expose to a user - however internal processing
loops may hit this condition (depending on the order of the LV list).

So when this operation is internally requested - we automatically
try to remove its 'holding' LV (the cache LV) - which will also
remove the origin.
2019-08-26 15:32:12 +02:00
David Teigland
ccd1386070 wipe_lv: initially open LV in writable mode
wipe_lv knows it's going to write the device, so it
can open rw from the start.  It was opening readonly,
and then dev_write needed to reopen it readwrite.
2019-04-26 14:49:27 -05:00
Zdenek Kabelac
fcec6691f0 thin: fix maintenance of _pmspare
When metadata grows, lvm2 may also need to extend the _pmspare volume.
2019-04-03 13:28:54 +02:00
Zdenek Kabelac
e27d027155 thin: resize metadata with data
When data is growing, also adapt the size of metadata,
as we get way too many reports from users doing huge growths of the
data portion while keeping metadata small and not using monitoring.

So to enhance the user experience, in case a user requests growth of a
thin-pool (without passing a PV list for the growth) - lvm2 will automatically
also grow the metadata part of the thin-pool (if possible).
2019-04-03 13:28:22 +02:00
David Teigland
85e68a8333 lvextend: refresh shared LV remotely using dlm/corosync
When lvextend extends an LV that is active with a shared
lock, use this as a signal that other hosts may also have
the LV active, with gfs2 mounted, and should have the LV
refreshed to reflect the new size.  Use the libdlmcontrol
run api, which uses dlm_controld/corosync to run an
lvchange --refresh command on other cluster nodes.
2019-03-21 12:38:20 -05:00
David Teigland
d369de8399 lvextend: allow on LV active with a shared lock
Detect when a shared lock exists, don't require the
normal exclusive lock, and allow the lvextend.
2019-03-21 12:38:20 -05:00
Zdenek Kabelac
677aa84be3 vdo: enable caching for vdopool LV and vdo LV
Allow using caching with VDO.
A user can either cache a single vdopool or
a vdo LV - the difference is where the caching is placed; it depends on the use-case
and it's up to the user to decide which kind of speed is expected.
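For illustration, a sketch of both variants using a cachevol (command form and names are illustrative assumptions):

$ lvconvert --type cache --cachevol fast vg/vpool0   # cache the vdopool
$ lvconvert --type cache --cachevol fast vg/vdo0     # or cache the VDO LV itself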
2019-03-20 14:38:31 +01:00
Zdenek Kabelac
0db22c5f81 lv_manip: insert remove layer skips pools
Fixing renaming of subLVs when removing and inserting layers - this
became visible when using stacked VDO pools.
2019-03-20 14:38:05 +01:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
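For illustration, a sketch of the two now-distinct forms (LV names are hypothetical):

$ lvconvert --type cache --cachepool cpool0 vg/main   # cache pool approach
$ lvconvert --type cache --cachevol fast vg/main      # cache vol approach (single standard LV)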
2019-02-27 08:52:34 -06:00
Zdenek Kabelac
d19e372795 cleanup: indent 2019-01-28 22:39:10 +01:00
Zdenek Kabelac
105a8edea1 lv_manip: better work with PERCENT_VG modifier with lvresize
Fixing recent commit 022ebb0cfe.
A resize already has a size that needs to be counted in,
otherwise an upsizing operation could turn into a size reduction.
2019-01-21 15:39:24 +01:00
Zdenek Kabelac
f3c52a515b vdo: enable dmeventd resize 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
a16d914d34 cleanup: better naming 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
08cabe9b83 vdo: allow resize of VDO and VDO pool volumes
Now with the newer VDO kvdo target we can start to use the standard mechanism
to enable resize of VDO volumes.

A VDO pool can be grown.

The virtual volume grows on top of the VDO pool when it is not big enough.
A reduced VDO LV calls discard for the reduced areas - this can
take a long time!

TODO: implement some pollable mechanism for out-of-lock TRIM.
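For illustration, a sketch of the now-possible resizes (names and sizes are hypothetical):

$ lvextend -L+10G vg/vpool0   # grow the VDO pool
$ lvextend -L+50G vg/vdo0     # grow the virtual VDO volume
$ lvreduce -L-20G vg/vdo0     # reduction discards the freed area - may take long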
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
bd6709cec6 vdo: size reduction requires VDO to be active
To be able to send discard to reduced areas - the VDO LV needs to
be active.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
f1ad4b0679 vdo: discard reduced area
Implement sending discard to reduced LV area.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
ca72d19691 vdo: estimate virtual size after resize 2019-01-21 12:53:16 +01:00
Zdenek Kabelac
022ebb0cfe lv_manip: better work with PERCENT_VG modifier
When using 'lvcreate -l100%VG' and there is a big disproportion between
the real available space and the requested setting - automatically fall back
to 100%FREE.

The difference can be seen when the VG is big and most space was already
allocated, so the requested 100%VG can end up (and by the spec for the % modifier
it's correct) as an LV with the size of 1% of the VG.  Usually this is not a big
problem - but in some cases - like cache-pool allocation, this
can result in a big difference for chunksize selection.

With this patch it more closely matches common-sense logic without
the need to reiterate too big changes in the lvm2 core ATM.

TODO: in the future there should be an allocator solving all allocations
in a single call.
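For illustration, a sketch of the changed behaviour (names are hypothetical):

$ lvcreate -l100%VG -n lv0 vg   # with most of the VG already allocated, now behaves like -l100%FREE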
2019-01-21 12:53:15 +01:00
Ming-Hung Tsai
859feb81e5 lvmanip: uninitialized members in struct pv_list (#10)
Scenario: Given an existing LV `lvol0`, I want to create another LV
on the PVs used by `lvol0`.

I use `build_parallel_areas_from_lv()` to obtain the `pv_list` of each segment.
However, the returned `pv_list` is not properly initialized, which causes
a segfault in subsequent operations.
2018-12-14 15:23:18 +01:00
Heinz Mauelshagen
dd5716ddf2 raid: fix (de)activation of RaidLVs with visible SubLVs
There's a small window during creation of a new RaidLV when
rmeta SubLVs are made visible to wipe them in order to prevent
erroneous discovery of stale RAID metadata.  In case a crash
prevents the SubLVs from being committed hidden after such
wiping, the RaidLV can still be activated with the SubLVs visible.
During deactivation though, a deadlock occurs because the visible
SubLVs are deactivated before the RaidLV.

The patch adds _check_raid_sublvs to the raid validation in merge.c,
an activation check to activate.c (paranoid, because the merge.c check
will prevent activation in case of visible SubLVs) and shares the
existing wiping function _clear_lvs in raid_manip.c moved to lv_manip.c
and renamed to activate_and_wipe_lvlist to remove code duplication.
Whilst on it, introduce activate_and_wipe_lv to share with
(lvconvert|lvchange).c.

Resolves: rhbz1633167
2018-12-11 16:35:34 +01:00
David Teigland
3ae5569570 Add dm-writecache support
dm-writecache is used like dm-cache with a standard LV
as the cache.

$ lvcreate -n main -L 128M -an foo /dev/loop0

$ lvcreate -n fast -L 32M -an foo /dev/pmem0

$ lvconvert --type writecache --cachepool fast foo/main

$ lvs -a foo -o+devices
  LV            VG  Attr       LSize   Origin        Devices
  [fast]        foo -wi-------  32.00m               /dev/pmem0(0)
  main          foo Cwi------- 128.00m [main_wcorig] main_wcorig(0)
  [main_wcorig] foo -wi------- 128.00m               /dev/loop0(0)

$ lvchange -ay foo/main

$ dmsetup table
foo-main_wcorig: 0 262144 linear 7:0 2048
foo-main: 0 262144 writecache p 253:4 253:3 4096 0
foo-fast: 0 65536 linear 259:0 2048

$ lvchange -an foo/main

$ lvconvert --splitcache foo/main

$ lvs -a foo -o+devices
  LV   VG  Attr       LSize   Devices
  fast foo -wi-------  32.00m /dev/pmem0(0)
  main foo -wi------- 128.00m /dev/loop0(0)
2018-11-06 14:18:41 -06:00
David Teigland
cac4a9743a Allow dm-cache cache device to be standard LV
If a single, standard LV is specified as the cache, use
it directly instead of converting it into a cache-pool
object with two separate LVs (for data and metadata).

With a single LV as the cache, lvm will use blocks at the
beginning for metadata, and the rest for data.  Separate
dm linear devices are set up to point at the metadata and
data areas of the LV.  These dm devs are given to the
dm-cache target to use.

The single LV cache cannot be resized without recreating it.

If the --poolmetadata option is used to specify an LV for
metadata, then a cache pool will be created (with separate
LVs for data and metadata.)

Usage:

$ lvcreate -n main -L 128M vg /dev/loop0

$ lvcreate -n fast -L 64M vg /dev/loop1

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  main vg -wi-a----- 128.00m linear /dev/loop0(0)
  fast vg -wi-a-----  64.00m linear /dev/loop1(0)

$ lvconvert --type cache --cachepool fast vg/main

$ lvs -a vg
  LV           VG Attr       LSize   Origin       Pool  Type   Devices
  [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
  main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
  [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)

$ lvchange -ay vg/main

$ dmsetup ls
vg-fast_cdata   (253:4)
vg-fast_cmeta   (253:5)
vg-main_corig   (253:6)
vg-main (253:24)
vg-fast (253:3)

$ dmsetup table
vg-fast_cdata: 0 98304 linear 253:3 32768
vg-fast_cmeta: 0 32768 linear 253:3 0
vg-main_corig: 0 262144 linear 7:0 2048
vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
vg-fast: 0 131072 linear 7:1 2048

$ lvchange -an vg/main

$ lvconvert --splitcache vg/main

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  fast vg -wi-------  64.00m linear /dev/loop1(0)
  main vg -wi------- 128.00m linear /dev/loop0(0)
2018-11-06 13:44:54 -06:00
Zdenek Kabelac
6945bbdbc6 lvresize: vdo support
Unsupported ATM.

Wait until the VDO kernel target starts to use the updated resize sequence:
LOAD, SUSPEND, RESUME.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
493ffe7a0f lv_manip: layout and role support for vdo segment 2018-07-09 15:28:35 +02:00
Zdenek Kabelac
4b7a57c9ed vdo: with created names use vpool
When a user creates a vdo-pool - use a different automatic name.
So unlike traditional LVs using lvol0, lvol1,
use vpool0, vpool1...

TODO: apply similar naming for thin-pool & cache-pool...
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
a8f84f7801 vdo: introduce segment types and manip functions
Core functionality introducing lvm VDO support.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
e9d1f676b3 allocation: add check for passing log allocation
Updates previous commit.
2018-07-09 00:59:34 +02:00
Zdenek Kabelac
6d1c983122 cleanup: use last_seg
More readable code.
2018-07-09 00:23:35 +02:00
Zdenek Kabelac
b697aa9646 allocator: fix thin-pool allocation
When allocating a thin-pool with more than 1 device - try to
allocate the 'metadataLV' by reusing the log-type allocation used for mirror LVs.
It should then naturally be placed on a different device than the 'dataLV'.

However, due to the somewhat hard to follow allocation logic code,
allocation was rejected in cases where there was not
enough space for data or metadata on a single PV; thus to succeed,
usage of segments was mandatory.

While a user may use:

allocation/thin_pool_metadata_require_separate_pvs=1

to enforce separate meta and data LVs - with default settings this is not
enabled, thus segment allocation is meant to work.
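For illustration, a sketch of enforcing the separation for a single command (the pool name is hypothetical):

$ lvcreate --config 'allocation { thin_pool_metadata_require_separate_pvs = 1 }' --type thin-pool -L1G -n pool0 vg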

NOTE:

As already said - the original intention of this whole 'if()' is unclear,
so try to split this test into multiple simpler tests that are more readable.

TODO: more validation.
2018-07-09 00:19:30 +02:00
Zdenek Kabelac
f2b856c994 lv_manip: do not check extents for any virtual target
Allow creation of any virtual segment type with just --virtualsize
specified, without any real extent size given.

TODO: likely --type error,zero might later be enhanced to use -V
(along with -L) - but since those targets do not allocate real
space, supporting -V makes sense with them.
2018-07-02 10:24:23 +02:00
Zdenek Kabelac
2bb9627d01 lv_manip: add name of failing LV into error message 2018-07-02 10:24:23 +02:00
Zdenek Kabelac
cea88a9e4e lv_manip: use vgmem pool
Switch to vgmem pool for allocation associated with modification
of particular VG.
2018-06-25 15:07:55 +02:00
Zdenek Kabelac
9c0d92d957 lv_manip: add new internal api function 2018-06-25 15:07:55 +02:00
Zdenek Kabelac
106ee05ba0 lv_manip: add extra internal error
Catch error early, when trying to store data into non-allocated area.
2018-06-22 23:37:02 +02:00
David Teigland
8eab37593e Add cmd arg to more functions
so that it can be used in the filter code
2018-06-15 11:03:55 -05:00
Joe Thornber
d5da55ed85 device_mapper: remove dbg_malloc.
I wrote dbg_malloc before we had valgrind.  These days there's just
no need.
2018-06-08 13:40:53 +01:00
David Teigland
18259d5559 Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-07 16:17:04 +01:00
David Teigland
3e781ea446 Remove clvmd and associated code
More code reduction and simplification can follow.
2018-06-05 11:09:13 -05:00
David Teigland
b6f0f20da2 lvmlockd: primarily use vg_is_shared
to check if a vg uses an lvmlockd lock_type,
instead of the equivalent but longer is_lockd_type.
2018-06-01 13:15:22 -05:00
Joe Thornber
dbba1e9b93 Merge branch 'master' into 2018-05-11-fork-libdm 2018-06-01 13:04:12 +01:00
David Teigland
b9c1cef817 lvmlockd: fix reverting new lv in error path
The wrong name was being used to free the LV lock
in lvmlockd in the error exit path.
2018-05-31 15:35:48 -05:00
David Teigland
c516321325 lvmlockd: enable lvcreate of new LV plus existing cache pool
In this command, lvcreate creates a new LV and then combines
it with an existing cache pool, producing a cache LV.  This
command was previously not allowed in a shared VG.
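For illustration, a sketch of the now-permitted command in a shared VG, assuming an existing cache pool cpool0 (names are hypothetical):

$ lvcreate --type cache -L1G -n lv1 --cachepool cpool0 vg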
2018-05-30 15:24:24 -05:00
David Teigland
948f2d9979 lvmlockd: enable lvcreate of thin pool and thin lv in one command
Previously, thin pools and thin lvs needed to be
created with separate commands; now the combined command
is permitted.
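For illustration, a sketch of the combined command (names and sizes are hypothetical):

$ lvcreate -L1G -V10G -T vg/pool0 -n thin1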
2018-05-30 09:25:45 -05:00
Joe Thornber
7f97c7ea9a build: Don't generate symlinks in include/ dir
As we start refactoring the code to break dependencies (see doc/refactoring.txt),
I want us to use full paths in the includes (eg, #include "base/data-struct/list.h").
This makes it more obvious when we're breaking abstraction boundaries, eg, including a file in
metadata/ from base/
2018-05-14 10:30:20 +01:00
Joe Thornber
39ce38eb88 label/lv_manip: squash some warnings 2018-05-10 15:14:39 +01:00
David Teigland
9a5bd01b0c io: replace dev_set with bcache equivalents 2018-05-09 11:29:52 -05:00
David Teigland
c1cd18f21e Remove lvm1 and pool disk formats
There are likely more bits of code that can be removed,
e.g. lvm1/pool-specific bits of code that were identified
using FMT flags.

The vgconvert command can likely be reduced further.

The lvm1-specific config settings should probably have
some other fields set for proper deprecation.
2018-04-30 16:55:02 -05:00
Zdenek Kabelac
c492fbb51c debug: more explanatory error message 2018-04-23 22:42:18 +02:00
Zdenek Kabelac
9731d48691 cleanup: enhance debug message 2018-04-20 12:17:01 +02:00
Zdenek Kabelac
27a1a0e5c0 cleanup: reorder condition
There is no point in waiting for sync for a non-locally active LV.
2018-04-20 12:17:01 +02:00
Zdenek Kabelac
05f954ee9b mirror: checking for mirror segtype
Check more correctly for the mirror segtype here instead of
the mirrored one, which can also be 'raid'.
2018-04-20 12:16:14 +02:00
Zdenek Kabelac
66400d003d mirror: fix region_size for clustered VG
When adjusting the region size for a clustered VG it always needs to fit
2 full bitsets into 1MB due to old limits of CPG.

This is a relatively big amount of bits, but we still have the limitation
that the region size must fit into 32 bits (0x8000000).

So for too big mirrors this operation needs to fail - so whenever the
function now returns 0, it means we can't find a matching region_size.

Since returning 0 is now an 'error', we also need to pass a proper region_size
when creating the pvmove mirror.
2018-04-20 12:13:48 +02:00
Zdenek Kabelac
91965af9b1 mirror: improve mirror log size estimation
Drop the mirrored mirror log limitation that applies only in a very limited
use-case; mirrored mirror logs are deprecated anyway.

So the 'disk' mirror log selects the correct minimal size, and a
bigger size is only enforced with a real mirrored mirror log.

Also for a mirrored mirror log we allow using a 'smaller' region size if needed,
so if a user uses a 1G region size, we still keep a small mirror log
with a much smaller region size in this case when needed.

Also the mirror log extent calculation now properly detects errors
with too big mirrors, where previously a trimmed uint32_t was applied
unintentionally.
2018-04-20 12:11:42 +02:00
Zdenek Kabelac
73189170f5 mirror: fix 32bit size calculation
On a 32bit arch size_t remains 4 bytes wide - so it can't
hold the correct result of a multiplication of 32bit numbers.
2018-04-20 12:08:57 +02:00
Heinz Mauelshagen
d68d71013f lvcreate: remove RaidLV on creation failure
In case a newly created RaidLV is blacklisted using the config
"activation { volume_list = [ ... ] }" (i.e. its SubLVs stay inactive),
the metadata SubLVs can't get wiped, thus failing the creation.

As a result, the RaidLV together with its SubLVs
is left behind in an inconsistent state.

Fix by removing the RaidLV and providing a hint about volume_list reasoning.

Resolves: rhbz1161347
2018-03-16 15:57:53 +01:00
Zdenek Kabelac
ee37838b11 cache: fix lock usage for cache conversion
Just like with lvcreate, this lvconvert case also needs to properly
check which LV actually holds the lock for the cached origin - as it might
be e.g. a thin-pool tdata subLV.
2018-03-08 10:39:47 +01:00
Zdenek Kabelac
6134a71a90 lvconvert: support for conversion with active component devices
If component devices can be activated alone, ensure they are not breaking
common commands.

TODO: most likely this is not a definitive list of all needed checks
and more will come later.
2018-03-06 15:42:07 +01:00
Zdenek Kabelac
f92b6f9930 lvremove: ensure no subLV is active
Since component activation is going to be enabled, ensure
no subLV is active when we deactivate the LV.
2018-03-06 15:42:07 +01:00
Zdenek Kabelac
73e93ef5e5 lvremove: validate removed component LV is not active
This is the 'last' place where an LV is present in the metadata.
Any removed device should not be left active in the dm table.
So this check is an extra validation protection to capture any
forgotten deactivation (adding 1 extra ioctl into the lvremove path).
2018-03-06 15:42:07 +01:00
Zdenek Kabelac
6481471c9d debug: update comment 2018-03-06 15:40:34 +01:00
Zdenek Kabelac
f04abd1f8a lvremove: drop duplicate check for active LV
Since this code branch has already tested that the LV is active,
avoid repeating the same query.
2018-03-06 15:40:31 +01:00
Zdenek Kabelac
b2f1254c14 raid: move VG update after archiving happened
Update of LV le_count needs to happen after archive().
2018-03-06 15:38:15 +01:00
Zdenek Kabelac
406d6de651 cleanup: indent 2018-02-28 21:15:55 +01:00
Zdenek Kabelac
16c209c613 cleanup: use lv_is_used_cache_pool
Use lv_is_used_cache_pool() to simplify the code.
The function was introduced later and this code missed using it.
2018-02-28 21:15:55 +01:00
Zdenek Kabelac
052f28746d lvresize: check external origin with new size
Instead of checking against the existing size of the external origin LV,
correctly use the new 'wanted' size of this LV to check whether it fits
the limitation requirements of the older thin-pool target.

Otherwise the code started the resize, updated the metadata and
just failed during 'resize' in case the LV was active. For an
inactive LV the operation could have actually passed.
2018-02-28 21:15:55 +01:00
Zdenek Kabelac
b09ea3b6f7 lvremove: drop unneeded check
Checking here for cache_pool is not necessary and in effect
the check is not even right - since there are internal
states that do allow activating such an LV.
2018-02-28 21:08:40 +01:00
Zdenek Kabelac
bc1adc32cb lv_manip: enhance for_each_sub_lv
Fix missing 'externalLV' traversal for thins with external origins.

Replace the extra for_each_sub_lv_except_pools() with better
internal logic allowing selectively cutting off the processed subLV tree.

Extend the error code for function 'fn()': when it returns -1 it will
stop further tree scanning for the given LV.

Also simplify the code a bit to have only one place that
is calling 'fn()' and use a level counter to know
the depth of traversal.

Update the renaming traversal to skip trees for pools
and external origins.
2018-02-28 21:08:38 +01:00
Alasdair G Kergon
bacc942333 allocation: Avoid exceeding array bounds in allocation tag code
If _limit_to_one_area_per_tag() changes nothing it writes beyond
the array.
2018-01-10 15:48:03 +00:00
Alasdair G Kergon
e4805e4883 device: categorise block i/o
Introduce enum dev_io_reason to categorise block device I/O
in debug messages so it's obvious what it is for.

DEV_IO_SIGNATURES   /* Scanning device signatures */
DEV_IO_LABEL        /* LVM PV disk label */
DEV_IO_MDA_HEADER   /* Text format metadata area header */
DEV_IO_MDA_CONTENT  /* Text format metadata area content */
DEV_IO_FMT1         /* Original LVM1 metadata format */
DEV_IO_POOL         /* Pool metadata format */
DEV_IO_LV           /* Content written to an LV */
DEV_IO_LOG          /* Logging messages */
2017-12-04 23:45:26 +00:00
Heinz Mauelshagen
4daad1cf11 lv_manip: allow extension on --nosync raid lv
If the recovery of the replaced leg(s) of a RaidLV created without
initial resynchronization (i.e. "lvcreate --nosync ...") got
interrupted, it can't be extended because of the < 100% sync rate.
2017-12-01 18:38:18 +01:00
Zdenek Kabelac
c489dd2e17 pvmove: add missing segment merging
When pvmove is finished and the metadata is updated, the code missed
merging possibly mergeable segments - so add an explicit merging
call after the pvmoved volumes are unlocked.

This avoids weird results where e.g. lvs could report
non-matching segments, since the metadata read does silent segment
merging while the dm table left after pvmove still preserved
non-merged segments.
2017-12-01 12:19:09 +01:00
Zdenek Kabelac
fbd8b456db pvmove: move code from tools to lib
Move the code manipulating locking flags into the /lib part of lvm.
2017-12-01 12:18:32 +01:00
Zdenek Kabelac
eab9097b46 layers: collect only lock holding LVs 2017-11-15 12:11:33 +01:00
Zdenek Kabelac
52cee9dd83 lvremove: for unused cache deactivate sublv 2017-11-11 00:59:19 +01:00
Zdenek Kabelac
55b8204ca3 reload: do not take backup with suspended devices
If the suspend/resume sequence would leave some device suspended
for a possible later resume, a backup cannot be taken (the fs holding backups
could still be frozen in critical_section()).
2017-11-11 00:58:11 +01:00
Zdenek Kabelac
7a394575fb cleanup: use segtype_is_raid_with_meta
Replace with common macro.
2017-11-01 00:59:22 +01:00
Zdenek Kabelac
373372c8ab lv_manip: hide layered LV temporarily
Since vg_validate() now rejects LVs without segments and
insert_layer_for_segments_on_pv() gets a just-created
'layer_lv' without a segment, it needs to be hidden
from vg->lvs during processing of _align_segment_boundary_to_pe_range(),
as this function calls lv_validate() and now requires
the vg to be consistent.  The LV is then put back into vg->lvs.
2017-11-01 00:55:24 +01:00
Zdenek Kabelac
2b6391538c raid: setup LV size earlier
The new validation code, which requires not storing an LV with no size
(no segments), revealed that this size setup code needs to happen
earlier.
2017-10-30 17:23:56 +01:00
Zdenek Kabelac
83d5db056b lvreduce: check LV has segment
Before accessing the content make sure the LV has a segment.
This can be needed when code removes an LV without segments
(i.e. on some error path).
2017-10-30 14:39:16 +01:00
Zdenek Kabelac
63c50ced89 snapshot: relocate common code validation for snapshot origin
Since both lvcreate and lvconvert need to check for the same
type of allowed origin for a snapshot - move the code into
a single function.

This way we also fix several inconsistencies where a snapshot
has been allowed by mistake through either the lvcreate or
lvconvert path.
2017-10-27 17:07:42 +02:00
Zdenek Kabelac
0e7edd1d24 snapshot: improve validation
Do not allow taking a snapshot of a mirror/raid leg or log or metadata LV.
This was actually never supported, but a user was able to create it,
and this put the device stack into a hardly fixable state (needs manual work).

This prevents such creation from passing.

Also improve validation when recreating the snapshot volume type
from origin and COW volumes.
2017-10-25 21:58:01 +02:00
Zdenek Kabelac
d6fcab900b lvextend: detect stacked cache lv used for thinpool
Ensure that resizing a cacheLV is not attempted until full support is
added.
2017-10-23 12:00:43 +02:00
Alasdair G Kergon
f1cc5b12fd tidy: Add missing underscores to statics. 2017-10-18 15:58:13 +01:00
David Teigland
6ac1e04b3a replicator: remove the code
It has not been used in a long time and is not
expected to be used further.
2017-10-13 16:20:42 -05:00
Heinz Mauelshagen
cf13a30eaa lvcreate: allow 100%FREE creation of "--type mirror" to work
Fixes the following case with 3 PVs and a 3-leg "mirror" LV:

# lvcreate -l100%FREE --type mirror -m2 vg3
  Insufficient free space for log allocation for logical volume .
  Unable to allocate extents for mirror log.

Related: rhbz1269533
2017-10-12 17:43:24 +02:00
Heinz Mauelshagen
5f13e33d54 lvcreate: fix region size on striped RaidLVs
Creating striped RaidLVs with lv size not divisible by region size
caused the region size to be adjusted:

# lvcreate   --type raid5 -n region_check.32.00m_3 -i 3 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.01 GiB(258 extents).
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Using reduced mirror region size of 8.00 MiB
  Logical volume region_check.32.00m_3 created.

Fix by not imposing "mirror" constraints on "raid".

Resolves: rhbz1404007
2017-10-09 14:35:06 +02:00
Zdenek Kabelac
4ef6cfc882 tidy: else after continue
Similarly to 'else' after 'return', unindent the whole block
for better readability of the code.
2017-07-20 11:18:29 +02:00
Zdenek Kabelac
0bf836aa14 tidy: prefer not using else after return
clang-tidy: avoid using 'else' after return - it gives more readable code
and also saves an indentation level.
2017-07-20 11:18:29 +02:00
Eric Ren
4c94371005 comment: update
Use 'is' for both forms.
2017-07-10 14:58:01 +02:00
David Teigland
3797f47ecf lvmlockd: fix revert in lvcreate
If the activation step in lvcreate fails (e.g. the specified
minor number is already used), then the lvcreate is reverted,
but the LV lock in lvmlockd was not being unlocked or properly
freed.
2017-07-07 14:42:25 -05:00
Zdenek Kabelac
e9c60f874e coverity: extra check for find_pool_seg
find_pool_seg() may return NULL in some internal error states.
Handle it explicitly.
2017-06-27 12:15:15 +02:00
Zdenek Kabelac
41c10034aa debug: show message only when origin_only was set 2017-06-22 20:17:20 +02:00
Zdenek Kabelac
e3f63693a4 lvresize: support passing --yes to fsadm
Since fsadm now needs --yes to pass prompting operations,
we need to pass --yes from  lvresize to fsadm.
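For illustration, a sketch of a resize that passes --yes down to fsadm (names are hypothetical):

$ lvresize --yes --resizefs -L+1G vg/lv0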
2017-06-21 14:03:29 +02:00
Zdenek Kabelac
1ea41b6d48 activation: fix usage of origin_only
When the lock-holding LV differs from the actually requested locked LV,
we drop the origin_only flag as it has no use - it'd be applied
on a completely different LV.

Example of the problem:

A raid LV is the thin-pool _tdata LV.
The raid runs origin_only locking on the stacked device.
The thinLV is discovered as the lock holder.
The whole origin_only operation is then applied only on the thinLV,
changing the meaning of the whole operation.

NOTE: this patch does not change anything for LVs that are
already top-level lock-holding LVs (i.e. thinLVs, snapshots/origins).
2017-06-20 18:23:24 +02:00
Heinz Mauelshagen
83cdba75bd mirror/raid: display adjusted region size with units
Display adjusted region size in units (e.g. "4.00 MiB") rather than sectors.
2017-04-20 20:42:21 +02:00
Zdenek Kabelac
3018cdcaa7 fsadm: support configurable full path
Just like with other tools lvm2 uses, allow defining a
fully configurable path.

The default is selected as $PREFIX/sbin/fsadm.
2017-04-12 21:34:08 +02:00