mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

492 Commits

Author SHA1 Message Date
Zdenek Kabelac
04fbffb116 label: cache dm device list
Since we check for present DM devices - cache the result for
further checks of such a device's presence.

lvm2 uses the cached result for label scan, but also when
it tries to activate or deactivate an LV - however only the simple
target 'striped' is reasonably supported.

Use disable_dm_devs to be able to control whether lv_info()
gets cached or uncached results.

TODO: support more types, however this is getting very complicated.
2021-12-20 16:13:28 +01:00
Zdenek Kabelac
0d67bc96fd activate: add get_device_list
Add function get_device_list() to get the list of active DM devices.
Handled through the new dm_task_get_device_list().
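
For illustration only - the device list obtained this way roughly matches
what dmsetup reports for active DM devices:

$ dmsetup ls         # names of active DM devices
$ dmsetup info -c    # the same set with open count, target count and UUID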
2021-12-20 16:13:28 +01:00
Zdenek Kabelac
65236ee722 activate: device_is_usable
Move checking of a usable UUID into a separate function
and also pass in the cmd context.
2021-12-20 16:13:28 +01:00
Zdenek Kabelac
47ac2659d5 activate: cache driver_version result 2021-12-20 16:13:28 +01:00
Zdenek Kabelac
a8ee13900d cov: initialize attr 2021-09-13 12:34:41 +02:00
Zdenek Kabelac
b7edda8a98 cov: guard index
The analyzer wants explicit protection against underflowing the index.
2021-07-28 00:49:28 +02:00
Zdenek Kabelac
d38fdb25e4 thin: fix component detection of external origin
When checking for active components of a thinLV with external origin,
we need to check whether the external origin isn't already active.
For this, however, we need to use a layered check for the -real device.
2021-07-14 12:56:08 +02:00
Zdenek Kabelac
b725b5ea6e vdo: fix preload of kvdo
Commit 5bf1dba9eb broke the load of the kvdo
kernel module - correct it by loading kvdo instead of trying dm-vdo.
2021-05-26 16:12:20 +02:00
Zdenek Kabelac
79d8d06217 raid: move non dm functions from DEVMAPPER ifdef
When lvm is compiled without device-mapper, these functions
do not need kernel support, so move them out of the ifdef DEVMAPPER
sections.
2021-03-19 23:20:23 +01:00
Zdenek Kabelac
a9b4acd511 dev_manager: add lv_raid_status
Just like with other segtypes, use this function to get the whole
raid status info available in a single ioctl call.

It also nicely simplifies reading the percentage info about
the in_sync portion of a raid volume.

TODO: drop use of calls other than lv_raid_status,
since all such calls could already use the status - so they just
add unnecessary duplication.
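
For illustration (DM device name hypothetical) - the single ioctl corresponds
to the dm-raid status line reported by:

$ dmsetup status vg-rlv
  # one 'raid' line carrying the raid type, per-device health characters,
  # the in-sync ratio (cur/total), the sync action and the mismatch count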
2021-03-18 18:34:57 +01:00
Zdenek Kabelac
e5a600860c dev_manager: status check with info check included
Reduce the ioctl count and avoid a separate info check
when we can get the same info from the status ioctl.

When dev_manager calls return 0, an 'exists' value of 0
means the reason for the failure is a missing device in the table.
In such a case we avoid a stack trace.

Swap the flush parameter of the vdo status function
to match the thin pool status.
2021-03-18 18:34:57 +01:00
Zdenek Kabelac
a3bb8f2ec1 activation: use interruptible_usleep
Support interruption while waiting on device close.
2021-03-14 16:34:38 +01:00
Zdenek Kabelac
127c2fc6e2 lv_check_not_in_use: correct check
Since lv_info() may return 0 without setting the info struct,
make the test correct and even more readable.
2021-03-10 23:32:12 +01:00
Zdenek Kabelac
94712e3233 cov: defined flv 2021-03-10 01:35:02 +01:00
Zdenek Kabelac
64447e9d9b cleanup: move code
Just evaluate later in the code path.
2021-03-08 15:43:27 +01:00
Zdenek Kabelac
dceef4709d deactivation: reduce ioctl count
When an LV is deactivated, we check for presence, and later
for some LV types also whether it is in use.

We can however do this check in one step for them and remove an extra ioctl.

Add return value '2' to lv_check_not_in_use() to recognize that the LV is not
present.

Existing users were just testing for 0, so no change for them.
2021-03-08 15:30:18 +01:00
David Teigland
e9d10f3711 filters: better message for excluding LV
Make the generic "device is not usable" message from filter-usable
more specific in case the device is not usable because it's an LV.
(i.e. when scan_lvs=0)
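
For reference, the excluded-LV case corresponds to this lvm.conf setting
(shown as a sketch):

  devices {
      # LVs are not scanned for PV labels, so a PV stacked on an LV
      # is rejected by the usable-device filter with this clearer message
      scan_lvs = 0
  }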
2021-03-03 12:07:57 -06:00
Zdenek Kabelac
51c83f1483 lvcreate: use lv_passes_readonly_filter
Check whether the created LV is going to be activated read-only,
because such an LV cannot be zeroed (equivalent to using
option '-pr').
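
For illustration (VG/LV names hypothetical), an LV ends up read-only here
either explicitly or via configuration:

$ lvcreate -p r -L1G -n rolv vg
  # or in lvm.conf:  activation { read_only_volume_list = [ "vg/rolv" ] }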
2021-02-02 21:23:39 +01:00
David Teigland
020d1edaa0 writecache: disallow partial or degraded activation
when either main or fast lvs are incomplete
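
For context (VG/LV names hypothetical) - the activation modes now refused
for an incomplete writecache stack are the ones requested by e.g.:

$ lvchange -ay --activationmode degraded vg/main
$ vgchange -ay --activationmode partial vg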
2020-10-26 15:48:58 -05:00
Zdenek Kabelac
0c89c5a40f debug: update debug message 2020-09-29 10:43:56 +02:00
Zdenek Kabelac
bd0d4de4e2 active: fix compilation without devmapper
Better support for compilation without device-mapper.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
4de6f58085 thin: use lv_status_thin and lv_status_thin_pool
Introduce structures lv_status_thin_pool and
lv_status_thin (a pair to lv_status_cache, lv_status_vdo).

Convert lv_thin_percent() -> lv_thin_status()
and lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
lv_thin_pool_status().

This way the function user can see not only percentages, but also
other important status info about the thin-pool.

TODO:
This patch tries to not change too many other things,
but pool_below_threshold() now uses the new thin-pool info to return
failure if the thin-pool cannot actually be modified.
This should be handled separately in a better way.
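
For illustration (DM device name hypothetical) - the extra info comes from
the single thin-pool status line:

$ dmsetup status vg-pool-tpool
  # one 'thin-pool' line carrying the transaction id, used/total metadata
  # blocks, used/total data blocks, rw mode and the out-of-space policy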
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
af5f29c7e2 activation: move locking of critical section
Move the beginning of the 'suspending' critical section closer to _lv_suspend_lv()
for better correctness of error paths.
2020-09-29 10:43:56 +02:00
Zdenek Kabelac
5bc66532c7 activation: use revert_lv on tree suspend failure
When the table reload fails during suspend() - we were only calling
plain resume() - and this will reload only those devices
which were left suspended, but will not try to restore
the metadata state according to lvm2's reverted metadata.
So if we were reloading a device tree - we restored
only the top-level LV, and the rest of the reverted device manipulation
was left alone and possibly mismatched what is in the committed
metadata.

FIXME: There are several cases where such a revert will likely not work
properly anyway, as some operations are currently handled in a single commit
while they need multiple commits, but it's a step towards better correctness.
At least we now catch these errors earlier.
2020-09-22 21:02:14 +02:00
Zdenek Kabelac
a8ea1817ab Revert "raid: do not enforce flushing of raids when it is not required"
This reverts commit ce5ea07411.
More thinking needed.
2020-09-09 00:58:32 +02:00
Zdenek Kabelac
ce5ea07411 raid: do not enforce flushing of raids when it is not required
This is probably a somewhat experimental patch - but when i.e. a raid device
is just extended, there should not be a technical need for a flush,
unless the target would strictly need it.  It should allow faster
processing of the lvm command, not being blocked by a possibly long flush.
2020-09-08 21:23:03 +02:00
Zdenek Kabelac
dbb19f6ace cleanup: matching declaration order
Cosmetic
2020-09-01 17:57:50 +02:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
Zdenek Kabelac
80b2de9e6a mirror: fix leg splitting
Enhance lv_info with lv_info_with_name_check.
This 'variant' not only checks the existence of the UUID in the DM table,
but also compares its DM name with the expected LV name.
Otherwise activation may 'skip' the activation-with-rename in case the
DM UUID already exists, just the device has a different name.

This change makes it fairly easier to manipulate e.g. a detached mirror
leg, which ATM is using the same UUID - just the LV name has been changed.

The code used before was not able to run 'activation' (and do a rename) and just
skipped the call. So the code used to do a workaround and 'tried'
to deactivate such an LV first - this however worked only in the non-clvmd case,
as the cluster was not holding the lock for the deactivated LV.

With this extended lv_info, the code will run 'activation' and will
synchronize the name to match the expected LV name.

The patch extends _lv_info() with a new parameter 'with_name_check',
which is later translated into the 'name_check' argument for
_info_run(), which in the case of a name mismatch evaluates the
check as if the device did not exist.

Such a call is only used in one place, _lv_activate(), which then
lets the activation run.  All other invocations of _info()
are left intact.

TODO: fix mirror table manipulation (and raid)....
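
The operation this makes reliable (VG/LV names hypothetical):

$ lvconvert --splitmirrors 1 --name split vg/mlv
  # the detached leg initially keeps the old DM UUID, so activation
  # must rename the existing DM device instead of skipping it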
2019-10-31 15:31:30 +01:00
Zdenek Kabelac
855b16ce14 snapshot: fix checking of merged thin volume
When merging of a thin snapshot is taking place, the origin target will
be of thin type.
2019-10-26 00:49:16 +02:00
Zdenek Kabelac
5c0264d689 vdo: restore monitoring of vdo pool
After the switch to the layered -vpool name, monitoring needs to watch the proper device.
2019-09-30 13:34:34 +02:00
Zdenek Kabelac
c813db8fc2 vdo: deactivate forgotten vdo pool
If the linear mapping is lost (for whatever reason, i.e. the
test suite forcibly runs 'dmsetup remove' on the linear LV),
lvm2 had a hard time figuring out how to deactivate such a DM table.

So add a function which, in the case of an inactive VDO pool LV, checks if
the pool is actually still active (the -vpool device is present) and
has open count == 0.  In this case deactivation is allowed
to continue and clean up the DM table.
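
For illustration (names hypothetical), the situation handled is roughly:

$ dmsetup remove vg-vlv    # VDO LV mapping forcibly removed outside lvm
$ vgchange -an vg          # deactivation can now detect the leftover
                           # -vpool device and clean it up if unused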
2019-09-30 13:34:34 +02:00
Zdenek Kabelac
6612d8dd5e vdo: enhance activation with layer -vpool
Enhance the 'activation' experience for a VDO pool to more closely match
what happens for thin-pools, where we do use a 'fake' LV to keep the pool
running even when no thinLVs are active. This gives the user a choice
whether to keep the pool running (without a possibly lengthy
activation/deactivation process).

As we do plan to support multiple VDO LVs mapped into a single VDO,
we want to give the user the same experience and 'use-pattern' as with thin-pools.

This patch gives the option to activate the VDO pool only, without activating
the VDO LV.

Also, due to the 'fake' layering LV, usage of the VDO pool is protected from
commands like 'mkfs' which require exclusive access to the volume -
such access is no longer possible.

Note: the VDO pool contains 1024 initial sectors as an 'empty' header - such
a header is also exposed in the layered LV (as a read-only LV).
For blkid we are identified as an LV with a UUID suffix - thus a private DM
device of lvm2 - so we do not need to store any extra info in this
header space (aka zero is good enough).
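
For illustration (VG/LV names hypothetical):

$ lvchange -ay vg/vpool    # activate only the VDO pool
$ lvchange -ay vg/vlv      # activate a VDO LV on top of it when needed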
2019-09-17 13:17:19 +02:00
Zdenek Kabelac
1eeb2fa3f6 dev_manager: add dev_manager_remove_dm_major_minor
Move the DM usage into the dev_manager.c source file.
Also convert the STATUS to an INFO ioctl - as that's enough
to obtain the UUID - this also avoids issuing an unwanted flush on the
DM device being checked for being mpath.
2019-03-20 14:37:10 +01:00
Zdenek Kabelac
e689bfb5d5 vdo: minor API cleanup
Since parse_vdo_pool_status() has become part of the vdo_manip API,
and there will be no 'dm' matching status parser,
the API can be simplified and closely match the thin API here.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
3d367f3348 vdo: add simple wrapper for getting pool percentage
Just like with e.g. thins, provide a simple function for
getting the percentage of VDO pool usage (uses the existing
status function).
2019-01-21 12:53:16 +01:00
Heinz Mauelshagen
dd5716ddf2 raid: fix (de)activation of RaidLVs with visible SubLVs
There's a small window during creation of a new RaidLV when
rmeta SubLVs are made visible to wipe them in order to prevent
erroneous discovery of stale RAID metadata.  In case a crash
prevents the SubLVs from being committed hidden after such
wiping, the RaidLV can still be activated with the SubLVs visible.
During deactivation though, a deadlock occurs because the visible
SubLVs are deactivated before the RaidLV.

The patch adds _check_raid_sublvs to the raid validation in merge.c,
an activation check to activate.c (paranoid, because the merge.c check
will prevent activation in case of visible SubLVs) and shares the
existing wiping function _clear_lvs in raid_manip.c moved to lv_manip.c
and renamed to activate_and_wipe_lvlist to remove code duplication.
Whilst on it, introduce activate_and_wipe_lv to share with
(lvconvert|lvchange).c.

Resolves: rhbz1633167
2018-12-11 16:35:34 +01:00
Zdenek Kabelac
0d61a17152 gcc: avoid shadowing activate_lv
Function activate_lv() is already declared, avoid its shadowing.
activate.h:133: warning: shadowed declaration is here
2018-12-01 01:06:57 +01:00
David Teigland
3ae5569570 Add dm-writecache support
dm-writecache is used like dm-cache with a standard LV
as the cache.

$ lvcreate -n main -L 128M -an foo /dev/loop0

$ lvcreate -n fast -L 32M -an foo /dev/pmem0

$ lvconvert --type writecache --cachepool fast foo/main

$ lvs -a foo -o+devices
  LV            VG  Attr       LSize   Origin        Devices
  [fast]        foo -wi-------  32.00m               /dev/pmem0(0)
  main          foo Cwi------- 128.00m [main_wcorig] main_wcorig(0)
  [main_wcorig] foo -wi------- 128.00m               /dev/loop0(0)

$ lvchange -ay foo/main

$ dmsetup table
foo-main_wcorig: 0 262144 linear 7:0 2048
foo-main: 0 262144 writecache p 253:4 253:3 4096 0
foo-fast: 0 65536 linear 259:0 2048

$ lvchange -an foo/main

$ lvconvert --splitcache foo/main

$ lvs -a foo -o+devices
  LV   VG  Attr       LSize   Devices
  fast foo -wi-------  32.00m /dev/pmem0(0)
  main foo -wi------- 128.00m /dev/loop0(0)
2018-11-06 14:18:41 -06:00
Zdenek Kabelac
6235861e64 cov: remove uneeded code
Since clvmd was dropped, this code became useless.
2018-11-03 16:09:36 +01:00
Zdenek Kabelac
fdd76da33d cov: drop uneeded header files 2018-10-15 17:49:44 +02:00
Zdenek Kabelac
acab591378 mirror: fix splitmirrors for mirror type
With the improved mirror activation code a --splitmirrors issue popped up,
since proper preload code and deactivation
for the split-off mirror leg were missing.
2018-08-07 17:58:30 +02:00
David Teigland
9adae653e9 mirrors: fix read_only_volume_list
If a mirror LV is listed in read_only_volume_list, it would
still be activated rw.  The activation would initially be
readonly, but the monitoring function would immediately
change it to rw.  This was a regression from commit

fade45b1d1 mirror: improve table update

The monitoring function needs to copy the read_only setting
into the new set of mirror activation options it uses.
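
For reference (VG/LV names hypothetical), the setting whose effect this restores:

  activation {
      # these LVs are activated read-only, including after mirror
      # monitoring reloads the table
      read_only_volume_list = [ "foo/mirrorlv" ]
  }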
2018-08-02 11:42:33 -05:00
Zdenek Kabelac
44c99a8822 vdo: data percentage
Display percentage of used virtual size of vdo-pool volume.
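
For illustration (VG name hypothetical):

$ lvs -o name,segtype,data_percent foo
  # data_percent of a vdo-pool reports the used share of its virtual size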
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
4f708e8709 dev_manager: add dev_manager_vdo_pool_status 2018-07-09 15:28:35 +02:00
Zdenek Kabelac
2e05f6018b activate: kvdo modprobe workaround
To support autoloading of the VDO dm target driver, loading of the 'kvdo'
kernel module is needed - ATM it's not using the 'dm-vdo' name.
So to support this strange name, add a temporary solution to
autoload the kvdo kernel module in this case.
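
In shell terms the workaround amounts to loading the module under the name
the VDO driver currently ships with:

$ modprobe kvdo    # rather than the 'dm-vdo' name autoloading would normally use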
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
77d5caae90 snapshot: improve checking of merging snapshot
Add runtime detection for 'lvs -o+seg_monitor' and 'vgchange --monitor'.
This fix should avoid unnecessary timeout on systemd shutdown.
2018-06-11 22:25:42 +02:00
Joe Thornber
d5da55ed85 device_mapper: remove dbg_malloc.
I wrote dbg_malloc before we had valgrind.  These days there's just
no need.
2018-06-08 13:40:53 +01:00
David Teigland
da30b4a786 Remove locking infrastructure from activation paths
Basic LV functions:

  activate_lv(), deactivate_lv(),
  suspend_lv(), resume_lv()

were routed through the locking infrastructure on the way to:

  lv_activate_with_filter(), lv_deactivate(),
  lv_suspend_if_active(), lv_resume_if_active()

This commit removes the locking infrastructure from the
middle and calls the latter functions directly from the former.

There were a couple of ancillary steps that the locking
infrastructure added along the way which are still included:

  - critical section inc/dec during suspend/resume
  - checking for active component LVs during activate

The "activation" file lock (serializing activation) has not
been kept because activation commands have been changed to
take the VG file lock exclusively which makes the activation
lock unused and unnecessary.
2018-06-07 16:17:04 +01:00
David Teigland
18259d5559 Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-07 16:17:04 +01:00