mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

922 Commits

Author SHA1 Message Date
Zdenek Kabelac
5c0264d689 vdo: restore monitoring of vdo pool
After the switch to the layered -vpool name, monitoring needs to target the proper device.
2019-09-30 13:34:34 +02:00
Zdenek Kabelac
c813db8fc2 vdo: deactivate forgotten vdo pool
If the linear mapping is lost (for whatever reason, e.g. the
test suite forcibly running 'dmsetup remove' on a linear LV),
lvm2 had a hard time figuring out how to deactivate such a DM table.

So add a function which, for a seemingly inactive VDO pool LV, checks
whether the pool is actually still active (the -vpool device is present)
and has open count == 0.  In this case deactivation is allowed
to continue and clean up the DM table.
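
A rough reproduction of the scenario being handled (device and LV names
are illustrative, not taken from this commit):

$ dmsetup remove vg-vdolv                               # forcibly drop the top-level mapping
$ dmsetup info -c --noheadings -o name,open vg-vpool0-vpool
  vg-vpool0-vpool  0                                    # -vpool device still present, open count 0
$ lvchange -an vg/vpool0                                # deactivation can now clean up the DM table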
2019-09-30 13:34:34 +02:00
David Teigland
5191057d9d drop cvol dm uuid suffix for cachevol LVs
The "-cvol" suffix on the uuid is interfering with
activation code, so drop the suffix for now.
2019-09-23 14:13:31 -05:00
David Teigland
515e37b6dd cachevol: add dm uuid suffixes to hidden lvs
to indicate they are private lvm devs
2019-09-20 09:59:37 -05:00
Zdenek Kabelac
6612d8dd5e vdo: enhance activation with layer -vpool
Enhance the 'activation' experience for a VDO pool to more closely match
what happens for thin-pools, where we do use a 'fake' LV to keep the pool
running even when no thinLVs are active. This gives the user a choice
whether to keep the pool running (without a possibly lengthy
activation/deactivation process).

As we do plan to support multiple VDO LVs mapped into a single VDO pool,
we want to give the user the same experience and 'use-pattern' as with thin-pools.

This patch gives the option to activate only the VDO pool, without
activating the VDO LV.

Also, thanks to the 'fake' layering LV, we can protect the VDO pool
from commands like 'mkfs' which require exclusive access to the volume,
since such access is no longer possible.

Note: the VDO pool contains 1024 initial sectors as an 'empty' header - this
header is also exposed in the layered LV (as a read-only LV).
For blkid we are identified as an LV with a UUID suffix - thus a private DM
device of lvm2 - so we do not need to store any extra info in this
header space (i.e. zeros are good enough).
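
A sketch of the resulting activation choice (VG/LV names and minor
numbers are illustrative):

$ lvcreate --type vdo -L 5G -V 10G -n vdolv vg/vpool0
$ lvchange -an vg
$ lvchange -ay vg/vpool0         # keep only the VDO pool running, vdolv stays inactive

$ dmsetup ls | grep vpool0
vg-vpool0-vpool (253:2)
vg-vpool0       (253:3)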
2019-09-17 13:17:19 +02:00
Zdenek Kabelac
66f69e766e thin: activate layer pool as read-only LV
When lvm2 activates a layered pool LV (basically to keep the pool open;
its other function used to be 'locking', staying in sync with the DM table),
use this LV in read-only mode - this prevents 'write' access into
the data volume content of the thin-pool.

Note: an EMPTY/unused thin-pool is created as a 'public LV' for generic
use by any user who e.g. wishes to maintain the thin-pool and thins himself.
At that point the thin-pool appears as a writable LV.  As soon as the 1st
thinLV is created, the layer volume appears as a 'read-only' LV from that moment on.
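
A sketch of how this can be observed (names illustrative, and assuming it
is the pool's public node that carries the read-only layer):

$ lvcreate -L 1G -T vg/pool      # empty pool: still a writable, public LV
$ lvcreate -V 1G -T vg/pool -n thin1

$ dmsetup info vg-pool | grep State
State:              ACTIVE (READ-ONLY)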
2019-09-17 13:16:50 +02:00
Zdenek Kabelac
693215716b devices: crypto skip
Devices with UUID signature CRYPT-SUBDEV are internal crypto devices.
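
For illustration (device names and UUIDs are made up), such devices are
recognizable by their DM UUID prefix and are now skipped during scanning:

$ dmsetup info -c -o name,uuid
Name         UUID
home_dif     CRYPT-SUBDEV-...-home_dif        # internal subdevice - skipped
home         CRYPT-LUKS2-...-home             # regular crypt device - scanned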
2019-09-17 13:15:22 +02:00
Zdenek Kabelac
b2885b7103 activation: use cmd pending mem for pending_delete
Since we need to preserve allocated strings across 2 separate
activation calls of '_tree_action()', we need to use a mem pool
other than dm->mem - but since cmd->mem is released between
individual lvm2 locking calls, we instead introduce a new separate
mem pool just for pending deletes, with an easy-to-see life-span.
(Not using 'libmem' as it would basically keep allocations over
the whole lifetime of clvmd.)

This patch fixes the previous commit, where the memory was
improperly used after pool release.
2019-08-27 15:54:42 +02:00
Zdenek Kabelac
7833c45fbe activation: extend handling of pending_delete
With the previous patch 30a98e4d67 we
started to put devices on a pending_delete list instead
of directly scheduling their removal.

However we have operations like 'snapshot merge' where we resume
the device tree in 2 subsequent activation calls - so the
1st such call will still have suspended devices and no chance
to push the 'remove' ioctl.

Since we currently cannot easily solve this by doing just a single
activation call (which would be the preferred solution) - we introduce
preservation of pending_delete via the command structure and
then restore it on the next activation call.

This way we still get to remove the devices later - although it might
not be the best moment - this may need further tuning.

Also we don't keep the list of operations in 1 transaction
(unless we do verify udev symlinks) - this could probably
also be made more correct in terms of which 'remove' can
be combined with an already running 'resume'.
2019-08-26 15:16:38 +02:00
Zdenek Kabelac
30a98e4d67 activation: add synchronization point
Resuming an 'error' table entry followed by its direct removal
is now troublesome with the latest udev, as it may skip processing of
udev rules for already 'dropped' device nodes.

As we cannot 'synchronize' with udev while we know we have devices
in the suspended state - rework 'cleanup' so it collects nodes
for removal into a pending_delete list and processes the list with
synchronization once we are without any suspended nodes.
2019-08-20 12:46:11 +02:00
Zdenek Kabelac
0451225c19 pvmove: correcting read_ahead setting
When pvmove is finished, we do a tricky operation since we try to
resume multiple different devices that were all joined into 1 big tree.

Currently we use the information from the existing live DM table,
where we can get the list of all holders of the pvmove device.
We look for these nodes (by uuid) in the new metadata, and we now do a full
regular device add into the dm tree structure.  All devices should
already be PRELOADed with the correct table before entering the suspend state,
however for correctly working readahead we need to put the correct info
also into the RESUME tree.  Since the tables are preloaded, the same table
is skipped on resume, but the correct read ahead is now set.
2019-08-20 12:37:32 +02:00
Zdenek Kabelac
4411fe2ba8 activation: synchronize before removing devices
Udev is running udev-rule action upon 'resume'.

However in a special case lvm2 replaces a
'soon-to-be-removed' device with an 'error' target for resuming,
with the actual removal following - the sequence is usually quick,
so when udev starts the action it can result in a 'strange' error
message in the kernel log like:

Process '/usr/sbin/dmsetup info -j 253 -m 17 -c --nameprefixes --noheadings --rows -o name,uuid,suspended' failed with exit code 1.

To avoid this - we need to ensure there is a synchronization wait for udev
between the 'resume' and 'remove' parts of this process.

However the existing code put a strict requirement on avoiding synchronization
with udev inside a critical section - but this originally came from the
requirement to not do anything special while there could be devices in the
suspended state. Now we are able to see the difference between a critical section
with or without suspended devices.  For udev synchronization only
suspended devices are prohibited from being there - so slightly relax the
condition and allow calling and using 'fs_sync()' even inside a critical
section - but there must not be any suspended device.
2019-03-20 14:39:09 +01:00
Zdenek Kabelac
1eeb2fa3f6 dev_manager: add dev_manager_remove_dm_major_minor
Move DM usage into the dev_manager.c source file.
Also convert the STATUS ioctl to INFO - as that's enough
to obtain the UUID - this also avoids issuing an unwanted flush
on the checked DM device when testing whether it is mpath.
2019-03-20 14:37:10 +01:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
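
For example (LV names illustrative), the two approaches are now selected
by distinct options:

$ lvconvert --type cache --cachepool fastpool vg/main    # cache pool approach
$ lvconvert --type cache --cachevol fastlv vg/main       # cache vol approach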
2019-02-27 08:52:34 -06:00
Zdenek Kabelac
e689bfb5d5 vdo: minor API cleanup
Since parse_vdo_pool_status() became part of the vdo_manip API,
and there will be no matching 'dm' status parser,
the API can be simplified to closely match the thin API here.
2019-01-21 12:53:16 +01:00
Zdenek Kabelac
3d367f3348 vdo: add simple wrapper for getting pool percentage
Just like with e.g. thins, provide a simple function for
getting the percentage of VDO Pool usage (it uses the existing
status function).
2019-01-21 12:53:16 +01:00
Heinz Mauelshagen
dd5716ddf2 raid: fix (de)activation of RaidLVs with visible SubLVs
There's a small window during creation of a new RaidLV when
rmeta SubLVs are made visible to wipe them in order to prevent
erroneous discovery of stale RAID metadata.  In case a crash
prevents the SubLVs from being committed hidden after such
wiping, the RaidLV can still be activated with the SubLVs visible.
During deactivation though, a deadlock occurs because the visible
SubLVs are deactivated before the RaidLV.

The patch adds _check_raid_sublvs to the raid validation in merge.c,
an activation check to activate.c (paranoid, because the merge.c check
will prevent activation in case of visible SubLVs) and shares the
existing wiping function _clear_lvs in raid_manip.c moved to lv_manip.c
and renamed to activate_and_wipe_lvlist to remove code duplication.
Whilst on it, introduce activate_and_wipe_lv to share with
(lvconvert|lvchange).c.

Resolves: rhbz1633167
2018-12-11 16:35:34 +01:00
Zdenek Kabelac
0d61a17152 gcc: avoid shadowing activate_lv
Function activate_lv() is already declared, avoid its shadowing.
activate.h:133: warning: shadowed declaration is here
2018-12-01 01:06:57 +01:00
Zdenek Kabelac
c1703845c3 activation: trimming string is expected
Commit 813347cf84 added extra validation,
however in this particular case we do want to trim the suffix out,
so rather ignore the resulting error code here intentionally.
2018-11-08 12:20:57 +01:00
David Teigland
3ae5569570 Add dm-writecache support
dm-writecache is used like dm-cache with a standard LV
as the cache.

$ lvcreate -n main -L 128M -an foo /dev/loop0

$ lvcreate -n fast -L 32M -an foo /dev/pmem0

$ lvconvert --type writecache --cachepool fast foo/main

$ lvs -a foo -o+devices
  LV            VG  Attr       LSize   Origin        Devices
  [fast]        foo -wi-------  32.00m               /dev/pmem0(0)
  main          foo Cwi------- 128.00m [main_wcorig] main_wcorig(0)
  [main_wcorig] foo -wi------- 128.00m               /dev/loop0(0)

$ lvchange -ay foo/main

$ dmsetup table
foo-main_wcorig: 0 262144 linear 7:0 2048
foo-main: 0 262144 writecache p 253:4 253:3 4096 0
foo-fast: 0 65536 linear 259:0 2048

$ lvchange -an foo/main

$ lvconvert --splitcache foo/main

$ lvs -a foo -o+devices
  LV   VG  Attr       LSize   Devices
  fast foo -wi-------  32.00m /dev/pmem0(0)
  main foo -wi------- 128.00m /dev/loop0(0)
2018-11-06 14:18:41 -06:00
David Teigland
cac4a9743a Allow dm-cache cache device to be standard LV
If a single, standard LV is specified as the cache, use
it directly instead of converting it into a cache-pool
object with two separate LVs (for data and metadata).

With a single LV as the cache, lvm will use blocks at the
beginning for metadata, and the rest for data.  Separate
dm linear devices are set up to point at the metadata and
data areas of the LV.  These dm devs are given to the
dm-cache target to use.

The single LV cache cannot be resized without recreating it.

If the --poolmetadata option is used to specify an LV for
metadata, then a cache pool will be created (with separate
LVs for data and metadata.)

Usage:

$ lvcreate -n main -L 128M vg /dev/loop0

$ lvcreate -n fast -L 64M vg /dev/loop1

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  main vg -wi-a----- 128.00m linear /dev/loop0(0)
  fast vg -wi-a-----  64.00m linear /dev/loop1(0)

$ lvconvert --type cache --cachepool fast vg/main

$ lvs -a vg
  LV           VG Attr       LSize   Origin       Pool  Type   Devices
  [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
  main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
  [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)

$ lvchange -ay vg/main

$ dmsetup ls
vg-fast_cdata   (253:4)
vg-fast_cmeta   (253:5)
vg-main_corig   (253:6)
vg-main (253:24)
vg-fast (253:3)

$ dmsetup table
vg-fast_cdata: 0 98304 linear 253:3 32768
vg-fast_cmeta: 0 32768 linear 253:3 0
vg-main_corig: 0 262144 linear 7:0 2048
vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
vg-fast: 0 131072 linear 7:1 2048

$ lvchange -an vg/main

$ lvconvert --splitcache vg/main

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  fast vg -wi-------  64.00m linear /dev/loop1(0)
  main vg -wi------- 128.00m linear /dev/loop0(0)
2018-11-06 13:44:54 -06:00
Zdenek Kabelac
aa8b2d6a0f cleanup: move cast to dev_t into MKDEV macro 2018-11-05 17:25:11 +01:00
Zdenek Kabelac
813347cf84 cov: add missing check for dm_strncpy 2018-11-03 16:10:32 +01:00
Zdenek Kabelac
6235861e64 cov: remove unneeded code
Since clvmd was dropped this code became useless.
2018-11-03 16:09:36 +01:00
Zdenek Kabelac
fdd76da33d cov: drop unneeded header files 2018-10-15 17:49:44 +02:00
David Teigland
bfcecbbce1 filter: add config setting to skip scanning LVs
devices/scan_lvs (default 1) determines whether lvm
will scan LVs for layered PVs.  The lvm behavior has
always been to scan LVs, but it's rare for LVs to have
layered PVs, and much more common for there to be many
LVs that substantially slow down scanning with no benefit.

This is implemented in the usable filter, and has the
same effect as listing all LVs in the global_filter.
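
A sketch of inspecting and changing the setting:

$ lvmconfig --type default devices/scan_lvs
scan_lvs=1

# in lvm.conf, to skip scanning LVs for layered PVs:
devices {
	scan_lvs = 0
}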
2018-08-30 09:59:50 -05:00
Zdenek Kabelac
acab591378 mirror: fix splitmirrors for mirror type
With the improved mirror activation code a --splitmirrors issue popped up,
since proper preload code and deactivation were missing
for the split-off mirror leg.
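
The operation in question, roughly (names illustrative):

$ lvcreate --type mirror -m 1 -L 1G -n mlv vg
$ lvconvert --splitmirrors 1 --name msplit vg/mlv    # split-off leg now gets proper preload/deactivation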
2018-08-07 17:58:30 +02:00
David Teigland
9adae653e9 mirrors: fix read_only_volume_list
If a mirror LV is listed in read_only_volume_list, it would
still be activated rw.  The activation would initially be
readonly, but the monitoring function would immediately
change it to rw.  This was a regression from commit

fade45b1d1 mirror: improve table update

The monitoring function needs to copy the read_only setting
into the new set of mirror activation options it uses.
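
For reference, the setting involved (VG/LV name illustrative) - an LV
listed here must now stay read-only even after monitoring starts:

activation {
	read_only_volume_list = [ "vg/mlv" ]
}

$ lvchange -ay vg/mlv            # remains read-only; monitoring no longer flips it to rw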
2018-08-02 11:42:33 -05:00
Zdenek Kabelac
44c99a8822 vdo: data percentage
Display percentage of used virtual size of vdo-pool volume.
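
For example (pool name and numbers illustrative):

$ lvs -o lv_name,size,data_percent vg
  LV     LSize Data%
  vpool0 5.00g 12.40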
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
4f708e8709 dev_manager: add dev_manager_vdo_pool_status 2018-07-09 15:28:35 +02:00
Zdenek Kabelac
0dafd159a8 vdo_manip: parsing status of VDO device 2018-07-09 15:28:35 +02:00
Zdenek Kabelac
a8f84f7801 vdo: introduce segment types and manip functions
Core functionality introducing lvm VDO support.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
2e05f6018b activate: kvdo modprobe workaround
To support autoloading of the VDO dm target driver, loading of the 'kvdo'
kernel module is needed - ATM it's not using the 'dm-vdo' name.
So to support this strange name - add a temporary solution to
autoload the kvdo kernel module in this case.
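
For context, the name mismatch (target version and error text illustrative):

$ modprobe dm-vdo
modprobe: FATAL: Module dm-vdo not found ...
$ modprobe kvdo
$ dmsetup targets | grep vdo
vdo              v6.2.0.219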
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
77d5caae90 snapshot: improve checking of merging snapshot
Add runtime detection for 'lvs -o+seg_monitor' and 'vgchange --monitor'.
This fix should avoid unnecessary timeout on systemd shutdown.
2018-06-11 22:25:42 +02:00
Joe Thornber
7c4b19c335 Merge branch '2018-06-04-data-structs' 2018-06-08 14:21:07 +01:00
Joe Thornber
d5da55ed85 device_mapper: remove dbg_malloc.
I wrote dbg_malloc before we had valgrind.  These days there's just
no need.
2018-06-08 13:40:53 +01:00
Zdenek Kabelac
0c62ae3f89 pvmove: improve lvs
When pvmoving an LV - the target for the LV is a mirror, so the validation
that checked for a matching type was incorrect.

While we need a more generic enhancement of lvs output for pvmoved LVs,
for now at least stop showing internal errors and 'X' symbols in attrs.
2018-06-08 14:35:42 +02:00
David Teigland
da30b4a786 Remove locking infrastructure from activation paths
Basic LV functions:

  activate_lv(), deactivate_lv(),
  suspend_lv(), resume_lv()

were routed through the locking infrastructure on the way to:

  lv_activate_with_filter(), lv_deactivate(),
  lv_suspend_if_active(), lv_resume_if_active()

This commit removes the locking infrastructure from the
middle and calls the latter functions directly from the former.

There were a couple of ancillary steps that the locking
infrastructure added along the way which are still included:

  - critical section inc/dec during suspend/resume
  - checking for active component LVs during activate

The "activation" file lock (serializing activation) has not
been kept because activation commands have been changed to
take the VG file lock exclusively which makes the activation
lock unused and unnecessary.
2018-06-07 16:17:04 +01:00
David Teigland
18259d5559 Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-07 16:17:04 +01:00
David Teigland
3e781ea446 Remove clvmd and associated code
More code reduction and simplification can follow.
2018-06-05 11:09:13 -05:00
Zdenek Kabelac
1140d70893 build: fixes 2018-06-04 12:28:13 +02:00
Joe Thornber
7f97c7ea9a build: Don't generate symlinks in include/ dir
As we start refactoring the code to break dependencies (see doc/refactoring.txt),
I want us to use full paths in the includes (eg, #include "base/data-struct/list.h").
This makes it more obvious when we're breaking abstraction boundaries, eg, including a file in
metadata/ from base/
2018-05-14 10:30:20 +01:00
David Teigland
a5e13f2eef clvmd: defer freeing saved vgs
To avoid the chance of freeing a saved vg while another
code path is using it, defer freeing saved vgs until
all the lvmcache content is dropped for the vg.
2018-05-03 14:54:48 -05:00
David Teigland
c1cd18f21e Remove lvm1 and pool disk formats
There are likely more bits of code that can be removed,
e.g. lvm1/pool-specific bits of code that were identified
using FMT flags.

The vgconvert command can likely be reduced further.

The lvm1-specific config settings should probably have
some other fields set for proper deprecation.
2018-04-30 16:55:02 -05:00
Zdenek Kabelac
fade45b1d1 mirror: improve table update
Shift the refresh of the mirror table right into monitor_dev_for_events().
Use !vg_write_lock_held() to recognize use of lvchange/vgchange.
(This shall change if it stops working, but that requires
some further API changes.)

With this patch the dm mirror table is only refreshed when necessary.

Also update the WARNING message about mirror usage without monitoring,
and display the LV name.
2018-04-30 10:41:51 +02:00
David Teigland
5b6e62dc1f clvmd: drop old saved_vg when returning new saved_vg
In some pvmove tests, clvmd uses the new (precommitted)
saved_vg, but then requests the old saved_vg, and
expects that the new saved_vg be returned instead of
the old.  So, when returning the new saved_vg, forget
the old one so we don't return it again.
2018-04-26 14:57:45 -05:00
David Teigland
4670e9f698 skip some clvmd-specific code in common cases
This, or something like it, can probably be done
in many other places.
2018-04-25 16:40:08 -05:00
David Teigland
1fec86571f clvmd: reuse a vg struct for sequential LV operations
After reading a VG, stash it in lvmcache as "saved_vg".
Before reading the VG again, try to use the saved_vg.
The saved_vg is dropped on VG lock operations.
2018-04-25 16:39:43 -05:00
David Teigland
f8616ac2d8 lvmcache: rename suspended_vg to saved_vg
The copy of the VG which clvmd stashes in lvmcache should
not only be used between suspend and resume, but between
sequential LV operations in clvmd, so that clvmd does not
need to reread the VG for each one.  Prepare for that by
renaming the stashed VG as "saved_vg".
2018-04-25 16:39:43 -05:00
David Teigland
d9a77e8bb4 lvmcache: simplify metadata cache
The copy of VG metadata stored in lvmcache was not being used
in general.  It pretended to be a generic VG metadata cache,
but was not being used except for clvmd activation.  There
it was used to avoid reading from disk while devices were
suspended, i.e. in resume.

This removes the code that attempted to make this look
like a generic metadata cache, and replaces it with
something narrowly targeted at what it's actually used for.

This is a way of passing the VG from suspend to resume in
clvmd.  Since in the case of clvmd one caller can't simply
pass the same VG to both suspend and resume, suspend needs
to stash the VG somewhere that resume can grab it from.
(resume doesn't want to read it from disk since devices
are suspended.)  The lvmcache vginfo struct is used as a
convenient place to stash the VG to pass it from suspend
to resume, even though it isn't related to the lvmcache
or vginfo.  These suspended_vg* vginfo fields should
not be used or touched anywhere else, they are only to
be used for passing the VG data from suspend to resume
in clvmd.  The VG data being passed between suspend and
resume is never modified, and will only exist in the
brief period between suspend and resume in clvmd.

suspend has both old (current) and new (precommitted)
copies of the VG metadata.  It stashes both of these in
the vginfo prior to suspending devices.  When vg_commit
is successful, it sets a flag in vginfo as before,
signaling the transition from old to new metadata.

resume grabs the VG stashed by suspend.  If the vg_commit
happened, it grabs the new VG, and if the vg_commit didn't
happen it grabs the old VG.  The VG is then used to resume
LVs.

This isolates clvmd-specific code and usage from the
normal lvm vg_read code, making the code simpler and
the behavior easier to verify.

Sequence of operations:

- lv_suspend() has both vg_old and vg_new
  and stashes a copy of each onto the vginfo:
  lvmcache_save_suspended_vg(vg_old);
  lvmcache_save_suspended_vg(vg_new);

- vg_commit() happens, which causes all clvmd
  instances to call lvmcache_commit_metadata(vg).
  A flag is set in the vginfo indicating the
  transition from the old to new VG:
  vginfo->suspended_vg_committed = 1;

- lv_resume() needs either vg_old or vg_new
  to use in resuming LVs.  It doesn't want to
  read the VG from disk since devices are
  suspended, so it gets the VG stashed by
  lv_suspend:
  vg = lvmcache_get_suspended_vg(vgid);

If the vg_commit did not happen, suspended_vg_committed
will not be set, and in this case, lvmcache_get_suspended_vg()
will return the old VG instead of the new VG, and it will
resume LVs based on the old metadata.
2018-04-20 11:22:45 -05:00