Commit Graph

1839 Commits

Author SHA1 Message Date
Alasdair G Kergon
a432066c7c mirror: Explicit cast in region_size_max 2015-02-26 19:49:25 +00:00
Alasdair G Kergon
cb727a1ccc mirror: Avoid region size compiler warning.
format ‘%u’ expects type ‘unsigned int’, but argument 7 has type ‘uint64_t’
2015-02-26 19:45:55 +00:00
David Teigland
dd6a202831 lvchange: deactivate is always possible in foreign vgs
The only realistic way for a host to have active LVs in a
foreign VG is if the host's system_id (or system_id_source)
is changed while LVs are active.

In this case, the active LVs produce a warning, and access
to the VG is implicitly allowed (without requiring --foreign).
This allows the active LVs to be deactivated.

In this case, rescanning PVs for the VG offers no benefit.
It is not possible that rescanning would reveal an LV that
is active but wasn't previously in the VG metadata.
2015-02-25 14:58:49 -06:00
Jonathan Brassow
dd0ee35378 cmirror: Adjust region size to work around CPG msg limit to avoid hang.
cmirror uses the CPG library to pass messages around the cluster and maintain
its bitmaps.  When a cluster mirror starts up, it must send the current state
to any joining members - a checkpoint.  When mirrors are large (or the region
size is small), the bitmap size can exceed the message limit of the CPG
library.  When this happens, the CPG library returns CPG_ERR_TRY_AGAIN.
(This is also a bug in CPG, since the message will never be successfully sent.)

There is an outstanding bug (bug 682771) that is meant to lift this message
length restriction in CPG, but for now we work around the issue by increasing
the mirror region size.  This limits the size of the bitmap and avoids any
issues we would otherwise have around checkpointing.

Since this issue only affects cluster mirrors, the region size adjustments
are only made on cluster mirrors.  This patch handles cluster mirror issues
involving pvmove, lvconvert (from linear to mirror), and lvcreate.  It also
ensures that when users convert a VG from single-machine to clustered, any
mirrors with too many regions (i.e. a bitmap that would be too large to
properly checkpoint) are trapped.
2015-02-25 14:42:15 -06:00
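A minimal sketch of the region-size adjustment arithmetic described
above, assuming an illustrative CPG payload limit and sizes in 512-byte
sectors (the constant and names below are not the lvm2 code):

#include <stdint.h>
#include <stdio.h>

#define CPG_MAX_BITMAP_BYTES (32 * 1024) /* assumed message payload limit */

/* One bit per region, so 8 regions per bitmap byte; double the region
 * size (keeping it a power of 2) until the bitmap fits the limit. */
uint32_t adjusted_region_size(uint64_t lv_size_sectors,
                              uint32_t region_size_sectors)
{
        while ((lv_size_sectors / region_size_sectors) / 8 >
               CPG_MAX_BITMAP_BYTES)
                region_size_sectors *= 2;

        return region_size_sectors;
}

int main(void)
{
        /* 16 TiB mirror with an initial 512 KiB region size */
        uint64_t lv_size = (uint64_t) 16 * 1024 * 1024 * 1024 * 2;

        printf("adjusted region size: %u sectors\n",
               adjusted_region_size(lv_size, 1024));
        return 0;
}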
David Teigland
8668a9e81c systemid: silently ignore foreign vgs unless named
A foreign VG should be silently ignored by a reporting/display
command like 'vgs'.  If the reporting/display command specifies
a foreign VG by name on the command line, it should produce an
error message.

Scanning commands pvscan/vgscan/lvscan are always allowed to
read and update caches from all PVs, including those that belong
to foreign VGs.

Other non-report/display/scan commands always ignore a foreign
VG, or report an error if they attempt to use a foreign VG.

vgimport should always invalidate the lvmetad cache because
lvmetad likely holds a pre-vgexported copy of the VG.
(This is unrelated to using foreign VGs; the pre-vgexported
VG may have had no system_id at all.)
2015-02-25 10:53:52 -06:00
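A sketch of the access policy laid out above; the enum and parameters
are illustrative, not the lvm2 internals:

#include <stdbool.h>
#include <string.h>

enum vg_access { VG_ALLOW, VG_SKIP_SILENTLY, VG_ERROR };

enum vg_access check_foreign_vg(const char *vg_system_id,
                                const char *our_system_id,
                                bool named_on_cmdline,
                                bool report_or_display,
                                bool scan_command)
{
        /* Not foreign: no system_id set, or it matches ours. */
        if (!vg_system_id || !*vg_system_id ||
            !strcmp(vg_system_id, our_system_id))
                return VG_ALLOW;

        /* pvscan/vgscan/lvscan may always read foreign VGs for caching. */
        if (scan_command)
                return VG_ALLOW;

        /* 'vgs' etc. skip silently unless the VG was named explicitly. */
        if (report_or_display)
                return named_on_cmdline ? VG_ERROR : VG_SKIP_SILENTLY;

        return VG_ERROR;
}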
Petr Rockai
7d615a3fe5 cache: Fix a segfault when passing --cachepolicy without --cachesettings. 2015-02-24 11:39:35 +01:00
Alasdair G Kergon
b18feb98e5 systemid: Fix access restrictions.
When checking whether the system ID permits access to a VG, check for
each permitted situation first, and only then issue the appropriate
error message.  Always issue a message for now.  (We'll try to
suppress some of those later when the VG concerned wasn't explicitly
requested.)
Add more messages to try to ensure every return code is checked and
every error path (and only an error path) contains a log_error().
Add self-correction to vgchange -c to deal with situations where
the cluster state and system ID state are out-of-sync (e.g. if
old tools were used).
2015-02-23 23:19:36 +00:00
Alasdair G Kergon
df227be37c lvm1: Reenable sys ID.
Move the lvm1 sys ID into vg->lvm1_system_id and reenable the #if 0
LVM1 code.  Still display the new-style system ID in the same
reporting field, though, as only one can be set.
Add a format feature flag FMT_SYSTEM_ON_PVS for LVM1 and disallow
access to LVM1 VGs if a new-style system ID has been set.
Treat the new vg->system_id as const.
2015-02-23 23:03:52 +00:00
Alasdair G Kergon
2fc2928978 config: Rename allow_system_id to extra_system_ids.
Add warnings to the config file templates and briefly document
each value.
Configure lvmlocal.conf and install in /etc/lvm.
2015-02-23 22:19:08 +00:00
Zdenek Kabelac
a18d789684 cleanup: simplify error path code
A mempool needs to be freed only from the first allocated element;
everything allocated afterwards is released as well.
2015-02-19 14:44:04 +01:00
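The behaviour being relied on, shown with the libdevmapper pool calls
(a standalone sketch, not the cleaned-up lvm2 error path itself):

#include <libdevmapper.h>

int example(void)
{
        struct dm_pool *mem;
        char *first;

        if (!(mem = dm_pool_create("example", 1024)))
                return 0;

        first = dm_pool_alloc(mem, 64);
        (void) dm_pool_alloc(mem, 64);
        (void) dm_pool_alloc(mem, 64);

        /* Frees 'first' and both later allocations in one call. */
        if (first)
                dm_pool_free(mem, first);

        dm_pool_destroy(mem);
        return 1;
}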
Zdenek Kabelac
4c184e9d6b cleanup: drop unused value assign
Drop unused value assignments.

Unknown is detected via other combination
(!linear && !striped).

Also change the log_error() message into a warning,
since the function is not really returning an error,
but still keep the INTERNAL_ERROR.

The return value is always set later.
2015-02-19 14:43:25 +01:00
Peter Rajnoha
ed420fb691 pvcreate: switch to "none" dev-ext source during pvcreate
The dev ext source must be reset for the dev_cache_get call
(which evaluates filters), not lvmcache_label_scan - so fix
original commit 727c7ff85d.

Also, add comments in the _pvcreate_check fn explaining why
the filter refresh and rescan are needed and exactly in which
situations.
2015-02-19 14:34:55 +01:00
Peter Rajnoha
6b4066585f filters: no need to refresh filters/rescan if no signature is wiped during pvcreate at all
Before, we refreshed filters and did a full rescan of devices if
we passed through wiping (the wipe_known_signatures fn call). However,
this fn returns success even if no signatures were found and so
nothing was wiped. In this case, it's not necessary to do the
filter refresh/rescan of devices as clearly nothing changed.

This patch exports the number of wiped signatures from all the
underlying wiping functions. The caller (_pvcreate_check) then checks
whether any wiping was done at all and, if not, skips the
refresh/rescan, saving some time and resources.
2015-02-17 09:46:34 +01:00
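A minimal sketch of that calling pattern; the helpers below are
illustrative stand-ins, not the exact lvm2 interfaces:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in: report how many signatures were wiped via an out-parameter. */
static bool wipe_known_signatures_counted(const char *dev, uint32_t *wiped)
{
        (void) dev;
        *wiped = 0; /* pretend the device was already clean */
        return true;
}

static void refresh_filters_and_rescan(void)
{
        puts("refreshing filters and rescanning devices");
}

static bool pvcreate_check_sketch(const char *dev)
{
        uint32_t wiped;

        if (!wipe_known_signatures_counted(dev, &wiped))
                return false;

        /* Only pay for the refresh/rescan when something actually changed. */
        if (wiped)
                refresh_filters_and_rescan();

        return true;
}

int main(void)
{
        return pvcreate_check_sketch("/dev/sda") ? 0 : 1;
}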
Peter Rajnoha
727c7ff85d pvcreate: switch to "none" dev-ext source during pvcreate
pvcreate code path executes signature wiping if there are any signatures
found on device to prepare the device for PV. When the signature is wiped,
the WATCH udev rule triggers the event which then updates udev database
with fresh info, clearing the old record about previous signature.

However, when we're using the udev db as the dev-ext source, we'd need
to wait for this WATCH-triggered event. But we can't synchronize against
such events (at least not at this moment). Without this sync, if the code
continues, the device could still be marked as containing the old
signature when reading the udev db. This may even end up with the device
still being filtered out, though the signature is already wiped.

This problem is then exposed as (an example with md components):

$  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb --run
$  mdadm -S /dev/md0
$  pvcreate -y /dev/sda
Wiping linux_raid_member signature on /dev/sda.
/dev/sda: Couldn't find device.  Check your filters?
$ echo $?
5

So we need to temporarily switch off the "udev" dev-ext source here
in this part of the pvcreate code until we find a way to sync
with WATCH events.

(This problem does not occur with the signature wiping which we do
on newly created LVs since we already handle this properly with
our udev flags - the LV_NOSCAN/LV_TEMPORARY flags. But we can't use
this technique to keep the WATCH rule under control for non-dm devices.)
2015-02-16 15:07:00 +01:00
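The shape of that workaround, sketched with a hypothetical stand-in for
lvm2's external device info source setting: remember the source, force
"none" around the wiping, restore it afterwards.

#include <stdio.h>

/* Hypothetical stand-in for the configured external device info source. */
static const char *dev_ext_source = "udev";

static void wipe_signatures_and_refilter(const char *dev)
{
        printf("wiping %s with dev-ext source \"%s\"\n",
               dev, dev_ext_source);
}

int main(void)
{
        const char *saved = dev_ext_source;

        /* The udev db may still describe the pre-wipe signature, so do
         * not consult it while wiping and re-evaluating filters. */
        dev_ext_source = "none";
        wipe_signatures_and_refilter("/dev/sda");
        dev_ext_source = saved; /* restore once wiping has finished */

        return 0;
}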
David Teigland
8cdec4c434 system_id: use for VG ownership
See included lvmsystemid(7) for full description.
2015-02-13 10:10:27 -06:00
Zdenek Kabelac
434031719e raid: check lock holding LV
Since raid could be used as a stacked LV, check the lock-holding LV
for the proper locking type for clustered usage.
2015-01-30 14:16:27 +01:00
Zdenek Kabelac
2055b04c11 cleanup: indent tabs 2015-01-30 12:33:52 +01:00
Zdenek Kabelac
2e35c68122 lv_manip: add for_each_sub_lv_except_pools()
for_each_sub_lv() now also scans pools in depth; however, for
rename we actually do want to skip pools.

So add a new for_each_sub_lv_except_pools() to be used by rename;
every other user of for_each_sub_lv() scans every sub LV, pools
included.

This is necessary e.g. for a properly working preload of pools
that are using raid arrays.
2015-01-30 12:33:52 +01:00
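A toy sketch of the iterator split described above, modelled on the
callback style of for_each_sub_lv() (the struct and signatures are
simplified assumptions):

#include <stdbool.h>
#include <stddef.h>

struct lv {                 /* toy stand-in for struct logical_volume */
        const char *name;
        bool is_pool;
        struct lv *sub[8];  /* NULL-terminated list of sub LVs */
};

typedef int (*lv_fn) (struct lv *lv, void *data);

/* Depth-first walk over sub LVs; optionally do not descend into pools. */
static int _walk_sub_lvs(struct lv *lv, lv_fn fn, void *data, bool skip_pools)
{
        for (size_t i = 0; i < 8 && lv->sub[i]; i++) {
                if (skip_pools && lv->sub[i]->is_pool)
                        continue;
                if (!fn(lv->sub[i], data) ||
                    !_walk_sub_lvs(lv->sub[i], fn, data, skip_pools))
                        return 0;
        }
        return 1;
}

int for_each_sub_lv_sketch(struct lv *lv, lv_fn fn, void *data)
{
        return _walk_sub_lvs(lv, fn, data, false);
}

/* The variant used by rename: the same walk, but pools are skipped. */
int for_each_sub_lv_except_pools_sketch(struct lv *lv, lv_fn fn, void *data)
{
        return _walk_sub_lvs(lv, fn, data, true);
}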
Peter Rajnoha
531cc58d89 lvm2app: fix lvm_lv_get_attr regression causing unknown values
This is a regression from v115 where some of the fields/properties
were converted to using the common "struct lvinfo" and
"struct lv_seg_status" so we don't need to issue info and status
ioctls several times per reported line. Not all fields are
converted yet, but one that *is* converted is the lv_attr field,
with the lv_attr_dup counterpart used in the lvm_lv_get_attr lvm2app fn.

These changes were introduced with e34b004422
and later - this patch introduced the "info_ok" field in the
lv_with_info_and_seg_status structure which encapsulates the lvinfo
and lv_seg_status struct.

The lv_attr_dup code missed the assignment of the
"info_ok" flag, which saves the result of the
lv_info_with_seg_status call. Hence the info was marked
as unusable (unknown), and it was returned as such via the
lvm_lv_get_attr lvm2app fn.
2015-01-30 09:53:34 +01:00
Zdenek Kabelac
553f37da71 raid: lock holder will skip visible raid LVs
RAID marks split legs as VISIBLE with the notion that they are no
longer true raid legs - so skip the tree scanning and take such an LV
as a top-level LV.
2015-01-28 13:45:27 +01:00
Zdenek Kabelac
93b9015760 raid: fix raid image splitting
When a raid leg is extracted, the preload code now handles this state
correctly and puts a proper new table entry into the dm tree,
so the activation of the extracted leg and the removed metadata works
after commit.
2015-01-28 13:45:18 +01:00
Peter Rajnoha
0fddc5ab5c coverity: missing return value check
Reported by coverity for recently added code - _avoid_pvs_with_other_images_of_lv
calls process_each_sub_lv without checking its return value.
2015-01-22 10:11:19 +01:00
Peter Rajnoha
338d98be97 cleanup: for commit 7bcb3fb02d 2015-01-21 11:29:12 +01:00
Peter Rajnoha
7bcb3fb02d report: rename lv_error_when_full field to lv_when_full and display either "error", "queue" or ""
Rename the original lv_error_when_full field to lv_when_full and also
convert it from a binary field to a string field displaying three
possible values: "error", "queue" or "" (blank for undefined).

$ lvs vg/pool vg/pool1 vg/linear_lv -o+lv_when_full
  LV        VG   Attr       LSize Data%  Meta%  WhenFull
  linear_lv vg   -wi-a----- 4.00m
  pool      vg   twi-aotz-- 4.00m 0.00   0.98   queue
  pool1     vg   twi-a-tz-- 4.00m 0.00   0.88   error

For -S|--select these synonyms are recognized:

"error" -> "error when full", "error if no space"
"queue" -> "queue when full", "queue if no space"
   ""   -> "undefined"
2015-01-21 10:50:32 +01:00
Zdenek Kabelac
87e80b6aac report: proper lv_attr_dup emulation
We need to create a mempool for proper emulation of lv_attr_dup
for lvm2app.
2015-01-20 16:24:45 +01:00
Zdenek Kabelac
d80d832ae9 report: seg_monitor undefined
Add an 'undefined' value for segments which do not support monitoring.
Fixes crash for commands like 'pvs -o+seg_monitor'.
2015-01-20 15:02:10 +01:00
Zdenek Kabelac
b3a348c03c report: use same info also for lv_attr
Recently the single 'status' code has been used for a number of cache
features.

Extend the API a little bit to allow usage also for lv_attr_dup.

As the function itself is used in lvm2app, add a new function,
lv_attr_dup_with_info_and_seg_status(), that is able to use the
grabbed info & status information.

report_init() now uses a directly passed lvdm struct pointer,
which holds the information whether lv_info() was correctly obtained
or whether there was some error when trying to read it.

Move the 'health' attribute to status.
TODO: convert the raid function to use the already-known status.
2015-01-20 14:58:41 +01:00
Zdenek Kabelac
07eb1c7dc8 cleanup: add lv_is_error_when_full() macro
Like with other status bits use macro for testing.
(in-release update)
2015-01-20 14:52:06 +01:00
Heinz Mauelshagen
302b6c99a7 raid_manip: v2 fix multi-segment misallocation on 'lvconvert --repair'
The previous patch fell short with respect to persistently disabling
allocation on PVs holding other legs of the RAID LV; this patch
introduces an internal, transient PV flag PV_ALLOCATION_PROHIBITED to
address this very problem.

General problem description for completeness:

An 'lvconvert --repair $RAID_LV' to replace a failed leg of a multi-segment
RAID10/4/5/6 logical volume can lead to allocation of (parts of) the replacement
image component pair on the physical volume of another image component
(e.g. image 0 allocated on the same PV as image 1 silently impeding resilience).

The patch fixes this severe resilience issue by prohibiting allocation on PVs
already holding other legs of the RAID set. It allows allocation of free space
on any operational PV already holding parts of the image component pair.
2015-01-16 13:44:16 +01:00
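A sketch of the transient-flag idea with toy types
(PV_ALLOCATION_PROHIBITED is the real flag name from this commit;
everything else below is illustrative):

#include <stdbool.h>
#include <stddef.h>

#define PV_ALLOCATION_PROHIBITED 0x1U /* transient, never written to disk */

struct pv {                 /* toy stand-in for struct physical_volume */
        unsigned status;
        bool holds_other_leg;
};

/* Before allocating the replacement image, mark every PV that already
 * holds another leg of the RAID LV as off-limits for the allocator. */
void avoid_pvs_with_other_images(struct pv *pvs, size_t n)
{
        for (size_t i = 0; i < n; i++)
                if (pvs[i].holds_other_leg)
                        pvs[i].status |= PV_ALLOCATION_PROHIBITED;
}

bool pv_usable_for_allocation(const struct pv *pv)
{
        return !(pv->status & PV_ALLOCATION_PROHIBITED);
}

/* Cleared again once allocation finishes, so the flag never persists. */
void clear_allocation_prohibited(struct pv *pvs, size_t n)
{
        for (size_t i = 0; i < n; i++)
                pvs[i].status &= ~PV_ALLOCATION_PROHIBITED;
}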
Zdenek Kabelac
2908ab3eed thin: errorwhenfull support
Support the error_if_no_space feature for thin pools.
Report more info about thin pool status:
out_of_data (D), metadata_read_only (M) and failed (F),
also as the health attribute.
2015-01-14 14:52:05 +01:00
Heinz Mauelshagen
cdd17eee37 raid_manip: fix multi-segment misallocation on 'lvconvert --repair'
An 'lvconvert --repair $RAID_LV' to replace a failed leg of a multi-segment
RAID10/4/5/6 logical volume can lead to allocation of (parts of) the replacement
image component pair on the physical volume of another image component
(e.g. image 0 allocated on the same PV as image 1 silently impeding resilience).

The patch fixes this severe resilience issue by prohibiting allocation on PVs
already holding other legs of the RAID set. It allows allocation of free space
on any operational PV already holding parts of the image component pair.
2015-01-14 13:41:55 +01:00
Peter Rajnoha
fb7e2ff493 metadata: add "Failed to write VG <vg_name>." on failed vg_write and revert previous patch
This is better than the previous patch, which changed log_warn to
log_error: we can have multiple MDAs, and if one of them fails to be
written, we can still continue with the other MDAs if we're in a mode
where we can handle missing PVs - so keep the log_warn for a single
failed MDA write as it was before.

However, add a log_error with "Failed to write VG <vg_name>." in
case we're not handling missing PVs or no MDA was written at all
during the VG write process. This also prevents an internal error in
which vg_write fails and we're not issuing any other log_error
in the vg_write caller or above, so we'd end up with:
  "Internal error: Failed command did not use log_error".
2015-01-09 14:04:44 +01:00
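The resulting logging policy, in sketch form (names and signature are
illustrative, not the lvm2 vg_write code):

#include <stdbool.h>
#include <stdio.h>

bool vg_write_sketch(const char *vg_name, int mda_count,
                     int mdas_written, bool handles_missing_pvs)
{
        int failed = mda_count - mdas_written;

        /* A single failed MDA is only a warning; others may still work. */
        if (failed)
                fprintf(stderr, "WARNING: %d of %d MDAs failed to write.\n",
                        failed, mda_count);

        /* Error out when nothing was written or missing PVs can't be
         * tolerated - so the failed command always carries a log_error. */
        if (!mdas_written || (failed && !handles_missing_pvs)) {
                fprintf(stderr, "Failed to write VG %s.\n", vg_name);
                return false;
        }

        return true;
}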
Peter Rajnoha
db7351d313 metadata: log_error instead of log_warn on failed mda write 2015-01-09 12:00:03 +01:00
Heinz Mauelshagen
aaecbb1818 raid: fix mirror image naming when converting from mirror to raid1
$ lvcreate -l1 -m1 --type mirror vg
  Logical volume "lvol0" created.
$ lvconvert --type raid1 vg/lvol0

Before:
$ lvs -a vg
  LV                        VG     Active Attr       LSize   Cpy%Sync Layout     Role
  lvol0                     vg     active rwi-a-r---   4.00m 100.00   raid,raid1 public
  [lvol0_mimage_0_rimage_0] vg     active iwi-aor---   4.00m          linear     private,raid,image
  [lvol0_mimage_1_rimage_1] vg     active iwi-aor---   4.00m          linear     private,raid,image
  [lvol0_rmeta_0]           vg     active ewi-aor---   4.00m          linear     private,raid,metadata
  [lvol0_rmeta_1]           vg     active ewi-aor---   4.00m          linear     private,raid,metadata

Incorrect name: lvol0_mimage_0_rimage_0

With this patch applied:
$ lvs -a vg
  LV               VG   Active Attr       LSize Cpy%Sync Layout     Role
  lvol0            vg   active rwi-a-r--- 4.00m 100.00   raid,raid1 public
  [lvol0_rimage_0] vg   active iwi-aor--- 4.00m          linear     private,raid,image
  [lvol0_rimage_1] vg   active iwi-aor--- 4.00m          linear     private,raid,image
  [lvol0_rmeta_0]  vg   active ewi-aor--- 4.00m          linear     private,raid,metadata
  [lvol0_rmeta_1]  vg   active ewi-aor--- 4.00m          linear     private,raid,metadata

Proper name: lvol0_rimage_0
2015-01-07 13:25:08 +01:00
Peter Rajnoha
ff1eca3b6f mirror: do not try to reactivate inactive mirror when removing its LVs which have missing PVs
When a mirror has missing PVs and there are mirror images on those
missing PVs, we delete the images, and during this delete operation we
also reactivate the LV. But if we're trying to reactivate an LV in a
cluster while it is not active, and at the same time cmirrord is not
running (which is OK since we may have created the mirror LV as
inactive), we end up with:
  "Error locking on node <node_name>: Shared cluster mirrors are not available."

That is because we're trying to activate the mirror LV without cmirrord.
However, there's no need to do this reactivation if the mirror LV (and
hence its sub LVs) were not activated before.

This issue caused failure in mirror-vgreduce-removemissing.sh test
recently with this sequence (excerpt from the test script):

  prepare_lvs_
  lvcreate -an -Zn -l2 --type mirror -m1 --nosync -n $lv1 $vg "$dev1" "$dev2" "$dev3":$BLOCKS
  mimages_are_on_ $lv1 "$dev1" "$dev2"
  mirrorlog_is_on_ $lv1 "$dev3"
  aux disable_dev "$dev2"
  vgreduce --removemissing --force $vg

The important thing about this test is that we're not running cmirrord;
we're creating the mirror with "-an" so it's inactive, and then
vgreduce --removemissing tries to reactivate the mirror images
as part of the internal _delete_lv function call, and since cmirrord
is not running, we end up with the "Shared cluster mirrors are not
available." error.
2015-01-07 11:16:19 +01:00
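The core of the fix in sketch form, using toy types (lvm2 has a real
lv_is_active() check; everything else here is illustrative):

#include <stdbool.h>

struct lv { bool active; }; /* toy stand-in */

static bool lv_is_active_sketch(const struct lv *lv)
{
        return lv->active;
}

static bool reactivate(struct lv *lv)
{
        (void) lv; /* the real code would reload and resume the device */
        return true;
}

/* Called while removing mirror images that sat on missing PVs. */
bool delete_images_sketch(struct lv *mirror)
{
        /* Skip reactivation for a mirror that was never active (e.g.
         * created with -an while cmirrord is not running); trying it
         * would fail with "Shared cluster mirrors are not available." */
        if (!lv_is_active_sketch(mirror))
                return true;

        return reactivate(mirror);
}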
Petr Rockai
e97023804a pvremove: Avoid metadata re-reads & related error messages. 2015-01-06 14:27:30 +01:00
Peter Rajnoha
509650ec4c cmirror: do not check for cmirror availability when creating deactivated cluster mirrors
When creating cluster mirrors that are not supposed to be activated
immediately after creation, we don't need to check for cmirrord availability.
We can just create these mirrors and let the check be done on activation
later on. This is an addendum to commit cba6186325.
2015-01-06 09:59:04 +01:00
Peter Rajnoha
cba6186325 cmirror: check for cmirror availability during cluster mirror creation and activation
When creating/activating clustered mirrors, we should have cmirrord
available and running. If it's not, we end up with rather cryptic
errors like:

$ lvcreate -l1 -m1 --type mirror vg
  Error locking on node 1: device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.

$ vgchange -ay vg
  Error locking on node node 1: device-mapper: reload ioctl on failed: Invalid argument

This patch adds a check for cmirrord availability and errors out
properly, also giving a more precise error message so users are able
to identify the source of the problem easily:

$ lvcreate -l1 -m1 --type mirror vg
  Shared cluster mirrors are not available.

$ vgchange -ay vg
  Error locking on node 1: Shared cluster mirrors are not available.

Exclusively activated cluster mirror LVs are OK even without cmirrord:

$ vgchange -aey vg
  1 logical volume(s) in volume group "vg" now active
2015-01-05 16:54:07 +01:00
Zdenek Kabelac
f3bd9a2797 raid: properly rename split image
When we split a leg from a raid LV, we take a proper new lock for the
new LV. However, for now activation only checks the 'existence' of the
device UUID, but it does not validate that the device has a proper name.

As a quick fix, call suspend()/resume() to rename the LV after splitting
the mirror.
2014-12-05 13:39:42 +01:00
Peter Rajnoha
a5baf13a06 pool: fix typo in error message: then -> than 2014-12-04 09:18:16 +01:00
Alasdair G Kergon
a057f40155 mirror: Validate raid region size config setting.
If necessary, round down to a power of 2 the raid/mirror region size
taken from the config files.
2014-12-03 22:47:08 +00:00
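One way to round a region size down to a power of two, as this
validation requires (a sketch, not the lvm2 code):

#include <stdint.h>

uint32_t round_down_pow2(uint32_t n)
{
        uint32_t p = 1;

        /* Stop at the highest set bit; the halved bound avoids overflow. */
        while (p <= n / 2)
                p <<= 1;

        return n ? p : 0;
}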
Alasdair G Kergon
de53e0955d mirror: Restrict region size to power of 2. 2014-12-02 14:24:21 +00:00
Petr Rockai
2c3db52356 metadata: Add cache_policy to lvcreate_params and honour it. 2014-11-27 20:20:48 +01:00
Zdenek Kabelac
2de11c9e9e thin: add missing 64KB rounding
When the chunk size needs to be estimated, the code failed to round
to proper 64KB boundaries (or a power of 2 for the older thin pool driver).
So for some data and metadata sizes (e.g. 10GB and 4MB) it resulted
in an incorrect chunk size (not a multiple of 64KB).

Fix it by adding proper rounding, and also use one routine for the two
places where the same calculation is made.

Also fix the incorrectly printed warning that used 'ffs()'
(which returns the first 'least significant' bit in a word)
and was not really giving any useful size info; replace it
with the properly estimated chunk size.
2014-11-26 09:29:25 +01:00
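A sketch of the rounding being fixed here, with sizes in 512-byte
sectors (illustrative names, not the lvm2 routine):

#include <stdint.h>

#define SECTORS_64KB 128U /* 64 KiB in 512-byte sectors */

/* Round an estimated thin-pool chunk size to a 64 KiB multiple. */
uint64_t round_to_64kb_multiple(uint64_t chunk_sectors)
{
        uint64_t r = (chunk_sectors / SECTORS_64KB) * SECTORS_64KB;

        return r ? r : SECTORS_64KB; /* never round down to zero */
}

/* Older thin pool driver: round up to a power of 2 instead. */
uint64_t round_up_pow2(uint64_t chunk_sectors)
{
        uint64_t p = SECTORS_64KB;

        while (p < chunk_sectors)
                p <<= 1;

        return p;
}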
Peter Rajnoha
62f3a4d2d8 pvresize: fix size in 'Resizing to ...' verbose message to show proper result size 2014-11-25 15:19:10 +01:00
Petr Rockai
c75ae0846e cache: Implement 'default' as a policy settings value to clear the record. 2014-11-20 16:51:07 +01:00
Petr Rockai
d22ffd8c28 cache: Add lv_cache_setpolicy to cache_manip.c. 2014-11-20 16:51:06 +01:00
Zdenek Kabelac
9f2961f259 cache: check for internal error
Don't try to duplicate NULL on internal error path.
2014-11-20 16:35:46 +01:00
Zdenek Kabelac
d7985ebead thin: fix error path
Print pool name and not the origin name.
2014-11-19 18:58:30 +01:00
Zdenek Kabelac
38200c2000 cleanup: add '.' to log messages 2014-11-14 18:12:35 +01:00