mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

1824 Commits

Author SHA1 Message Date
Zdenek Kabelac
434031719e raid: check lock holding LV
Since raid could be used as a stacked LV - check the lock holding LV
for the proper locking type for clustered usage.
2015-01-30 14:16:27 +01:00
Zdenek Kabelac
2055b04c11 cleanup: indent tabs 2015-01-30 12:33:52 +01:00
Zdenek Kabelac
2e35c68122 lv_manip: add for_each_sub_lv_except_pools()
for_each_sub_lv() now also scans pools in depth, however for
rename we actually do want to skip pools.

So add a new for_each_sub_lv_except_pools() to be used by rename;
every other user of for_each_sub_lv() scans every sub LV, pools
included.

This is, for example, necessary for a properly working preload of pools
that use raid arrays.
2015-01-30 12:33:52 +01:00
Peter Rajnoha
531cc58d89 lvm2app: fix lvm_lv_get_attr regression causing unknown values
This is a regression from v115 where some of the fields/properties
were converted to using the common "struct lvinfo" and
"struct lv_seg_status" so we don't need to issue the info and status
ioctls several times per reported line. Not all fields are
converted yet, but one that *is* converted is the lv_attr field
with the lv_attr_dup counterpart used in the lvm_lv_get_attr lvm2app fn.

These changes were introduced with e34b004422
and later - this patch introduced the "info_ok" field in the
lv_with_info_and_seg_status structure which encapsulates the lvinfo
and lv_seg_status struct.

For lv_attr_dup, the code missed the assignment of the "info_ok"
flag which saves the result of the lv_info_with_seg_status call.
Hence such info was marked as unusable (unknown) and it was returned
as such via the lvm_lv_get_attr lvm2app fn.
2015-01-30 09:53:34 +01:00
Zdenek Kabelac
553f37da71 raid: lock holder will skip visible raid LVs
RAID marks legs as VISIBLE with the notion that it is no longer a
true raid leg - so skip tree scanning and take this LV
as a top-level LV.
2015-01-28 13:45:27 +01:00
Zdenek Kabelac
93b9015760 raid: fix raid image splitting
When a raid leg is extracted, the preload code now handles this state
correctly and puts a proper new table entry into the dm tree,
so the activation of the extracted leg and removed metadata works
after commit.
2015-01-28 13:45:18 +01:00
Peter Rajnoha
0fddc5ab5c coverity: missing return value check
Reported by coverity for recently added code - _avoid_pvs_with_other_images_of_lv,
which calls process_each_sub_lv without checking the return value.
2015-01-22 10:11:19 +01:00
Peter Rajnoha
338d98be97 cleanup: for commit 7bcb3fb02d 2015-01-21 11:29:12 +01:00
Peter Rajnoha
7bcb3fb02d report: rename lv_error_when_full field to lv_when_full and display either "error", "queue" or ""
Rename the original lv_error_when_full field to lv_when_full and also
convert it from a binary field to a string field displaying three
possible values: "error", "queue" or "" (blank for undefined).

$ lvs vg/pool vg/pool1 vg/linear_lv -o+lv_when_full
  LV        VG   Attr       LSize Data%  Meta%  WhenFull
  linear_lv vg   -wi-a----- 4.00m
  pool      vg   twi-aotz-- 4.00m 0.00   0.98   queue
  pool1     vg   twi-a-tz-- 4.00m 0.00   0.88   error

For -S|--select these synonyms are recognized:

"error" -> "error when full", "error if no space"
"queue" -> "queue when full", "queue if no space"
   ""   -> "undefined"
2015-01-21 10:50:32 +01:00
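
As a hedged illustration of the new field and the -S synonyms above
(the VG and pool names here are made up):

$ lvs -o lv_name,lv_when_full -S 'lv_when_full="error"' vg
$ lvs -o lv_name,lv_when_full -S 'lv_when_full="queue when full"' vg
$ lvs -o lv_name,lv_when_full -S 'lv_when_full="undefined"' vg
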
Zdenek Kabelac
87e80b6aac report: proper lv_attr_dup emulation
We need to create a mempool for proper emulation of lv_attr_dup
for lvm2api.
2015-01-20 16:24:45 +01:00
Zdenek Kabelac
d80d832ae9 report: seg_monitor undefined
Add an 'undefined' value for segments which do not support monitoring.
Fixes crash for commands like 'pvs -o+seg_monitor'.
2015-01-20 15:02:10 +01:00
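
For reference, reporting the field no longer crashes and segments
without monitoring support show the undefined value (a hedged example;
the VG name is made up):

$ pvs -o+seg_monitor
$ lvs -o lv_name,seg_monitor vg
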
Zdenek Kabelac
b3a348c03c report: use same info also for lv_attr
Recently the single 'status' code has been used for a number of cache
features.

Extend the API a little bit to also allow usage for lv_attr_dup.

As the function itself is used in lvm2api - add a new function:
lv_attr_dup_with_info_and_seg_status() that is able to use the
grabbed info & status information.

report_init() now uses a directly passed lvdm struct pointer which
holds the information whether lv_info() was correctly obtained or
whether there was some error when trying to read it.

Move the 'health' attribute to status.
TODO: convert the raid function to use the already known status.
2015-01-20 14:58:41 +01:00
Zdenek Kabelac
07eb1c7dc8 cleanup: add lv_is_error_when_full() macro
Like with other status bits, use a macro for testing.
(in-release update)
2015-01-20 14:52:06 +01:00
Heinz Mauelshagen
302b6c99a7 raid_manip: v2 fix multi-segment misallocation on 'lvconvert --repair'
The previous patch fell short WRT disabling allocation on PVs holding other
legs of the RAID LV persistently; this patch introduces an internal,
transient PV flag PV_ALLOCATION_PROHIBITED to address this very problem.

General problem description for completeness:

An 'lvconvert --repair $RAID_LV' to replace a failed leg of a multi-segment
RAID10/4/5/6 logical volume can lead to allocation of (parts of) the replacement
image component pair on the physical volume of another image component
(e.g. image 0 allocated on the same PV as image 1, silently impeding resilience).

The patch fixes this severe resilience issue by prohibiting allocation on PVs
already holding other legs of the RAID set. It allows allocation of free space
on any operational PV already holding parts of the image component pair.
2015-01-16 13:44:16 +01:00
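
A minimal sketch of the repair scenario described above (the VG/LV
names are made up); after the repair no two images of the RAID LV
should end up sharing a PV:

$ lvconvert --repair vg/raid5_lv
$ lvs -a -o lv_name,devices vg
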
Zdenek Kabelac
2908ab3eed thin: errorwhenfull support
Support the error_if_no_space feature for thin pools.
Report more info about thin pool status:
out_of_data (D), metadata_read_only (M) and failed (F),
also as the health attribute.
2015-01-14 14:52:05 +01:00
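
A hedged usage sketch (VG/pool names are made up and the --errorwhenfull
option is assumed to be the user-facing switch for this feature):

$ lvcreate --type thin-pool -L4m --errorwhenfull y -n pool vg
$ lvchange --errorwhenfull n vg/pool
$ lvs -o lv_name,lv_attr,lv_when_full vg/pool
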
Heinz Mauelshagen
cdd17eee37 raid_manip: fix multi-segment misallocation on 'lvconvert --repair'
An 'lvconvert --repair $RAID_LV' to replace a failed leg of a multi-segment
RAID10/4/5/6 logical volume can lead to allocation of (parts of) the replacement
image component pair on the physical volume of another image component
(e.g. image 0 allocated on the same PV as image 1, silently impeding resilience).

The patch fixes this severe resilience issue by prohibiting allocation on PVs
already holding other legs of the RAID set. It allows allocation of free space
on any operational PV already holding parts of the image component pair.
2015-01-14 13:41:55 +01:00
Peter Rajnoha
fb7e2ff493 metadata: add "Failed to write VG <vg_name>." on failed vg_write and revert previous patch
Better than the previous patch which changed log_warn to log_error -
we can have multiple MDAs and if one of them fails to be written,
we can still continue with the other MDAs if we're in a mode where
we can handle missing PVs - so keep the log_warn for a single
failed MDA write as it was before.

However, add log_error with "Failed to write VG <vg_name>." in
case we're not handling missing PVs or no MDA was written at all
during VG write process. This also prevents an internal error in
which the vg_write fails and we're not issuing any other log_error
in vg_write caller or above, so we end up with:
  "Internal error: Failed command did not use log_error".
2015-01-09 14:04:44 +01:00
Peter Rajnoha
db7351d313 metadata: log_error instead of log_warn on failed mda write 2015-01-09 12:00:03 +01:00
Heinz Mauelshagen
aaecbb1818 raid: fix mirror image naming when converting from mirror to raid1
$ lvcreate -l1 -m1 --type mirror vg
  Logical volume "lvol0" created.
$ lvconvert --type raid1 vg/lvol0

Before:
$ lvs -a vg
  LV                        VG     Active Attr       LSize   Cpy%Sync Layout     Role
  lvol0                     vg     active rwi-a-r---   4.00m 100.00   raid,raid1 public
  [lvol0_mimage_0_rimage_0] vg     active iwi-aor---   4.00m          linear     private,raid,image
  [lvol0_mimage_1_rimage_1] vg     active iwi-aor---   4.00m          linear     private,raid,image
  [lvol0_rmeta_0]           vg     active ewi-aor---   4.00m          linear     private,raid,metadata
  [lvol0_rmeta_1]           vg     active ewi-aor---   4.00m          linear     private,raid,metadata

Incorrect name: lvol0_mimage_0_rimage_0

With this patch applied:
$ lvs -a vg
  LV               VG   Active Attr       LSize Cpy%Sync Layout     Role
  lvol0            vg   active rwi-a-r--- 4.00m 100.00   raid,raid1 public
  [lvol0_rimage_0] vg   active iwi-aor--- 4.00m          linear     private,raid,image
  [lvol0_rimage_1] vg   active iwi-aor--- 4.00m          linear     private,raid,image
  [lvol0_rmeta_0]  vg   active ewi-aor--- 4.00m          linear     private,raid,metadata
  [lvol0_rmeta_1]  vg   active ewi-aor--- 4.00m          linear     private,raid,metadata

Proper name: lvol0_rimage_0
2015-01-07 13:25:08 +01:00
Peter Rajnoha
ff1eca3b6f mirror: do not try to reactivate inactive mirror when removing its LVs which have missing PVs
When a mirror has missing PVs and there are mirror images on those missing
PVs, we delete the images and during this delete operation we also
reactivate the LV. But if we're trying to reactivate the LV in a cluster
while the LV is not active and at the same time cmirrord is not running (which
is OK since we may have created the mirror LV as inactive), we end up
with:
  "Error locking on node <node_name>: Shared cluster mirrors are not available."

That is because we're trying to activate the mirror LV without cmirrord.
However, there's no need to do this reactivation if the mirror LV (and
hence its sub LVs) were not activated before.

This issue caused failure in mirror-vgreduce-removemissing.sh test
recently with this sequence (excerpt from the test script):

  prepare_lvs_
  lvcreate -an -Zn -l2 --type mirror -m1 --nosync -n $lv1 $vg "$dev1" "$dev2" "$dev3":$BLOCKS
  mimages_are_on_ $lv1 "$dev1" "$dev2"
  mirrorlog_is_on_ $lv1 "$dev3"
  aux disable_dev "$dev2"
  vgreduce --removemissing --force $vg

The important thing about that test is that we're not running cmirrord,
we're creating the mirror with "-an" so it's inactive, and then
vgreduce --removemissing tries to reactivate the mirror images
as part of the internal _delete_lv function call; since cmirrord
is not running, we end up with the "Shared cluster mirrors are not
available." error.
2015-01-07 11:16:19 +01:00
Petr Rockai
e97023804a pvremove: Avoid metadata re-reads & related error messages. 2015-01-06 14:27:30 +01:00
Peter Rajnoha
509650ec4c cmirror: do not check for cmirror availability when creating deactivated cluster mirrors
When creating cluster mirrors that are not supposed to be activated
immediately after creation, we don't need to check for cmirrord availability.
We can just create these mirrors and let the check be done on activation
later on. This is an addendum to commit cba6186325.
2015-01-06 09:59:04 +01:00
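
A minimal sketch of the case this covers (the VG name is made up):
creating the clustered mirror inactive with -an no longer requires
cmirrord; the availability check is deferred until activation:

$ lvcreate -an -Zn -l1 -m1 --type mirror -n mirror_lv vg
$ vgchange -ay vg
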
Peter Rajnoha
cba6186325 cmirror: check for cmirror availability during cluster mirror creation and activation
When creating/activating clustered mirrors, we should have cmirrord
available and running. If it's not, we end up with rather cryptic
errors like:

$ lvcreate -l1 -m1 --type mirror vg
  Error locking on node 1: device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.

$ vgchange -ay vg
  Error locking on node 1: device-mapper: reload ioctl on failed: Invalid argument

This patch adds a check for cmirror availability and errors out
properly, also giving a more precise error message so users are able
to identify the source of the problem easily:

$ lvcreate -l1 -m1 --type mirror vg
  Shared cluster mirrors are not available.

$ vgchange -ay vg
  Error locking on node 1: Shared cluster mirrors are not available.

Exclusively activated cluster mirror LVs are OK even without cmirrord:

$ vgchange -aey vg
  1 logical volume(s) in volume group "vg" now active
2015-01-05 16:54:07 +01:00
Zdenek Kabelac
f3bd9a2797 raid: properly rename split image
When we split a leg from a raid LV - we take a proper new lock for the new LV.
However, for now activation checks only the 'existence' of the device UUID,
but it does not validate that the device has a proper name.

As a quick fix, call suspend()/resume() to rename after the mirror split.
2014-12-05 13:39:42 +01:00
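
For context, a raid1 image split is typically requested like this
(a hedged example; VG/LV names are made up) - the quick fix above
makes sure the split-off LV is also renamed properly in the kernel:

$ lvconvert --splitmirrors 1 --name split_lv vg/raid1_lv
$ lvs -o lv_name,lv_attr vg
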
Peter Rajnoha
a5baf13a06 pool: fix typo in error message: then -> than 2014-12-04 09:18:16 +01:00
Alasdair G Kergon
a057f40155 mirror: Validate raid region size config setting.
If necessary, round down to a power of 2 the raid/mirror region size
taken from the config files.
2014-12-03 22:47:08 +00:00
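
A hedged note on usage (VG/LV name and size are made up): a
non-power-of-2 region size coming from the configuration is now
rounded down to a power of 2 rather than used as-is; an explicit
power-of-2 region size can still be given on the command line:

$ lvcreate --type raid1 -m1 -R 2m -L16m -n r1 vg
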
Alasdair G Kergon
de53e0955d mirror: Restrict region size to power of 2. 2014-12-02 14:24:21 +00:00
Petr Rockai
2c3db52356 metadata: Add cache_policy to lvcreate_params and honour it. 2014-11-27 20:20:48 +01:00
Zdenek Kabelac
2de11c9e9e thin: add missing 64KB rounding
When the chunk size needs to be estimated, the code missed rounding
to proper 64KB boundaries (or a power of 2 for the older thin pool driver).
So for some data and metadata sizes (e.g. 10GB and 4MB) it resulted
in an incorrect chunk size (not a multiple of 64KB).

Fix it by adding proper rounding and also use one routine for the two
places where the same calculation is made.

Also fix the incorrectly printed warning that used 'ffs()'
(which returns the first 'least significant' bit in a word)
and was not really giving any useful size info; replace it
with the properly estimated chunk size.
2014-11-26 09:29:25 +01:00
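
A hedged example using the sizes mentioned above (VG/pool names are
made up; the chunk_size report field is assumed): the estimated chunk
size for 10GB of data with 4MB of metadata now comes out as a
multiple of 64KiB:

$ lvcreate --type thin-pool -L10G --poolmetadatasize 4M -n pool vg
$ lvs -o lv_name,chunk_size vg/pool
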
Peter Rajnoha
62f3a4d2d8 pvresize: fix size in 'Resizing to ...' verbose message to show proper result size 2014-11-25 15:19:10 +01:00
Petr Rockai
c75ae0846e cache: Implement 'default' as a policy settings value to clear the record. 2014-11-20 16:51:07 +01:00
Petr Rockai
d22ffd8c28 cache: Add lv_cache_setpolicy to cache_manip.c. 2014-11-20 16:51:06 +01:00
Zdenek Kabelac
9f2961f259 cache: check for internal error
Don't try to duplicate NULL on internal error path.
2014-11-20 16:35:46 +01:00
Zdenek Kabelac
d7985ebead thin: fix error path
Print pool name and not the origin name.
2014-11-19 18:58:30 +01:00
Zdenek Kabelac
38200c2000 cleanup: add '.' to log messages 2014-11-14 18:12:35 +01:00
Zdenek Kabelac
f36080a05d vg_read: correct warning
Use log_warn when we are effectively not creating an error -
we 'allowed' the inconsistent read for a reason - so it's just the
warning level at which we process an inconsistent VG. It's up to the
caller to decide later the error level of the command's return value;
in case of an error it needs to use log_error then.
2014-11-14 18:12:35 +01:00
Zdenek Kabelac
06e3f1757e vg_read: use new error flag
A failed recovery provides a different (NULL) VG than FAILED_INCONSISTENT.
Mark it with a different failure bit - since FAILED_INCONSISTENT is
supposed to contain something 'usable' (though inconsistent).
2014-11-14 18:09:27 +01:00
Zdenek Kabelac
6308a8b06d cache: wrong feature in seg is internal error 2014-11-13 17:44:31 +01:00
Zdenek Kabelac
8cb79dad0b pool: fix removal of pool metadata spare
Since we support a device stack of pools over a pool
(a thin-pool with a cache data volume), the existing code
is no longer able to detect an orphan _pmspare.

So instead do the _pmspare check after volume removal,
and remove the spare afterwards.
2014-11-13 13:09:07 +01:00
Peter Rajnoha
c03d8473ea coverity: fix possible dereference of NULL pointer
This could happen in case the pool segment was not found.

LVM2.2.02.112/lib/metadata/pool_manip.c:238:36: warning: Access to field 'segtype' results in a dereference of a null pointer (loaded from variable 'pool_seg')
2014-11-12 10:17:17 +01:00
Peter Rajnoha
ce8730b508 coverity: fix possible integer overflow
LVM2.2.02.112/lib/metadata/cache_manip.c:73: overflow_before_widen: Potentially overflowing expression "*pool_metadata_extents *vg->extent_size" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
LVM2.2.02.112/lib/activate/dev_manager.c:217: overflow_before_widen: Potentially overflowing expression "seg_status->seg->len * extent_size" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
LVM2.2.02.112/lib/activate/dev_manager.c:217: overflow_before_widen: Potentially overflowing expression "seg_status->seg->le * extent_size" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
2014-11-12 10:03:27 +01:00
Alasdair G Kergon
9a5910bdf9 pre-release 2014-11-11 14:13:00 +00:00
Zdenek Kabelac
8121074fda cache: pending_delete fixes 2014-11-11 13:32:41 +01:00
Zdenek Kabelac
20b22cd023 libdm: still better API
Do not use 'any' policy name as a value in the config tree - so we stick
with 'policy_settings' and an extra 'policy_name' for the libdm params.

Update lvm2 API as well.

Example of supported metadata:

 policy = "mq"
 policy_settings {
      migration_threshold = 2048
      sequential_threshold = 512
      random_threshold = 4
      read_promote_adjustment = 10
 }
2014-11-11 00:54:03 +01:00
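
From the command line, the same policy and settings can be applied
roughly like this (a hedged sketch; the VG/LV names are made up and
--cachepolicy/--cachesettings are assumed to be the user-facing
counterparts of this API, with 'default' clearing a stored setting):

$ lvchange --cachepolicy mq --cachesettings 'migration_threshold=2048 sequential_threshold=512' vg/cached_lv
$ lvchange --cachesettings 'migration_threshold=default' vg/cached_lv
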
Zdenek Kabelac
a7fc108298 mirror: layer remove doesn't work properly with mirrors 2014-11-10 22:32:43 +01:00
Zdenek Kabelac
e5d3f81285 cleanup: indents comments backtraces 2014-11-10 22:05:49 +01:00
Zdenek Kabelac
f5e265a07f cache: use LV_PENDING_DELETE 2014-11-10 22:05:49 +01:00
Zdenek Kabelac
6effcb16fc cache: option 2014-11-10 22:05:49 +01:00
Zdenek Kabelac
3e230a8ad8 cache: new API for libdm 2014-11-10 22:05:49 +01:00
Zdenek Kabelac
3dbcd2a1c9 cleanup: cache API get/set 2014-11-10 22:05:48 +01:00