Mirror of git://sourceware.org/git/lvm2.git
Commit Graph

5618 Commits

Author SHA1 Message Date
Heinz Mauelshagen
e1a1c20e95 lvconvert: enhance message
Enhance the message introduced by the last
commit f342e803ba.

Related: rhbz1439399
2017-06-19 21:40:38 +02:00
Heinz Mauelshagen
f342e803ba lvconvert: disable conversion of RAID LV under snapshot
Disable until we have a proper fix for reshape space allocation,
switching it to begin/end of rimages and activation.

Related: rhbz1439399
2017-06-19 21:08:52 +02:00
Heinz Mauelshagen
fb46175ce7 lvconvert: disable reshaping of RAID LVs in the cluster
Disable until we have a proper fix for reshape space allocation,
switching it to begin/end of rimages and activation in the cluster.

Related: rhbz1448116
Related: rhbz1461526
Related: rhbz1448123
2017-06-19 21:06:53 +02:00
Zdenek Kabelac
fbb3bffb22 debug: passing non-raid seg would be internal error 2017-06-16 17:04:02 +02:00
Zdenek Kabelac
9e96f96a41 cleanup: drop unused parameter 2017-06-16 17:04:02 +02:00
Zdenek Kabelac
cdb55c19cd cleanup: show what happens when passed prompt
When we show a prompt and the user passes --yes, we still
tell the user which action is going to happen.
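
A minimal sketch of the pattern (lvm2-style helper names from memory; illustrative, not the exact patch):

    /* Sketch: always state the pending action; --yes only skips the
     * interactive question. */
    log_print_unless_silent("Removing logical volume %s.",
                            display_lvname(lv));

    if (!arg_count(cmd, yes_ARG) &&
        yes_no_prompt("Really remove %s? [y/n]: ",
                      display_lvname(lv)) == 'n') {
            log_error("Logical volume %s not removed.",
                      display_lvname(lv));
            return 0;
    }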
2017-06-16 17:04:02 +02:00
Zdenek Kabelac
14816222a1 cleanup: improve debug tracing 2017-06-16 17:04:02 +02:00
Zdenek Kabelac
b7c9ec8a24 cleanup: use 'dm_get_status_raid'
Use a single 'dm' call to parse raid status.
(Avoids multiple parsers - even though we know it's slightly
less efficient.)
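
A minimal sketch of the single-parser approach, assuming libdm's dm_get_status_raid() as declared in libdevmapper.h:

    /* Sketch: one libdm helper parses the dm-raid status <params>
     * string, replacing private parsers. */
    static int _get_raid_status(struct dm_pool *mem, const char *params,
                                struct dm_status_raid **status)
    {
            if (!dm_get_status_raid(mem, params, status)) {
                    log_error("Failed to parse raid status.");
                    return 0;
            }

            return 1;
    }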
2017-06-16 17:04:01 +02:00
Zdenek Kabelac
59d646167f raid: report percent with segtype info
Enhance the reporting code so it does not need an 'extra' ioctl to
get the 'status' of a normal raid, and provides the percentage directly.

When we have a snapshot 'merging' into a raid origin, we still need to
get this secondary number with an extra status call - however, since a
'raid' is always a single-segment LV, we may skip the 'copy_percent'
call as we know the percent directly, and with better precision.

NOTE: for mirror we still base the reported number on the percentage of
transferred extents, which might get quite imprecise if a big extent
size is used while the volume itself is small, as the reporting then
jumps in steps far bigger than the precision the number suggests.

2nd NOTE: raid LV line reporting already requires quite a few extra status
calls for the same device - but that fix will need a slight code improvement.
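
Illustratively, once the status is parsed the percentage is a direct computation (field and helper names per libdevmapper.h; a sketch, not the verbatim change):

    /* Sketch: the sync percentage falls out of the parsed status; no
     * extra 'copy_percent' ioctl needed. */
    static dm_percent_t _raid_percent(const struct dm_status_raid *s)
    {
            return dm_make_percent(s->insync_regions, s->total_regions);
    }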
2017-06-16 17:04:01 +02:00
Heinz Mauelshagen
40e0dcf70d raid: adjust reshape feature flag check
Relative to the last commit ddf2a1d656:

adjust the dm-raid target version to 1.12.0, which indicates that
the mandatory kernel MD deadlock fixes related to reshaping
are present in the kernel.

Related: rhbz1443999
2017-06-16 15:58:47 +02:00
Heinz Mauelshagen
ddf2a1d656 Revert "lvconvert: reject changing number of stripes on single core"
This reverts commit 3719f4bc54
to allow for single core testing on kernels with deadlock
fixes relative to rhbz1443999.
2017-06-16 15:43:23 +02:00
Jonathan Brassow
6c4b2a6aa1 clean-up: Very picky update to comment - hopefully making it clearer 2017-06-14 15:22:04 -05:00
Jonathan Brassow
1f57a5263e clean-ups: remove unused var, add 'static' for local fn, adjust test
For the test clean-up, I was providing too many devices to the first
command - possibly allowing it to allocate in the wrong place.  I was
also not providing a device for the second command - virtually ensuring
the test was not performing correctly at times.
2017-06-14 14:49:42 -05:00
Jonathan Brassow
ddb14b6b05 lvconvert: Disallow removal of primary when up-converting (recovering)
This patch ensures that, under normal conditions (i.e. not during repair
operations), users are prevented from removing devices that would
cause data loss.

When a RAID1 is undergoing its initial sync, it is ok to remove all but
one of the images because they have all existed since creation and
contain all the data written since the array was created.  OTOH, if the
RAID1 was created as a result of an up-convert from linear, it is very
important not to let the user remove the primary image (the source of
all the data).  They should be allowed to remove any devices they want
and as many as they want as long as one original (primary) device is left
during a "recover" (aka up-convert).

This fixes bug 1461187 and includes the necessary regression tests.
2017-06-14 08:41:05 -05:00
Jonathan Brassow
4c0e908b0a RAID (lvconvert/dmeventd): Cleanly handle primary failure during 'recover' op
Add the checks necessary to distinguish the state of a RAID when the
primary source for syncing fails during the "recover" process.

It has been possible to hit this condition before (like when converting from
2-way RAID1 to 3-way and having the first two devices die during the "recover"
process).  However, this condition is now more likely since we treat linear ->
RAID1 conversions as "recover" now - so it is especially important we cleanly
handle this condition.
2017-06-14 08:39:50 -05:00
Jonathan Brassow
d34d2068dd lvconvert: Don't require a 'force' option during RAID repair.
Previously, we were treating non-RAID to RAID up-converts as a "resync"
operation.  (The most common example being 'linear -> RAID1'.)  RAID to
RAID up-converts or rebuilds of specific RAID images are properly treated
as a "recover" operation.

Since we were treating some up-convert operations as "resync", it was
possible to have scenarios where data corruption or data loss were
possibilities if the RAID hadn't been able to sync completely before a
loss of the primary source devices.  In order to ensure that the user took
the proper precautions in such scenarios, we required a '--force' option
to be present.  Unfortunately, the force option was rendered useless
because there was no way to distinguish the failure state of a potentially
destructive repair from a nominal one - making the '--force' option a
requirement for any RAID1 repair!

We now treat non-RAID to RAID up-converts properly as "recover" operations.
This eliminates the scenarios that can potentially cause data loss or
data corruption; and this eliminates the need for the '--force' requirement.
This patch removes the requirement to specify '--force' for RAID repairs.
2017-06-14 08:39:07 -05:00
Jonathan Brassow
c87907dcd5 lvconvert: linear -> raid1 upconvert should cause "recover" not "resync"
Two of the sync actions performed by the kernel (aka MD runtime) are
"resync" and "recover".  The "resync" refers to when an entirely new array
is going through the process of initializing (or resynchronizing after an
unexpected shutdown).  The "recover" is the process of initializing a new
member device to the array.  So, a brand new array with all new devices
will undergo "resync".  An array with replaced or added sub-LVs will undergo
"recover".

These two states are treated very differently when failures happen.  If any
device is lost or replaced during a "resync", there are no worries.  This is
because any writes created from the inception of the array have occurred to
all the devices and can be safely recovered.  Even though non-initialized
portions will still be resync'ed with uninitialized data, it is ok.  However,
if a pre-existing device is lost (aka, the original linear device in a
linear -> raid1 convert) during a "recover", data loss can be the result.
Thus, writes are errored by the kernel and recovery is halted.  The failed
device must be restored or removed.  This is the correct behavior.

Unfortunately, we were treating an up-convert from linear as a "resync"
when we should have been treating it as a "recover".  This patch
removes the special case for linear upconvert.  It allows each new image
sub-LV to be marked with a rebuild flag and treats the array as 'in-sync'.
This has the correct effect of causing the upconvert to be treated as a
"recover" rather than a "resync".  There is no need to flag these two states
differently in LVM metadata, because they are already considered differently
by the kernel RAID metadata.  (Any activation/deactivation will properly
resume the "recover" process and not a "resync" process.)

We make this behavior change based on the presence of dm-raid target
version 1.9.0+.
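
Schematically (a sketch, not the verbatim patch):

    /* Sketch: mark only the newly added image sub-LVs for rebuild and
     * leave the array flagged in-sync, so the kernel runs "recover"
     * rather than a full "resync". */
    static void _mark_new_images_for_rebuild(struct lv_segment *seg,
                                             uint32_t old_count)
    {
            uint32_t s;

            for (s = old_count; s < seg->area_count; s++)
                    seg_lv(seg, s)->status |= LV_REBUILD;
    }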
2017-06-14 08:35:22 -05:00
Heinz Mauelshagen
14d563accc raid: change reshape segtype flags
Commit 1c916ec5ff
missed new reshape flags.
2017-06-14 15:01:19 +02:00
Heinz Mauelshagen
08079ec420 lvconvert: fix detached SubLV deactivation in cluster
On conversion from raid10 to raid0 (takeover), all rmeta
devices and the rimage devices of mirrored stripes are
detached from the raid10 LV. The remaining rimage areas
are being shifted down into the slots of the detached
ones hence requiring renames to show proper _N suffix
sequences (e.g. 0,1,2,3 instead of 0,2,4,6).  Only the
top-level raid10 LV has a cluster lock, not the detached
SubLVs, thus their deactivation is impossible and e.g. the
rename from *_rimage_6 to *_rimage_3 will fail.  Fix by
activating exclusively before deactivating and removing.
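
The fix reduces to this ordering (sketch; activation calls as in lib/activate):

    /* Sketch: a detached sub-LV holds no cluster lock of its own, so
     * activate it exclusively on this node first; only then can it be
     * renamed, deactivated and removed. */
    if (!activate_lv_excl_local(cmd, sublv))
            return_0;
    if (!deactivate_lv(cmd, sublv))
            return_0;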

Resolves: rhbz1448123
2017-06-13 23:15:51 +02:00
Heinz Mauelshagen
1c916ec5ff raid: add reshape segtype flag support
Prohibit activation of reshaping RaidLVs on incompatible
lvm2 runtime by storing e.g. 'raid5+RESHAPE' segment type
strings in the lvm2 metadata.  An incompatible runtime that does
not support reshaping won't be able to activate those LVs, thus
avoiding potential data corruption.

Any new non-reshaping lvconvert command will reset the
segment type string from 'raid5+RESHAPE' to 'raid5'.

See commits
0299a7af1e and
4141409eb0
for segtype flag support.
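
Conceptually, the stored type string is just the segtype name with flag suffixes appended, e.g. (sketch):

    /* Sketch: an older lvm2 sees "raid5+RESHAPE" as an unknown segment
     * type and refuses to activate the LV - which is the point. */
    char segtype_str[64];

    if (dm_snprintf(segtype_str, sizeof(segtype_str), "%s+%s",
                    "raid5", "RESHAPE") < 0)
            return_0;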
2017-06-09 22:23:04 +02:00
Zdenek Kabelac
57379157f4 cleanup: update message 2017-06-09 21:49:19 +02:00
Zdenek Kabelac
db5938a4f8 cleanup: define really uses KB
Also clean up the units for the DEFAULT_THIN_POOL_OPTIMAL_METADATA_SIZE
define (128MB) and update the calculations for it.
2017-06-09 21:49:19 +02:00
Zdenek Kabelac
5e7db7d85d snapshot: fix reporting for merged old snapshot
When an old snapshot is merged, lvm2 can still report some data about
the merged 'snapshot' - i.e. the space it occupied in the VG.

This patch fixes a regression from commit:
6fd20be629

and resolves RHBZ: 1460161
2017-06-09 21:03:20 +02:00
Zdenek Kabelac
48ffb996c5 thin: disallow creation of too big thin pools
When a combination of thin-pool chunk size and thin-pool data size
goes beyond the addressable limit, such volume creation is directly
prohibited.

The maximum usable thin-pool size is calculated using the maximal
supported metadata size (even when it's created smaller) and the given
chunk size.  If the data size is found to be too big, the command
reports an error and the operation fails.

Previously the thin-pool was created anyway, but much of the thin-pool
data LV was not usable, and that space in the VG was wasted.
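
The shape of the check, as illustrative arithmetic only - max_metadata_mappings() is a hypothetical helper; the real code derives the limit from libdm's maximal metadata size:

    /* Sketch: reject pool data beyond what the largest possible
     * metadata device can map at the given chunk size. */
    static int _check_pool_size(uint64_t pool_data_sectors,
                                uint64_t chunk_size_sectors)
    {
            /* max_metadata_mappings() is hypothetical - it stands for
             * "mappings addressable by DM_THIN_MAX_METADATA_SIZE". */
            uint64_t max_data_sectors =
                    max_metadata_mappings(DM_THIN_MAX_METADATA_SIZE) *
                    chunk_size_sectors;

            if (pool_data_sectors > max_data_sectors) {
                    log_error("Thin pool data size exceeds addressable limit.");
                    return 0;
            }

            return 1;
    }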
2017-06-08 11:58:36 +02:00
Zdenek Kabelac
ba3d3210d7 cleanup: use DM limit define
For the calculation, use the size already defined in libdm, which
gives a better estimate of the maximal size of thin-pool metadata.
2017-06-08 11:07:58 +02:00
Zdenek Kabelac
719d099693 cleanup: rename internal define
More descriptive name of #define.
2017-06-08 11:07:18 +02:00
Heinz Mauelshagen
39703cb485 lvconvert: reject RAID conversions on inactive LVs
Only support RAID conversions on active LVs.

If we accepted e.g. upconverting linear -> raid1 on inactive
linear LVs, any LV flags passed to the kernel wouldn't be properly
cleared and would thus erroneously be passed on every activation.

Add respective check to lv_raid_change_image_count() and
move existing one in lv_raid_convert() for better messages.
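
A sketch of the added check (function names as in lvm2's metadata API):

    /* Sketch: refuse RAID conversions on inactive LVs. */
    if (!lv_is_active(lv)) {
            log_error("%s must be active to perform this conversion.",
                      display_lvname(lv));
            return 0;
    }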
2017-06-07 18:37:04 +02:00
Heinz Mauelshagen
3217e0cfea lvconvert: choose direct path to desired raid level
Remove superfluous raid5_n interim LV type from raid4 -> raid10 conversion.

Resolves: rhbz1458006
2017-06-02 14:30:57 +02:00
David Teigland
c98a25aab1 print warning about in-use orphans
Warn about a PV that has the in-use flag set, but appears in
the orphan VG (no VG was found referencing it).

There are a number of conditions that could lead to this:

. The PV was created with no mdas and is used in a VG with
  other PVs (with metadata) that have not yet appeared on
  the system.  So, no VG metadata is found by lvm which
  references the in-use PV with no mdas.

. vgremove could have failed after clearing mdas but
  before clearing the in-use flag.  In this case, the
  in-use flag needs to be manually cleared on the PV.

. The PV may have damaged/unrecognized VG metadata
  that lvm could not read.

. The PV may have no mdas, and the PVs with the metadata
  may have damaged/unrecognized metadata.
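
The warning itself is simple (sketch; is_used_pv() as the name of the in-use-flag test is an assumption):

    /* Sketch: PV carries the in-use flag yet no VG references it. */
    if (is_orphan(pv) && is_used_pv(pv))
            log_warn("WARNING: PV %s is marked in use but no VG was "
                     "found referencing it.", pv_dev_name(pv));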
2017-06-01 11:18:42 -05:00
David Teigland
f3c90e90f8 disable repairing in-use flag on orphan PVs
A PV holding VG metadata that lvm can't understand
(e.g. damaged, checksum error, unrecognized flag)
will appear as an in-use orphan, and will be cleared
by this repair code.  Disable this repair until the
code can keep track of these problematic PVs, and
distinguish them from actual in-use orphans.
2017-06-01 09:53:14 -05:00
Heinz Mauelshagen
3719f4bc54 lvconvert: reject changing number of stripes on single core
Reject any stripe adding/removing reshape on raid4/5/6/10 because
of related MD kernel deadlock on single core systems until
we get a proper fix in MD.

Related: rhbz1443999
2017-05-30 19:14:32 +02:00
Zdenek Kabelac
fb86bddda2 flags: improve unknown flags logic
Use the same logic as with an unknown segment type - preserve the name
fully with all its flags, just with the UNKNOWN segment type bits set.
2017-05-30 18:43:45 +02:00
Zdenek Kabelac
d1ac6108c3 flags: restore same logic with MISSING
Since lvmetad is using 'MISSING' in status for 'another' purpose,
for now we also need to support reading the flag from this place.

Until this is fixed properly, we accept both flags - although lvm2
will only print it in flags.
2017-05-30 16:16:29 +02:00
Zdenek Kabelac
4141409eb0 flags: add segtype flag support
Switch METADATA_FORMAT flag usage to be stored via segtype
instead of 'status' flag which appeared to cause major
incompatibility troubles.

For backward compatibility, segtype flags are still also accepted via
the 'status' bits that were used from version 2.02.169, so metadata
saved by that lvm2 version should still work nicely, although metadata
newly saved by this version will no longer work on that older lvm2
version.
2017-05-29 14:52:56 +02:00
Zdenek Kabelac
0299a7af1e flags: add read and print of segtype flag
Allow storing LV status bits with the segment type name field.
Switching to this since this field has better support for compatibility
with older versions of lvm2 - such an unknown segtype will not make the
whole metadata invisible to older lvm2 code; just the particular LV
with the unknown segment type becomes unusable.
2017-05-29 14:49:41 +02:00
Zdenek Kabelac
1bb0c5197f cleanup: backtrace
Add debug backtrace.
2017-05-29 14:48:33 +02:00
Zdenek Kabelac
966d1130db cleanup: separate type and mask
Split the misused 'enum' into 2 fields - one for the type
of PV, VG or LV, and the other for the mask.
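
The shape of the split, with hypothetical names:

    /* Sketch: what the object is and which status bits are legal for
     * it live in separate fields instead of one overloaded enum. */
    enum report_object_type { OBJ_TYPE_PV, OBJ_TYPE_VG, OBJ_TYPE_LV };

    struct flag_definition {
            enum report_object_type type;   /* PV, VG or LV */
            uint64_t mask;                  /* permitted status bits */
    };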
2017-05-29 14:47:26 +02:00
Zdenek Kabelac
8e0bc73eba cleanup: bad flag is internal error here
Convert to internal error.
2017-05-29 14:47:16 +02:00
Heinz Mauelshagen
65b10281f8 Proper dm_snprintf return checks 2017-05-24 14:00:44 +02:00
Heinz Mauelshagen
3da5cdc5dc Fix typo 2017-05-24 13:47:45 +02:00
David Teigland
7a0f46e2f8 add comment about PV in-use repair
copied from commit message for
d97f1c89de
2017-05-23 16:59:46 -05:00
Alasdair G Kergon
57492a6094 raid: Drop unnecessary/incorrect use of dm_pool_free 2017-05-23 01:51:04 +01:00
Alasdair G Kergon
fbe7464df5 metadata: Unlock VG on more _vg_make_handle error paths
Internal error: VG lock vg0 must be requested before vg3, not after.
Internal error: 3 device(s) were left open and have been closed.
2017-05-23 01:38:02 +01:00
Alasdair G Kergon
d1ddfc4085 format_text: More internal errors if given invalid internal metadata
Three more messages to ensure each failure in out_areas() results in a
low-level message instead of sometimes just <backtrace>.
2017-05-22 23:30:34 +01:00
Heinz Mauelshagen
2bf01c2f37 lvconvert: fix logic in automatic settings of possible (raid) LV types
Commit 5fe07d3574 failed to set raid5 types
properly on conversions from raid6.  It always enforced raid6_ls_6
for types raid6/raid6_zr/raid6_nr/raid6_nc, thus requiring 3 conversions
instead of 2 when asking for raid5_{la,rs,ra,n}.

Related: rhbz1439403
2017-05-18 16:20:39 +02:00
Heinz Mauelshagen
9c651b146e lvconvert: fix indent and typo in last commit 2017-05-18 00:43:20 +02:00
Heinz Mauelshagen
5fe07d3574 lvconvert: enhance automatic settings of possible (raid) LV types
Offer possible interim LV types and display their aliases
(e.g. raid5 and raid5_ls) for all conversions between
striped and any raid LVs in case user requests a type
not suitable to direct conversion.

E.g. running "lvconvert --type raid5 LV" on a striped
LV will replace raid5 aka raid5_ls (rotating parity)
with raid5_n (dedicated parity on last image).
User is asked to repeat the lvconvert command to get to the
requested LV type (raid5 aka raid5_ls in this example)
when such replacement occurs.
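
Roughly (sketch; the _takeover_*() helpers are hypothetical names for the type-selection logic):

    /* Sketch: when the requested type is not directly reachable,
     * substitute the interim type and ask the user to re-run. */
    if (!_takeover_possible(seg->segtype, requested_segtype)) {
            requested_segtype = _takeover_interim(seg->segtype,
                                                  requested_segtype);
            log_warn("WARNING: Replaced LV type by %s; repeat the "
                     "lvconvert command to reach the requested type.",
                     requested_segtype->name);
    }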

Resolves: rhbz1439403
2017-05-18 00:18:15 +02:00
David Teigland
dfc58c637b config: keep description lines under 80
As far as possible, it's nice to keep the config
description lines from going over 80 columns.
2017-05-12 09:55:16 -05:00
Alasdair G Kergon
80900dcf76 metadata: Fix metadata repair when devs still missing.
_check_reappeared_pv() incorrectly clears the MISSING_PV flags of
PVs with unknown devices.
While one caller avoids passing such PVs into the function, the other
doesn't.  Move the check inside the function so it's not forgotten.

Without this patch, if the normal VG reading code tries to repair
inconsistent metadata while there is an unknown PV, it incorrectly
considers the missing PVs no longer to be missing and produces
incorrect 'pvs' output omitting the missing PV, for example.

Easy reproducer:
Create a VG with 3 PVs pv1, pv2, pv3.
Hide pv2.
Run vgreduce --removemissing.
Reinstate the hidden PV pv2 and at the same time hide a different PV
pv3.
Run 'pvs' - incorrect output.
Run 'pvs' again - correct output.

See https://bugzilla.redhat.com/1434054
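
The essence of the fix (sketch; the real function's signature differs and it does more bookkeeping):

    /* Sketch: the unknown-device test now lives inside the function,
     * so no caller can forget it. */
    static void _check_reappeared_pv(struct physical_volume *pv)
    {
            /* A PV whose device is still unknown really is missing:
             * never clear MISSING_PV for it. */
            if (!pv->dev)
                    return;

            pv->status &= ~MISSING_PV;
    }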
2017-05-11 02:17:34 +01:00
David Teigland
d45531712d vg_read: check for NULL dev to avoid segfault
There are certain situations (not fully understood)
where is_missing_pv() is false, but pv->dev is NULL,
so this adds a check for NULL pv->dev after is_missing_pv()
to avoid a segfault.
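
The added guard, schematically:

    /* Sketch: is_missing_pv() can be false while pv->dev is NULL, so
     * test the pointer before any dereference. */
    if (!is_missing_pv(pv) && !pv->dev) {
            log_error("Unexpected missing device for PV.");
            return 0;
    }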
2017-05-10 10:45:41 -05:00