mirror of git://sourceware.org/git/lvm2.git synced 2024-10-28 20:25:52 +03:00
Commit Graph

65 Commits

Author SHA1 Message Date
Jonathan Brassow
b35fb0b15a raid/misc: Allow creation of parallel areas by LV vs segment
I've changed build_parallel_areas_from_lv to take a new parameter
that allows the caller to build parallel areas by LV vs by segment.
Previously, the function created a list of parallel areas for each
segment in the given LV.  When it came time for allocation, the
parallel areas were honored on a segment basis.  This was problematic
for RAID because any new RAID image must avoid being placed on any
PVs used by other images in the RAID.  For example, if we have a
linear LV that has half its space on one PV and half on another, we
do not want an up-convert to use either of those PVs.  It should
especially not wind up with the following, where the first portion
of one LV is paired up with the second portion of the other:
------PV1-------  ------PV2-------
[ 2of2 image_1 ]  [ 1of2 image_1 ]
[ 1of2 image_0 ]  [ 2of2 image_0 ]
----------------  ----------------
Previously, it was possible for this to happen.  The change makes
it so that the returned parallel areas list contains one "super"
segment (seg_pvs) with a list of all the PVs from every actual
segment in the given LV and covering the entire logical extent range.

This change allows RAID conversions to function properly when there
are existing images that contain multiple segments that span more
than one PV.
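
A minimal sketch of the by-LV behavior, using simplified stand-in types
(not LVM2's real parallel-area structures or the actual
build_parallel_areas_from_lv signature): every segment's PV list is
folded into one area covering the whole extent range.

    /* Sketch only: hypothetical, simplified types; pointer identity
     * stands in for PV identity. */
    #define MAX_PVS 16

    struct seg_pvs {
        unsigned le, len;            /* logical extent range covered */
        const char *pvs[MAX_PVS];    /* PVs a new image must avoid */
        unsigned pv_count;
    };

    /* Fold per-segment PV lists into one "super" segment spanning the LV. */
    static struct seg_pvs parallel_area_whole_lv(const struct seg_pvs *segs,
                                                 unsigned seg_count)
    {
        struct seg_pvs super = { .le = 0, .len = 0, .pv_count = 0 };
        unsigned s, p, q, dup;

        for (s = 0; s < seg_count; s++) {
            super.len += segs[s].len;          /* cover the full range */
            for (p = 0; p < segs[s].pv_count; p++) {
                for (dup = q = 0; q < super.pv_count; q++)
                    if (super.pvs[q] == segs[s].pvs[p])
                        dup = 1;
                if (!dup && super.pv_count < MAX_PVS)
                    super.pvs[super.pv_count++] = segs[s].pvs[p];
            }
        }
        return super;
    }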
2014-06-25 21:20:41 -05:00
Peter Rajnoha
3208396ce5 coverity: fix issues reported by coverity 2014-06-24 14:58:53 +02:00
Peter Rajnoha
cfed0d09e8 report: select: refactor: move percent handling code to libdm for reuse 2014-06-17 16:27:21 +02:00
Zdenek Kabelac
9240aca369 raid: cleanup error messages
Add log_error messages on error paths.
2014-05-27 17:08:49 +02:00
Jonathan Brassow
6c6468f91d RAID: Improve an error message
When down-converting a RAID1 LV, if the user specifies too few devices,
they will get a confusing message.
Ex:
[root]# lvcreate -m 2 --type raid1 -n raid -L 500M taft
  Logical volume "raid" created

[root]# lvconvert -m 0 taft/raid /dev/sdd1
  Unable to extract enough images to satisfy request
  Failed to extract images from taft/raid

This patch makes the error message a bit clearer by telling the user
the count they are trying to remove and the number of devices they
supplied.

[root@bp-01 lvm2]# lvcreate --type raid1 -m 3 -L 200M -n lv vg
  Logical volume "lv" created

[root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sdb1
  Unable to remove 3 images:  Only 1 device given.
  Failed to extract images from vg/lv

[root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sd[bc]1
  Unable to remove 3 images:  Only 2 devices given.
  Failed to extract images from vg/lv

[root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sd[bcd]1
[root@bp-01 lvm2]# lvs -a -o name,attr,devices vg
  LV   Attr       Devices
  lv   -wi-a----- /dev/sde1(1)

This patch doesn't work in all cases.  The user can specify the right
number of devices, but include devices that do not belong to the LV.
This will produce the old error message:
[root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sd[bcf]1
  Unable to extract enough images to satisfy request
  Failed to extract images from vg/lv
However, I think this error message is sufficient for this case.
2014-04-03 16:57:41 -05:00
Jonathan Brassow
4b6e3b5e5e allocation: Allow approximate allocation when specifying size in percent
Introduce a new parameter called "approx_alloc" that is set when the
desired size of a new LV is specified in percentage terms.  If set,
the allocation code tries to get as much space as it can but does not
fail if it can at least get some.

One of the practical implications is that users can now specify 100%FREE
when creating RAID LVs, like this:
~> lvcreate --type raid5 -i 2 -l 100%FREE -n lv vg
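
A hedged sketch of the policy (illustrative names, not the actual
allocator):

    #include <stdint.h>

    /* If approx_alloc is set, succeed with whatever extents are free;
     * otherwise a shortfall is a hard failure.  Sketch only. */
    static int choose_extents(uint64_t requested, uint64_t free_extents,
                              int approx_alloc, uint64_t *result)
    {
        if (free_extents >= requested) {
            *result = requested;
            return 1;
        }
        if (approx_alloc && free_extents > 0) {
            *result = free_extents;   /* take as much as we can get */
            return 1;
        }
        return 0;
    }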
2014-02-13 21:10:28 -06:00
Zdenek Kabelac
ef6c5795a0 raid: add temporary activation for raid metadata clear
Use LV_TEMPORARY when activating devices for clearing
raid metadata.
2014-02-04 14:51:05 +01:00
Zdenek Kabelac
8c96afd361 cleanup: use compound literals for wipe_lv
Optimize and cleanup recently introduced new function wipe_lv.
Use compound literals to get nicely initialized wipe_params struct.
Pass in lv as explicit argument for wipe_lv.
Use cmd from lv structure.
Initialize only non-null members so it's easy to see which
arguments are special.
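
The C99 idiom itself, shown with an illustrative stand-in for the real
wipe_params struct: members not named in the compound literal are
zero-initialized, so only the special arguments appear at the call site.

    #include <stdint.h>
    #include <stdio.h>

    struct wipe_params {              /* stand-in; real fields may differ */
        int do_zero;
        uint64_t zero_sectors;
        int do_wipe_signatures;
    };

    static void wipe(struct wipe_params wp)
    {
        printf("zero=%d sectors=%llu signatures=%d\n", wp.do_zero,
               (unsigned long long)wp.zero_sectors, wp.do_wipe_signatures);
    }

    int main(void)
    {
        /* Everything except the named members arrives zeroed. */
        wipe((struct wipe_params) { .do_zero = 1, .zero_sectors = 8 });
        return 0;
    }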
2013-11-28 12:45:52 +01:00
Peter Rajnoha
b6dab4e059 lv_manip: rename set_lv -> wipe_lv and include signature wiping capability
Use common wipe_lv (former set_lv) fn to do zeroing as well as signature
wiping if needed. Provide new struct wipe_lv_params to define the
functionality.

Bind "lvcreate -W/--wipesignatures y" with proper wipe_lv call.

Also, add "yes" and "force" to lvcreate_params so it's possible
to apply them for the prompt: "WARNING: %s detected on %s. Wipe it? [y/n]".
2013-11-27 15:48:15 +01:00
Jonathan Brassow
2691f1d764 RAID: Make RAID single-machine-exclusive capable in a cluster
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.

The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide.  This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them.  Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction.  This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case, but won't happen in
a cluster due to the necessity of acquiring a lock first.

The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID.  (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
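
A hedged sketch of the activation rule (stub types and names; LVM2's
real activation API differs):

    struct logical_volume { int active_remotely; };

    /* Stub standing in for LVM2's exclusive local activation call. */
    static int activate_lv_excl_local(struct logical_volume *lv)
    {
        return !lv->active_remotely;  /* EX lock on this node only */
    }

    /* RAID LVs and their sub-LVs are never activated cluster-wide. */
    static int raid_activate(struct logical_volume *lv)
    {
        if (lv->active_remotely)
            return 0;                 /* refuse: active elsewhere */
        return activate_lv_excl_local(lv);
    }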
2013-09-10 16:33:22 -05:00
Jonathan Brassow
ca51435153 Misc/RAID: Enable resume_lv to handle some renaming conflicts.
When images and their associated metadata are removed from a RAID1 LV,
the remaining sub-LVs are "shifted" down to fill the gaps.  For
example, if there is a 3-way mirror:
	[0][1][2]
and we remove device#0, the devices will be shifted down
	[1][2]
and renamed.
	[0][1]

This can create a problem for resume_lv (specifically,
dm_tree_activate_children) during the renaming process though.  This
is because it will attempt to rename the higher indexed sub-LVs first
and find that it cannot because there are currently other sub-LVs with
that name.  The solution is to check for a conflicting name before
attempting to rename.  If a conflict is found and that conflicting
sub-LV is also in the process of renaming, we can defer the current
rename until the conflicting sub-LV has renamed and cleared the
conflict.

Now that resume_lv can handle these types of rename conflicts, we can
remove the workaround in RAID that was attempting to resume a RAID1
LV from the bottom-up in order to force a proper rename in ascending
order before attempting a resume on the top-level LV.  This "hack"
only worked for single machine use-cases of LVM.  Clearing this up
paves the way for exclusive activation of RAID LVs in a cluster.
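
A sketch of the deferral check with simplified stand-in types (the real
logic lives in dm_tree_activate_children):

    #include <string.h>

    struct node {
        const char *name;       /* current sub-LV name */
        const char *new_name;   /* pending rename target, or NULL */
        int renamed;            /* set once this node has renamed */
    };

    /* Return 1 if 'n' may rename now, 0 if it must be deferred because
     * a sibling still holds the target name but will rename away. */
    static int rename_may_proceed(const struct node *n,
                                  const struct node *peers, unsigned count)
    {
        unsigned i;

        if (!n->new_name)
            return 1;
        for (i = 0; i < count; i++) {
            if (&peers[i] == n || peers[i].renamed)
                continue;
            if (!strcmp(peers[i].name, n->new_name))
                return 0;       /* retry after the sibling renames */
        }
        return 1;
    }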
2013-09-09 15:07:28 -05:00
Jonathan Brassow
f1e3640df3 Misc: Make get_pv_list_for_lv() available to more than just RAID
The function 'get_pv_list_for_lv' will assemble all the PVs that are
used by the specified LV.  It uses 'for_each_sub_lv' to traverse all
of the sub-lvs which may compose it.
2013-08-23 08:40:13 -05:00
Jonathan Brassow
06ac797f42 Clean-up: Replace 'lv_is_active' with more correct/specific variants
There are places where 'lv_is_active' was being used where it was
more correct to use 'lv_is_active_locally'.  For example, when checking
for the existence of a kernel instance before asking for its status.
Most of the time these would work correctly.  (RAID is only allowed on
non-clustered VGs at the moment, which means that 'lv_is_active' and
'lv_is_active_locally' would give the same result.)  However, it is
more correct to use the proper variant and it helps with future
scenarios where targets might be allowed exclusively (or clustered) in
a cluster VG.
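
The distinction in sketch form (stand-in struct; not LVM2's real
lvinfo types):

    struct lv_state { int active_locally; int active_remotely; };

    /* Active anywhere in the cluster. */
    static int lv_is_active(const struct lv_state *s)
    {
        return s->active_locally || s->active_remotely;
    }

    /* Active on this node - the right test before querying a kernel
     * instance for status, since the instance must exist here. */
    static int lv_is_active_locally(const struct lv_state *s)
    {
        return s->active_locally;
    }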
2013-05-16 10:36:56 -05:00
Zdenek Kabelac
dd4fdce16c cleanup: drop unused assignment
Assigned values are unused.
2013-04-21 23:14:04 +02:00
Zdenek Kabelac
5e7eae59da lv_manip: check remove_seg_from_segs_using_this_lv()
Add missing check for result of remove_seg_from_segs_using_this_lv().
Failure is reported as internal error.
2013-04-21 23:10:43 +02:00
Zdenek Kabelac
24f8daa13d raid: test for target_pvs
If target_pvs is NULL do not call lv_is_on_pvs()
2013-04-21 23:07:00 +02:00
Jonathan Brassow
2e0740f7ef RAID: Add writemostly/writebehind support for RAID1
'lvchange' is used to alter a RAID 1 logical volume's write-mostly and
write-behind characteristics.  The '--writemostly' parameter takes a
PV as an argument with an optional trailing character to specify whether
to set ('y'), unset ('n'), or toggle ('t') the value.  If no trailing
character is given, it will set the flag.
Synopsis:
        lvchange [--writemostly <PV>:{t|y|n}] [--writebehind <count>] vg/lv
Example:
        lvchange --writemostly /dev/sdb1:y --writebehind 512 vg/raid1_lv

The last character in the 'lv_attr' field is used to show whether a device
has the WriteMostly flag set.  It is signified with a 'w'.  If the device
has failed, the 'p'artial flag has priority.

Example ("nosync" raid1 with mismatch_cnt and writemostly):
[~]# lvs -a --segment vg
  LV                VG   Attr      #Str Type   SSize
  raid1             vg   Rwi---r-m    2 raid1  500.00m
  [raid1_rimage_0]  vg   Iwi---r--    1 linear 500.00m
  [raid1_rimage_1]  vg   Iwi---r-w    1 linear 500.00m
  [raid1_rmeta_0]   vg   ewi---r--    1 linear   4.00m
  [raid1_rmeta_1]   vg   ewi---r--    1 linear   4.00m

Example (raid1 with mismatch_cnt, writemostly - but failed drive):
[~]# lvs -a --segment vg
  LV                VG   Attr      #Str Type   SSize
  raid1             vg   rwi---r-p    2 raid1  500.00m
  [raid1_rimage_0]  vg   Iwi---r--    1 linear 500.00m
  [raid1_rimage_1]  vg   Iwi---r-p    1 linear 500.00m
  [raid1_rmeta_0]   vg   ewi---r--    1 linear   4.00m
  [raid1_rmeta_1]   vg   ewi---r-p    1 linear   4.00m

A new reportable field has been added for writebehind as well.  If
write-behind has not been set or the LV is not RAID1, the field will
be blank.
Example (writebehind is set):
[~]# lvs -a -o name,attr,writebehind vg
  LV            Attr      WBehind
  lv            rwi-a-r--     512
  [lv_rimage_0] iwi-aor-w
  [lv_rimage_1] iwi-aor--
  [lv_rmeta_0]  ewi-aor--
  [lv_rmeta_1]  ewi-aor--

Example (writebehind is not set):
[~]# lvs -a -o name,attr,writebehind vg
  LV            Attr      WBehind
  lv            rwi-a-r--
  [lv_rimage_0] iwi-aor-w
  [lv_rimage_1] iwi-aor--
  [lv_rmeta_0]  ewi-aor--
  [lv_rmeta_1]  ewi-aor--
2013-04-15 13:59:46 -05:00
Zdenek Kabelac
2e39392daf cleanup: remove unused lvl_idx 2013-04-12 11:26:31 +02:00
Jonathan Brassow
dc2ce71313 clean-up: Remove a FIXME question that has been settled
It is ok for us to use the shorthand 'lv_is_virtual' to detect error
targets in a RAID LV when searching for candidates for device replacement.
2013-02-20 15:03:58 -06:00
Jonathan Brassow
bd0ee420b5 RAID: Allow remove/replace of sub-LVs composed of error segments.
When a device fails, we may wish to replace those segments with an
error segment.  (Like when a 'vgreduce --removemissing' removes a
failed device that happens to be a RAID image/meta.)  We are then left
with images that we will eventually want to remove or replace.

This patch allows us to pull out these virtual "error" sub-LVs.  This
allows a user to 'lvconvert -m -1 vg/lv' to extract the bad sub-LVs.
Sub-LVs with error segments are considered for extraction before other
possible devices so that good devices are not accidentally removed.

This patch also adds the ability to replace RAID images that contain error
segments.  The user will still be unable to run 'lvconvert --replace'
because there is no way to address the 'error' segment (i.e. no PV
that it is associated with).  However, 'lvconvert --repair' can be
used to replace the image's error segment with a new PV.  This is also
the most appropriate way to do it, since the LV will continue to be
reported as 'partial'.
2013-02-20 14:58:56 -06:00
Jonathan Brassow
845852d6b4 RAID: Make 'vgreduce --removemissing' work with RAID LVs
Currently it is impossible to remove a failed PV which has a RAID LV
on it.  This patch fixes the issue by replacing the failed PV with an
'error' segment within the affected sub-LVs.  Once there is no longer
a RAID LV using the PV, it can be removed.

Most often, it is better to replace a failed RAID device with a spare.
(You can use 'lvconvert --repair <vg>/<LV>' to accomplish that.)
However, if there are no spares in the volume group and none will be
added, it is useful to be able to remove the failed device.

Following patches address the ability to perform 'lvconvert' operations
on RAID LVs that contain sub-LVs composed of 'error' segments.
2013-02-20 14:52:46 -06:00
Jonathan Brassow
0e4ffd9d3b clean-up: Rename lvm.conf setting 'mirror_region_size' to 'raid_region_size'
We have been using 'mirror_region_size' in lvm.conf as the default region
size for RAID logical volumes as well as mirror logical volumes.  Since
"raid" is more inclusive and representative than "mirror", I have changed
the name of this setting.  We must still check for the old setting and,
if both happen to be present, warn the user that it is being overridden
by the new one.
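
A hedged sketch of the fallback (hypothetical accessor and default, not
lvm.conf's real lookup API):

    #include <stdio.h>

    /* Hypothetical accessor: returns the setting or -1 if unset. */
    static int cfg_get(const char *key) { (void)key; return -1; }

    static int get_raid_region_size(void)
    {
        int new_val = cfg_get("activation/raid_region_size");
        int old_val = cfg_get("activation/mirror_region_size");

        if (new_val >= 0 && old_val >= 0)
            fprintf(stderr, "WARNING: raid_region_size overrides the "
                            "deprecated mirror_region_size setting.\n");
        if (new_val >= 0)
            return new_val;
        return old_val >= 0 ? old_val : 1024;  /* illustrative default */
    }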
2013-02-20 14:40:17 -06:00
Alasdair G Kergon
06abb2dd4c logging: classify log_debug messages
Place most log_debug() messages into a class.
2013-01-07 22:30:29 +00:00
Jonathan Brassow
970dfbcd69 RAID: Limit replacement of devices when array is not in-sync.
If a RAID array is not in-sync, replacing devices should not be allowed
as a general rule.  This is because the contents used to populate the
incoming device may be undefined, since the devices being read were
not in-sync.  The kernel enforces this rule (unless overridden) by
refusing to create an array that is not in-sync and includes a device
that needs to be rebuilt.

Since we cannot know the sync state of an LV if it is inactive, we must
also enforce the rule that an array must be active to replace devices.

That leaves us with the following conditions:
1) never allow replacement or repair of devices if the LV is inactive
2) never allow replacement if the LV is not in-sync
3) allow repair if the LV is not in-sync, but warn that contents may
   not be recoverable.

In the case where a user is performing the repair on the command line via
'lvconvert --repair', the warning is printed before the user is prompted
if they would like to replace the device(s).  If the repair is automated
(i.e. via dmeventd and policy is "allocate"), then the device is replaced
if possible and the warning is printed.
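
The three conditions, restated as a guard function (illustrative types;
the warning text paraphrases the description above):

    #include <stdio.h>

    struct raid_lv { int active; int in_sync; };
    enum raid_op { OP_REPLACE, OP_REPAIR };

    /* Rule 1: nothing while inactive.  Rule 2: replace requires
     * in-sync.  Rule 3: repair may proceed with a warning.  Sketch. */
    static int device_change_allowed(const struct raid_lv *lv,
                                     enum raid_op op)
    {
        if (!lv->active)
            return 0;
        if (!lv->in_sync) {
            if (op == OP_REPLACE)
                return 0;
            fprintf(stderr, "WARNING: array not in-sync; contents "
                            "may not be recoverable.\n");
        }
        return 1;
    }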
2012-12-18 14:40:42 -06:00
Jonathan Brassow
fb0cee9a66 RAID: Do not allow --splitmirrors on RAID10 logical volumes.
RAID10 does not have the ability to split off images for independent
use.  So, 'lvconvert --splitmirrors' will not work and must be
disallowed.
2012-11-21 18:39:26 -06:00
Zdenek Kabelac
f260f99d57 cleanup: switch log_error to log_warn
Use log_warn to print non-fatal warning messages.

Use of log_error would confuse the checker that tests
whether a proper error has been reported for a real error.
2012-10-17 15:41:35 +02:00
Zdenek Kabelac
9ee071705b cleanup: fix compiler warnings
Remove unused vars.
Move var declarations to the front of functions.
Fix some sign warnings.
2012-10-12 10:25:07 +02:00
Jonathan Brassow
886656e4ac RAID: Fix problems with creating, extending and converting large RAID LVs
MD's bitmaps can handle 2^21 regions at most.  The RAID code has always
used a region_size of 1024 sectors.  That means the size of a RAID LV was
limited to 1TiB.  (The user can adjust the region_size when creating a
RAID LV, which can affect the maximum size.)  Thus, creating, extending or
converting to a RAID LV greater than 1TiB would result in a failure to
load the new device-mapper table.

Again, the size of the RAID LV is not limited by how much space is allocated
for the metadata area, but by the limitations of the MD bitmap.  Therefore,
we must adjust the 'region_size' to ensure that the number of regions does
not exceed the limit.  I've added code to do this when extending a RAID LV
(which covers 'create' and 'extend' operations) and when up-converting -
specifically from linear to RAID1.
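
The constraint is simple arithmetic - LV size / region_size must not
exceed 2^21 - so the fix amounts to scaling region_size up.  A
standalone sketch (not the actual lvm2 code):

    #include <stdint.h>

    #define MAX_REGIONS (UINT64_C(1) << 21)  /* MD bitmap limit above */

    /* Double region_size (sectors) until the region count fits;
     * doubling keeps it a power of two. */
    static uint64_t adjust_region_size(uint64_t lv_sectors,
                                       uint64_t region_size)
    {
        while (lv_sectors / region_size > MAX_REGIONS)
            region_size *= 2;
        return region_size;
    }

For example, a 2TiB LV (2^32 sectors) with the old 1024-sector default
would need 2^22 regions, so region_size doubles once to 2048 sectors.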
2012-09-27 16:51:22 -05:00
Jonathan Brassow
2a6712ddef RAID1: Clear the LV_NOTSYNCED flag when a RAID1 LV is converted to linear
Failing to clear the LV_NOTSYNCED flag when converting a RAID1 LV to
linear can result in the flag being present after an upconvert - even
if the sync is performed when upconverting.
2012-09-14 16:26:53 -05:00
Jonathan Brassow
116bcb3ea4 RAID1: Like mirrors, do not allow adding images to LV created w/ --nosync
Mirrors do not allow upconverting if the LV has been created with --nosync.
We will enforce the same rule for RAID1.  It isn't hugely critical, since
the portions that have been written will be copied over to the new device
identically from either of the existing images.  However, the unwritten
sections may be different, causing the added image to be a hybrid of the
existing images.

Also, we are disallowing the addition of new images to a RAID1 LV that has
not completed the initial sync.  This may be different from mirroring, but
that is due to the fact that the 'mirror' segment type "stacks" when adding
a new image and RAID1 does not.  RAID1 will rebuild a newly added image
"inline" from the existant images, so they should be in-sync.
2012-09-14 16:12:52 -05:00
Jonathan Brassow
cdb0339319 RAID: Disallow addition of RAID images while array is not in-sync
We cannot add images to a RAID array while it is not in-sync.  The
kernel will simply reject the table, saying:
	'rebuild' specified while array is not in-sync
Now we check to ensure the LV is in-sync before attempting image
additions.
2012-09-10 17:15:20 -05:00
Jonathan Brassow
b49b98d50c RAID: '--test' should not cause a valid create command to fail
It is necessary when creating a RAID LV to clear the new metadata areas.
Failure to do so could result in a prepopulated bitmap that would cause
the new array to skip syncing portions of the array.  It is a requirement
that the metadata LVs be activated and cleared in the process of creating.
However, in test mode, this requirement should be lifted - no new LVs should
be created or written to.
2012-09-05 14:32:06 -05:00
Jonathan Brassow
c3eb3a7687 cleanup: Use segtype->ops->name() instead of segtype->name where applicable
When printing a message for the user and the lv_segment pointer is available,
use segtype->ops->name() instead of segtype->name.  This gives a better
user-readable name for the segment.  This is especially true for the
'striped' segment type, which prints "linear" if there is an area_count of
one.
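
In sketch form, with simplified stand-ins for the segment-type ops
table:

    struct lv_segment;

    struct segtype_ops {
        const char *(*name)(const struct lv_segment *seg);
    };

    struct segment_type {
        const char *name;                 /* e.g. "striped" */
        const struct segtype_ops *ops;
    };

    struct lv_segment {
        const struct segment_type *segtype;
        unsigned area_count;
    };

    /* Stand-in for the striped segtype's name op: a one-area striped
     * segment reads better as "linear", as described above. */
    static const char *striped_name(const struct lv_segment *seg)
    {
        return seg->area_count == 1 ? "linear" : seg->segtype->name;
    }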
2012-09-05 11:35:54 -05:00
Alasdair G Kergon
438e0050df config: add silent mode
Accept -q as the short form of --quiet.
Suppress non-essential standard output if -q is given twice.
Treat log/silent in lvm.conf as equivalent to -qq.
Review all log_print messages and change some to
log_print_unless_silent.

When silent, the following commands still produce output:
dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck, pvdisplay,
pvs, version, vgcfgrestore -l, vgdisplay, vgs.
[Needs checking.]

Non-essential messages are shifted from log level 4 to log level 5
for syslog and lvm2_log_fn purposes.
2012-08-25 20:35:48 +01:00
Jonathan Brassow
4047e4dfb1 RAID: Add support for RAID10
This patch adds support for RAID10.  It is not the default at this
stage.  The user needs to specify '--type raid10' if they would like
RAID10 instead of stacked mirror over stripe.
2012-08-24 15:34:19 -05:00
Zdenek Kabelac
286cd2006b cleanup: drop unneeded included header files
These headers were not providing anything used by the compiled .c files.
Also remove the unused util.c file.
2012-08-23 14:37:20 +02:00
Jonathan Earl Brassow
dfd024d3a8 Allow a subset of failed devices to be replaced in RAID LVs.
If two devices in an array failed, it was previously impossible to replace
just one of them.  This patch allows for the replacement of some, but perhaps
not all, failed devices.
2012-04-24 20:05:31 +00:00
Jonathan Earl Brassow
a7feae8a6e Fix code that performs RAID device replacement while under snapshot.
The code should have been calling [suspend|resume]_lv_origin() rather than
[suspend|resume]_lv.

This addresses bug 807069.
2012-04-12 03:16:37 +00:00
Jonathan Earl Brassow
187486c7bb Fix inability to split RAID1 image while specifying a particular PV.
The logic for resuming the original and newly split LVs did not properly
handle situations where anything but the last device in the array
was split.  It did not take into account the possible name collisions that
might occur when the original LV undergoes the shifting and renaming of its
sub-LVs.
2012-04-11 14:20:19 +00:00
Jonathan Earl Brassow
c0b5886f18 RAID LVs could not handle a down-convert if a device other than the last one
in the array was specified for removal.  This change addresses that (bz806111).
2012-04-11 01:23:29 +00:00
Jonathan Earl Brassow
dc7b1640ed Fix name conflicts that prevent down-converting RAID1 when specifying a device
When down-converting a RAID1 device, it is the last device that is extracted
and removed when the user does not specify a particular device.  However,
when a device is specified (and it is not the last), the device is removed and
the remaining sub-LVs are "shifted down" to fill the hole.  This causes problems
when resuming the LV because if the shifted devices were resumed (and thus
renamed) before the sub-LV being extracted, there would be a name conflict.
The solution is to resume the extracted sub-LVs first so that they can be
properly renamed preventing a possible conflict.

This addresses bug 801967.
2012-03-15 20:00:54 +00:00
Jonathan Earl Brassow
870762d8e3 Require number of stripes to be greater than parity devices in higher RAID.
Also, add some comments to code that I recently added that may be unclear
otherwise.
2012-02-23 17:36:35 +00:00
Jonathan Earl Brassow
9bdfb30720 Fix allocation code to allow replacement of single RAID 4/5/6 device.
The code failed to account for the case where we just need a single device
in a RAID 4/5/6 array.  There is no good way to tell the allocation functions
that we don't need parity devices when we are allocating just a single device.
So, I've used a bit of a hack.  If we are allocating an area_count that is <=
the parity count, then we can assume we are simply allocating a replacement
device (i.e. no need to include parity devices in the calculations).  This
should make sense in most cases.  If we need to allocate replacement devices
due to failure (or moving), we will never allocate more than the parity count;
or we would cause the array to become unusable.  If we are creating a new device,
we should always create more stripes than parity devices.
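
The hack reduces to one comparison; a sketch with illustrative names:

    /* area_count <= parity_count never happens for a valid new array,
     * so treat it as a replacement and skip the parity devices. */
    static unsigned areas_to_allocate(unsigned area_count,
                                      unsigned parity_count)
    {
        if (area_count <= parity_count)
            return area_count;               /* replacement allocation */
        return area_count + parity_count;    /* new array: add parity */
    }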
2012-02-23 03:57:23 +00:00
Zdenek Kabelac
cbe6bcd593 Add check for rimage name allocation failure 2012-02-13 11:10:37 +00:00
Jonathan Earl Brassow
6cf3274732 Use suspend|resume_origin_only when up-converting RAID LVs, as mirrors do.
Failure to do so results in "Performing unsafe table load while X device(s) are
known to be suspended" errors.  While fixing the problem in this way works and
is consistent with the way the mirror segment type does it, it would be nice
to find a solution that uses the generic suspend/resume calls.

Also included in this check-in are additions to the test suite that perform
conversions on RAID LVs under a snapshot.  These tests are disabled for the
time being due to a kernel bug that is yet to be tracked down.
2012-01-24 14:33:38 +00:00
Jonathan Earl Brassow
9711057499 Don't allow two images to be split and tracked from a RAID LV at one time
Also, don't allow a splitmirror operation on a RAID LV that is already tracking
a split, unless the operation is to stop the tracking and complete the split.
Example:
~> lvconvert --splitmirrors 1 --trackchanges vg/lv /dev/sdc1
# Now tracking changes - image can be merged back or split-off for good
~> lvconvert --splitmirrors 1 -n new_name vg/lv /dev/sdc1
# ^ Completes split ^

If a split is performed on a RAID that is tracking an already split image and
PVs are provided, we must ensure that
 1) the already split LV is represented in the PVs
 2) we are careful to split only the tracked image
2011-12-01 00:21:04 +00:00
Jonathan Earl Brassow
a927e401f1 Do not allow users to change the name of RAID sub-LVs or the name of the
RAID LV if it is tracking changes for a split image.
2011-12-01 00:09:34 +00:00
Jonathan Earl Brassow
0c506d9a40 Support the ability to replace specific devices in a RAID array.
RAID is not like traditional LVM mirroring.  LVM mirroring required failed
devices to be removed or the logical volume would simply hang.  RAID arrays can
keep on running with failed devices.  In fact, for RAID types other than RAID1,
removing a device would mean substituting an error target or converting to a
lower level RAID (e.g. RAID6 -> RAID5, or RAID4/5 to RAID0).  Therefore, rather
than removing a failed device unconditionally and potentially allocating a
replacement, RAID allows the user to "replace" a device with a new one.  This
approach is a 1-step solution vs the current 2-step solution.

example> lvconvert --replace <dev_to_remove> vg/lv [possible_replacement_PVs]

'--replace' can be specified more than once.

example> lvconvert --replace /dev/sdb1 --replace /dev/sdc1 vg/lv
2011-11-30 02:02:10 +00:00
Jonathan Earl Brassow
f60175c308 Add the ability to convert LVs of "mirror" segtype to "raid1" segtype.
Example:
~> lvconvert --type raid1 vg/mirror_lv

Steps to convert "mirror" to "raid1"
1) Allocate a RAID metadata LV for each mirror image from the same PVs
   on which they are located.
2) Clear the metadata LVs.  This involves writing LVM metadata, so we don't
   change any aspect of the mirror LV before this point; that way the user can
   easily remove the LVs from a failed convert attempt while retaining the
   original mirror.
3) Remove the mirror log, if it exists.
4) Add metadata LVs to mirror LV
5) Rename mirror sub-lvs (s/mimage/rimage/)
6) Change flags and segtype from mirror to raid1
2011-10-07 14:56:01 +00:00
Jonathan Earl Brassow
d3582e0252 Add the ability to convert linear LVs to RAID1
Example:
~> lvconvert --type raid1 -m 1 vg/lv

The following steps are performed to convert linear to RAID1:
1) Allocate a metadata device from the same PV as the linear device
   to provide the metadata/data LV pair required for all RAID components.
2) Allocate the required number of metadata/data LV pairs for the
   remaining additional images.
3) Clear the metadata LVs.  This performs an LVM metadata update.
4) Create the top-level RAID LV and add the component devices.

We want to make any failure easy to unwind.  This is why we don't create the
top-level LV and add the components until the last step.  Should anything
happen before that, the user could simply remove the unnecessary images.  Also,
we want to ensure that the metadata LVs are cleared before forming the array to
prevent stale information from polluting the new array.

A new macro 'seg_is_linear' was added to allow us to distinguish linear LVs
from striped LVs.
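
A sketch of the macro (simplified stand-ins; the real definition in the
lvm2 tree may differ):

    struct lv_segment { unsigned area_count; int striped; };

    /* Stand-in for the real striped test. */
    #define seg_is_striped(seg) ((seg)->striped)

    /* A "striped" segment with a single area is what LVM calls linear. */
    #define seg_is_linear(seg)  (seg_is_striped(seg) && (seg)->area_count == 1)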
2011-10-07 14:52:26 +00:00