mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

596 Commits

Author SHA1 Message Date
David Teigland
240062a183 writecache: remove from an active lv 2020-06-10 12:13:31 -05:00
David Teigland
d945b53ff7 remove vg_read_error
It once converted results to error numbers, but is now just a null check.
2020-04-24 11:14:29 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
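
For illustration, a minimal sketch of selecting the bitmap mode at
creation time (the VG name "foo" and LV name "rr" are only examples):

# create a raid1 LV whose integrity layer uses the bitmap mode
$ lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n rr -L1G foo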

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
David Teigland
b6b4ad8e28 move pv_list code into lib 2020-04-13 10:04:14 -05:00
Zdenek Kabelac
2266a1863f lv_manip: add lv_uniq_rename_update
Add a function to rename an LV to either the passed name or, if
that name is already in use, a newly generated lvol% name.
2019-10-21 12:14:15 +02:00
Zdenek Kabelac
2a08d6d1d4 cachevol: use CVOL UUID for cdata and cmeta layered devices
Since the code uses the -cdata and -cmeta UUID suffixes, it does not need
any new 'extra' ID to be generated and stored in the metadata.

Since the introduction of the new 'segtype' cache+CACHE_USES_CACHEVOL, we can
safely assume a 'new' cache with a cachevol will now be created
without extra metadata_id and data_id in the metadata.

For backward compatibility, the code still reads them in case an older
version of the metadata has them - so it should still be able
to activate such volumes.

A bonus is the reduced size of the lv structure used to store info about an LV
(noticeable with big volume groups).
2019-10-17 13:03:49 +02:00
David Teigland
91ee025d5b cache: change cachevol flags for backward compat
A cachevol LV had the CACHE_VOL status flag in metadata,
and the cache LV using it had no new flag.  This caused
problems if the new metadata was used by an old version
of lvm.  An old version of lvm would have two problems
processing the new metadata:

. The old lvm would return an error when reading the VG
  metadata when it saw the unknown CACHE_VOL status flag.

. The old lvm would return an error when reading the VG
  metadata because it would not find an expected cache pool
  attached to the cache LV (since the cache LV had a
  cachevol attached instead.)

Change the use of flags:

. Change the CACHE_VOL flag to be a COMPATIBLE flag (instead
  of a STATUS flag) so that old versions will not fail when
  they see it.

. When a cache LV is using a cachevol, the cache LV gets
  a new SEGTYPE flag CACHE_USES_CACHEVOL.  This flag is
  appended to the segtype name, so that old lvm versions
  will fail to use the LV because of an unknown segtype,
  as opposed to failing to read the VG.
2019-10-15 09:05:52 -05:00
Zdenek Kabelac
1cd308d640 cachevol: drop no longer needed functions
Code is no longer used/needed.
2019-10-14 15:20:25 +02:00
David Teigland
fe16d296b0 pvmove: remove some cmirror related code
which is no longer used
2019-10-11 11:31:42 -05:00
Zdenek Kabelac
cf8aee096f vdo: introduce get_vdo_write_policy_name 2019-10-04 17:31:55 +02:00
Zdenek Kabelac
c756f76802 vdo: correct internal API for set_vdo_write_policy
This is a 'setter' function.
2019-10-04 17:31:55 +02:00
David Teigland
76dd9b2b51 writecache: move code into new file
Put writecache-specific code in writecache_manip.c.

There should be no functional change.
2019-09-24 15:51:05 -05:00
David Teigland
27c3c1d7c8 writecache: display layout and role fields 2019-09-20 14:55:11 -05:00
David Teigland
6f7d7089b4 writecache: use dm suffixes and lv attributes
- use internal CACHE_VOL flag on cachevol LV
- add suffixes to dm uuids for internal LVs
- display appropriate letters in the LV attr field
- display writecache's cachevol in lvs output
2019-09-20 14:08:51 -05:00
David Teigland
5d3bced5ea lvconvert: detaching cachevol with missing PVs
. For dm-cache in writethrough, always allow splitcache,
  whether the cache is missing PVs or not.

. For dm-cache in writeback, if the cache is missing PVs,
  allow splitcache with force and yes.

. For dm-writecache, if the cache is missing PVs,
  allow splitcache with force and yes.
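
A hedged sketch of the forced detach described above (VG/LV names are
hypothetical):

# detach a cachevol that is missing PVs from its cache LV
$ lvconvert --splitcache --force --yes vg/main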
2019-09-20 09:59:37 -05:00
David Teigland
25b58310e3 pvscan: avoid full scan for activation
When an online PV completed a VG, the standard
activation functions were used to activate the VG.
These functions use a full scan of all devs.
When many pvscans are run during startup and need
to activate many VGs, scanning all devs from all
the pvscans can take a long time.

Optimize VG activation in pvscan to scan only the
devs in the VG being activated.  This makes use of
the online file info that was used to determine
the VG was complete.

The downside of this approach is that pvscan activation
will not detect duplicate PVs and block activation,
where a normal activation command (which scans all
devices) would.
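
For context, a minimal sketch of the per-device call this optimizes, as
typically invoked from udev for autoactivation (the device name is
hypothetical):

# record one device as online and autoactivate its VG once complete
$ pvscan --cache -aay /dev/sdb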
2019-09-03 10:11:16 -05:00
David Teigland
0404539edb vgcreate/vgextend: restrict PVs with mixed block sizes
Avoid having PVs with different logical block sizes in the same VG.
This prevents LVs from having mixed block sizes, which can produce
file system errors.

The new config setting devices/allow_mixed_block_sizes (default 0)
can be changed to 1 to return to the unrestricted mode.
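
A minimal sketch of the new setting, shown here as a one-off override
(VG and device names are hypothetical; the same value can be set
permanently in lvm.conf):

# allow mixing logical block sizes for this one command only
$ vgextend --config 'devices { allow_mixed_block_sizes = 1 }' vg /dev/sdb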
2019-08-01 10:06:47 -05:00
David Teigland
b4402bd821 exported vg handling
The exported VG checking/enforcement was scattered and
inconsistent.  This centralizes it and makes it consistent,
following the existing approach for foreign and shared
VGs/PVs, which are very similar to exported VGs/PVs.

The access policy that now applies to foreign/shared/exported
VGs/PVs, is that if a foreign/shared/exported VG/PV is named
on the command line (i.e. explicitly requested by the user),
and the command is not permitted to operate on it because it
is foreign/shared/exported, then an access error is reported
and the command exits with an error.  But, if the command is
processing all VGs/PVs, and happens to come across a
foreign/shared/exported VG/PV (that is not explicitly named on
the command line), then the command silently skips it and does
not produce an error.

A command using tags or --select handles inaccessible VGs/PVs
the same way as a command processing all VGs/PVs, and will
not report/return errors if these inaccessible VGs/PVs exist.

The new policy fixes the exit codes on a somewhat random set of
commands that previously exited with an error if they were
looking at all VGs/PVs and an exported VG existed on the system.

There should be no change to which commands are allowed/disallowed
on exported VGs/PVs.

Certain LV commands (lvs/lvdisplay/lvscan) would previously not
display LVs from an exported VG (for unknown reasons).  This has
not changed.  The lvm fullreport command would previously report
info about an exported VG but not about the LVs in it.  This
has changed to include all info from the exported VG.
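
A minimal illustration of the access policy described above (the VG name
is hypothetical):

# explicitly naming the exported VG: access error, non-zero exit code
$ vgchange -ay vg_exported
# processing all VGs: the exported VG is silently skipped, exit code 0
$ vgchange -ay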
2019-06-25 15:39:08 -05:00
David Teigland
d16142f90f scanning: open devs rw when rescanning for write
When vg_read rescans devices with the intention of
writing the VG, the label rescan can open the devs
RW so they do not need to be closed and reopened
RW in dev_write_bytes.
2019-06-21 10:57:49 -05:00
David Teigland
4bb7d3da0e lvmcache: remove wrapper around lvmcache_get_vgnameids
This was left over from when there was an lvmetad
version of the function.
2019-06-11 14:10:14 -05:00
David Teigland
ba7ff96faf improve reading and repairing vg metadata
The fact that vg repair is implemented as a part of vg read
has led to a messy and complicated implementation of vg_read,
and limited and uncontrolled repair capability.  This splits
read and repair apart.

Summary
-------

- take all kinds of various repairs out of vg_read
- vg_read no longer writes anything
- vg_read now simply reads and returns vg metadata
- vg_read ignores bad or old copies of metadata
- vg_read proceeds with a single good copy of metadata
- improve error checks and handling when reading
- keep track of bad (corrupt) copies of metadata in lvmcache
- keep track of old (seqno) copies of metadata in lvmcache
- keep track of outdated PVs in lvmcache
- vg_write will do basic repairs
- new command vgck --updatemetadata will do all repairs

Details
-------

- In scan, do not delete dev from lvmcache if reading/processing fails;
  the dev is still present, and removing it makes it look like the dev
  is not there.  Records are now kept about the problems with each PV
  so they can be fixed/repaired in the appropriate places.

- In scan, record a bad mda on failure, and delete the mda from
  the in-use mda list so it will not be used by vg_read or vg_write,
  only by repair.

- In scan, succeed if any good mda on a device is found, instead of
  failing if any is bad.  The bad/old copies of metadata should not
  interfere with normal usage while good copies can be used.

- In scan, add a record of old mdas in lvmcache for later, do not repair
  them while reading, and do not let them prevent us from finding and
  using a good copy of metadata from elsewhere.  One result is that
  "inconsistent metadata" is no longer a read error, but instead a
  record in lvmcache that can be addressed separate from the read.

- Treat a dev with no good mdas like a dev with no mdas, which is an
  existing case we already handle.

- Don't use a fake vg "handle" for returning an error from vg_read,
  or the vg_read_error function for getting that error number;
  just return null if the vg cannot be read or used, and an error_flags
  arg with flags set for the specific kind of error (which can be used
  later for determining the kind of repair.)

- Saving an original copy of the vg metadata, for purposes of reverting
  a write, is now done explicitly in vg_read instead of being hidden in
  the vg_make_handle function.

- When a vg is not accessible due to "access restrictions" but is
  otherwise fine, return the vg through the new error_vg arg so that
  process_each_pv can skip the PVs in the VG while processing.
  (This is a temporary accommodation for the way process_each_pv
  tracks which devs have been looked at, and can be dropped later
  when process_each_pv implementation dev tracking is changed.)

- vg_read does not try to fix or recover a vg, but now just reads the
  metadata, checks access restrictions and returns it.
  (Checking access restrictions might be better done outside of vg_read,
   but this is a later improvement.)

- _vg_read now simply makes one attempt to read metadata from
  each mda, and uses the most recent copy to return to the caller
  in the form of a 'vg' struct.
  (bad mdas were excluded during the scan and are not retried)
  (old mdas were not excluded during scan and are retried here)

- vg_read uses _vg_read to get the latest copy of metadata from mdas,
  and then makes various checks against it to produce warnings,
  and to check if VG access is allowed (access restrictions include:
  writable, foreign, shared, clustered, missing pvs).

- Things that were previously silently/automatically written by vg_read
  that are now done by vg_write, based on the records made in lvmcache
  during the scan and read:
  . clearing the missing flag
  . updating old copies of metadata
  . clearing outdated pvs
  . updating pv header flags

- Bad/corrupt metadata are now repaired; they were not before.

Test changes
------------

- A read command no longer writes the VG to repair it, so add a write
  command to do a repair.
  (inconsistent-metadata, unlost-pv)

- When a missing PV is removed from a VG, and then the device is
  enabled again, vgck --updatemetadata is needed to clear the
  outdated PV before it can be used again, where it wasn't before.
  (lvconvert-repair-policy, lvconvert-repair-raid, lvconvert-repair,
   mirror-vgreduce-removemissing, pv-ext-flags, unlost-pv)

Reading bad/old metadata
------------------------

- "bad metadata": the mda_header or metadata text has invalid fields
  or can't be parsed by lvm.  This is a form of corruption that would
  not be caused by known failure scenarios.  A checksum error is
  typically included among the errors reported.

- "old metadata": a valid copy of the metadata that has a smaller seqno
  than other copies of the metadata.  This can happen if the device
  failed, or io failed, or lvm failed while committing new metadata
  to all the metadata areas.  Old metadata on a PV that has been
  removed from the VG is the "outdated" case below.

When a VG has some PVs with bad/old metadata, lvm can simply ignore
the bad/old copies, and use a good copy.  This is why there are
multiple copies of the metadata -- so it's available even when some
of the copies cannot be used.  The bad/old copies do not have to be
repaired before the VG can be used (the repair can happen later.)

A PV with no good copies of the metadata simply falls back to being
treated like a PV with no mdas; a common and harmless configuration.

When bad/old metadata exists, lvm warns the user about it, and
suggests repairing it using a new metadata repair command.
Bad metadata in particular is something that users will want to
investigate and repair themselves, since it should not happen and
may indicate some other problem that needs to be fixed.

PVs with bad/old metadata are not the same as missing devices.
Missing devices will block various kinds of VG modification or
activation, but bad/old metadata will not.

Previously, lvm would attempt to repair bad/old metadata whenever
it was read.  This was unnecessary since lvm does not require every
copy of the metadata to be used.  It would also hide potential
problems that should be investigated by the user.  It was also
dangerous in cases where the VG was on shared storage.  The user
is now allowed to investigate potential problems and decide how
and when to repair them.

Repairing bad/old metadata
--------------------------

When label scan sees bad metadata in an mda, that mda is removed
from the lvmcache info->mdas list.  This means that vg_read will
skip it, and not attempt to read/process it again.  If it was
the only in-use mda on a PV, that PV is treated like a PV with
no mdas.  It also means that vg_write will skip the bad mda,
and not attempt to write new metadata to it.  The only way to
repair bad metadata is with the metadata repair command.

When label scan sees old metadata in an mda, that mda is kept
in the lvmcache info->mdas list.  This means that vg_read will
read/process it again, and likely see the same mismatch with
the other copies of the metadata.  Like the label_scan, the
vg_read will simply ignore the old copy of the metadata and
use the latest copy.  If the command is modifying the vg
(e.g. lvcreate), then vg_write, which writes new metadata to
every mda on info->mdas, will write the new metadata to the
mda that had the old version.  If successful, this will resolve
the old metadata problem (without needing to run a metadata
repair command.)

Outdated PVs
------------

An outdated PV is a PV that has an old copy of VG metadata
that shows it is a member of the VG, but the latest copy of
the VG metadata does not include this PV.  This happens if
the PV is disconnected, vgreduce --removemissing is run to
remove the PV from the VG, then the PV is reconnected.
In this case, the outdated PV needs to have its outdated metadata
removed and the PV used flag needs to be cleared.  This repair
will be done by the subsequent repair command.  It is also done
if vgremove is run on the VG.

MISSING PVs
-----------

When a device is missing, most commands will refuse to modify
the VG.  This is the simple case.  More complicated is when
a command is allowed to modify the VG while it is missing a
device.

When a VG is written while a device is missing for one of its PVs,
the VG metadata is written to disk with the MISSING flag on the PV
with the missing device.  When the VG is next used, it is treated
as if the PV with the MISSING flag still has a missing device, even
if that device has reappeared.

If all LVs that were using a PV with the MISSING flag are removed
or repaired so that the MISSING PV is no longer used, then the
next time the VG metadata is written, the MISSING flag will be
dropped.

Alternative methods of clearing the MISSING flag are:

vgreduce --removemissing will remove PVs with missing devices,
or PVs with the MISSING flag where the device has reappeared.

vgextend --restoremissing will clear the MISSING flag on PVs
where the device has reappeared, allowing the VG to be used
normally.  This must be done with caution since the reappeared
device may have old data that is inconsistent with data on other PVs.
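
As a sketch of the two alternatives just described (VG and device names
are hypothetical):

# remove PVs with missing devices, or reappeared PVs still flagged MISSING
$ vgreduce --removemissing vg
# or clear the MISSING flag on a reappeared PV so it can be used again
$ vgextend --restoremissing vg /dev/sdb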

Bad mda repair
--------------

The new command:
vgck --updatemetadata VG

first uses vg_write to repair old metadata, and other basic
issues mentioned above (old metadata, outdated PVs, pv_header
flags, MISSING_PV flags).  It will also go further and repair
bad metadata:

. text metadata that has a bad checksum
. text metadata that is not parsable
. corrupt mda_header checksum and version fields

(To keep a clean diff, #if 0 is added around functions that
are replaced by new code.  These commented functions are
removed by the following commit.)
2019-06-07 15:54:04 -05:00
David Teigland
47effdc025 vgck --updatemetadata is a new command
uses vg_write to correct more common or less severe issues,
and also adds the ability to repair some metadata corruption
that couldn't be handled previously.
2019-06-07 15:54:04 -05:00
David Teigland
8c87dda195 locking: unify global lock for flock and lockd
There have been two file locks used to protect lvm
"global state": "ORPHANS" and "GLOBAL".

Commands that used the ORPHAN flock in exclusive mode:
  pvcreate, pvremove, vgcreate, vgextend, vgremove,
  vgcfgrestore

Commands that used the ORPHAN flock in shared mode:
  vgimportclone, pvs, pvscan, pvresize, pvmove,
  pvdisplay, pvchange, fullreport

Commands that used the GLOBAL flock in exclusive mode:
  pvchange, pvscan, vgimportclone, vgscan

Commands that used the GLOBAL flock in shared mode:
  pvscan --cache, pvs

The ORPHAN lock covers the important cases of serializing
the use of orphan PVs.  It also partially covers the
reporting of orphan PVs (although not correctly as
explained below.)

The GLOBAL lock doesn't seem to have a clear purpose
(it may have eroded over time.)

Neither lock correctly protects the VG namespace, or
orphan PV properties.

To simplify and correct these issues, the two separate
flocks are combined into the one GLOBAL flock, and this flock
is used from the locking sites that are in place for the
lvmlockd global lock.

The logic behind the lvmlockd (distributed) global lock is
that any command that changes "global state" needs to take
the global lock in ex mode.  Global state in lvm is: the list
of VG names, the set of orphan PVs, and any properties of
orphan PVs.  Reading this global state can use the global lock
in sh mode to ensure it doesn't change while being reported.

The locking of global state now looks like:

lockd_global()
  previously named lockd_gl(), acquires the distributed
  global lock through lvmlockd.  This is unchanged.
  It serializes distributed lvm commands that are changing
  global state.  This is a no-op when lvmlockd is not in use.

lockf_global()
  acquires an flock on a local file.  It serializes local lvm
  commands that are changing global state.

lock_global()
  first calls lockf_global() to acquire the local flock for
  global state, and if this succeeds, it calls lockd_global()
  to acquire the distributed lock for global state.

Replace instances of lockd_gl() with lock_global(), so that the
existing sites for lvmlockd global state locking are now also
used for local file locking of global state.  Remove the previous
file locking calls lock_vol(GLOBAL) and lock_vol(ORPHAN).

The following commands which change global state are now
serialized with the exclusive global flock:

pvchange (of orphan), pvresize (of orphan), pvcreate, pvremove,
vgcreate, vgextend, vgremove, vgreduce, vgrename,
vgcfgrestore, vgimportclone, vgmerge, vgsplit

Commands that use a shared flock to read global state (and will
be serialized against the prior list) are those that use
process_each functions that are based on processing a list of
all VG names, or all PVs.  The list of all VGs or all PVs is
global state and the shared lock prevents those lists from
changing while the command is processing them.

The ORPHAN lock previously attempted to produce an accurate
listing of orphan PVs, but it was only acquired at the end of
the command during the fake vg_read of the fake orphan vg.
This is not when orphan PVs were determined; they were
determined by elimination beforehand by processing all real
VGs, and subtracting the PVs in the real VGs from the list
of all PVs that had been identified during the initial scan.
This is fixed by holding the single global lock in shared mode
while processing all VGs to determine the list of orphan PVs.
2019-04-29 13:01:05 -05:00
David Teigland
85e68a8333 lvextend: refresh shared LV remotely using dlm/corosync
When lvextend extends an LV that is active with a shared
lock, use this as a signal that other hosts may also have
the LV active, with gfs2 mounted, and should have the LV
refreshed to reflect the new size.  Use the libdlmcontrol
run api, which uses dlm_controld/corosync to run an
lvchange --refresh command on other cluster nodes.
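
A hedged usage sketch (VG/LV names are hypothetical; a shared VG managed
by lvmlockd with the dlm lock manager is assumed):

# extend a gfs2 LV held with a shared lock; other cluster nodes are asked
# via dlm_controld/corosync to run "lvchange --refresh" on it
$ lvextend -L+10G vg/gfs2lv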
2019-03-21 12:38:20 -05:00
David Teigland
4e20ebd6a1 pvscan: ignore online for shared and foreign PVs
Activation would not be allowed anyway, but we can
check for these cases early and avoid wasted time in
pvscan managing online files and attempting activation.
2019-03-05 15:19:05 -06:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
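
For illustration, the two option forms side by side (VG/LV names are
hypothetical):

# attach the standard LV "fast" as a cachevol
$ lvconvert --type cache --cachevol fast vg/main
# attach the cache-pool LV "fastpool" as a cachepool
$ lvconvert --type cache --cachepool fastpool vg/main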
2019-02-27 08:52:34 -06:00
Zdenek Kabelac
ab031d673d vdo: introduce function for estimation of virtual size 2019-01-21 12:53:16 +01:00
Heinz Mauelshagen
dd5716ddf2 raid: fix (de)activation of RaidLVs with visible SubLVs
There's a small window during creation of a new RaidLV when
rmeta SubLVs are made visible to wipe them in order to prevent
erroneous discovery of stale RAID metadata.  In case a crash
prevents the SubLVs from being committed hidden after such
wiping, the RaidLV can still be activated with the SubLVs visible.
During deactivation though, a deadlock occurs because the visible
SubLVs are deactivated before the RaidLV.

The patch adds _check_raid_sublvs to the raid validation in merge.c,
an activation check to activate.c (paranoid, because the merge.c check
will prevent activation in case of visible SubLVs) and shares the
existing wiping function _clear_lvs in raid_manip.c moved to lv_manip.c
and renamed to activate_and_wipe_lvlist to remove code duplication.
While at it, introduce activate_and_wipe_lv to share with
(lvconvert|lvchange).c.

Resolves: rhbz1633167
2018-12-11 16:35:34 +01:00
David Teigland
904e1e3d26 Place the first PE at 1 MiB for all defaults
. When using default settings, this commit should change
  nothing.  The first PE continues to be placed at 1 MiB
  resulting in a metadata area size of 1020 KiB (for
  4K page sizes; slightly smaller for larger page sizes.)

. When default_data_alignment is disabled in lvm.conf,
  align pe_start at 1 MiB, based on a default metadata area
  size that adapts to the page size.  Previously, disabling
  this option would result in mda_size that was too small
  for common use, and produced a 64 KiB aligned pe_start.

. Customized pe_start and mda_size values continue to be
  set as before in lvm.conf and command line.

. Remove the configure option for setting default_data_alignment
  at build time.

. Improve alignment related option descriptions.

. Add section about alignment to pvcreate man page.

Previously, DEFAULT_PVMETADATASIZE was 255 sectors.
However, the fact that the config setting named
"default_data_alignment" has a default value of 1 (MiB)
meant that DEFAULT_PVMETADATASIZE was having no effect.

The metadata area size is the space between the start of
the metadata area (page size offset from the start of the
device) and the first PE (1 MiB by default due to
default_data_alignment 1.)  The result is a 1020 KiB metadata
area on machines with 4KiB page size (1024 KiB - 4 KiB),
and smaller on machines with larger page size.

If default_data_alignment was set to 0 (disabled), then
DEFAULT_PVMETADATASIZE 255 would take effect, and produce a
metadata area that was 188 KiB and pe_start of 192 KiB.
This was too small for common use.

This is fixed by making the default metadata area size a
computed value that matches the value produced by
default_data_alignment.
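
To inspect the resulting layout, a hedged example using existing report
fields (the device name is hypothetical):

# with defaults, pe_start is 1.00m and the mda fills the space between
# the page-size offset and the first PE
$ pvs -o pv_name,pe_start,pv_mda_size /dev/sdb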
2018-11-26 16:36:50 -06:00
David Teigland
3ae5569570 Add dm-writecache support
dm-writecache is used like dm-cache with a standard LV
as the cache.

$ lvcreate -n main -L 128M -an foo /dev/loop0

$ lvcreate -n fast -L 32M -an foo /dev/pmem0

$ lvconvert --type writecache --cachepool fast foo/main

$ lvs -a foo -o+devices
  LV            VG  Attr       LSize   Origin        Devices
  [fast]        foo -wi-------  32.00m               /dev/pmem0(0)
  main          foo Cwi------- 128.00m [main_wcorig] main_wcorig(0)
  [main_wcorig] foo -wi------- 128.00m               /dev/loop0(0)

$ lvchange -ay foo/main

$ dmsetup table
foo-main_wcorig: 0 262144 linear 7:0 2048
foo-main: 0 262144 writecache p 253:4 253:3 4096 0
foo-fast: 0 65536 linear 259:0 2048

$ lvchange -an foo/main

$ lvconvert --splitcache foo/main

$ lvs -a foo -o+devices
  LV   VG  Attr       LSize   Devices
  fast foo -wi-------  32.00m /dev/pmem0(0)
  main foo -wi------- 128.00m /dev/loop0(0)
2018-11-06 14:18:41 -06:00
David Teigland
cac4a9743a Allow dm-cache cache device to be standard LV
If a single, standard LV is specified as the cache, use
it directly instead of converting it into a cache-pool
object with two separate LVs (for data and metadata).

With a single LV as the cache, lvm will use blocks at the
beginning for metadata, and the rest for data.  Separate
dm linear devices are set up to point at the metadata and
data areas of the LV.  These dm devs are given to the
dm-cache target to use.

The single LV cache cannot be resized without recreating it.

If the --poolmetadata option is used to specify an LV for
metadata, then a cache pool will be created (with separate
LVs for data and metadata.)

Usage:

$ lvcreate -n main -L 128M vg /dev/loop0

$ lvcreate -n fast -L 64M vg /dev/loop1

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  main vg -wi-a----- 128.00m linear /dev/loop0(0)
  fast vg -wi-a-----  64.00m linear /dev/loop1(0)

$ lvconvert --type cache --cachepool fast vg/main

$ lvs -a vg
  LV           VG Attr       LSize   Origin       Pool  Type   Devices
  [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
  main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
  [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)

$ lvchange -ay vg/main

$ dmsetup ls
vg-fast_cdata   (253:4)
vg-fast_cmeta   (253:5)
vg-main_corig   (253:6)
vg-main (253:24)
vg-fast (253:3)

$ dmsetup table
vg-fast_cdata: 0 98304 linear 253:3 32768
vg-fast_cmeta: 0 32768 linear 253:3 0
vg-main_corig: 0 262144 linear 7:0 2048
vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
vg-fast: 0 131072 linear 7:1 2048

$ lvchange -an vg/main

$ lvconvert --splitcache vg/main

$ lvs -a vg
  LV   VG Attr       LSize   Type   Devices
  fast vg -wi-------  64.00m linear /dev/loop1(0)
  main vg -wi------- 128.00m linear /dev/loop0(0)
2018-11-06 13:44:54 -06:00
David Teigland
8d7075528f cache: add cache_mode_num_to_str
Requires only string and number, no specific lv/seg type.
2018-11-06 11:36:28 -06:00
Zdenek Kabelac
c58733ca15 lvcreate: vdo support
Supports basic:  'lvcreate --vdo -LXXXG -VYYYG vg/vdoname -n lvname'
Allows creating a basic VDO pool volume and a virtual VDO volume.
2018-07-09 15:29:12 +02:00
Zdenek Kabelac
44c99a8822 vdo: data percentage
Display the percentage of the used virtual size of a vdo-pool volume.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
0dafd159a8 vdo_manip: parsing status of VDO device 2018-07-09 15:28:35 +02:00
Zdenek Kabelac
aa63dfbe39 vdo: support functions to map enums to string names
Translate VDO enums to printable strings.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
aff69ecf39 vdo: component activation of VDO data LV
Allow component activation of VDO data LV.
2018-07-09 15:28:35 +02:00
Zdenek Kabelac
a8f84f7801 vdo: introduce segment types and manip functions
Core functionality introducing lvm VDO support.
2018-07-09 15:28:35 +02:00
David Teigland
981a3ba98e Clean up repair and result values in vg_read
Fix the confusing mix of input and output values
in the single variable.
2018-06-12 11:08:26 -05:00
David Teigland
9a8c36b891 Fix use of orphan lock in commands
vgreduce, vgremove and vgcfgrestore were acquiring
the orphan lock in the midst of command processing
instead of at the start of the command.  (The orphan
lock moved to being acquired at the start of the
command back when pvcreate/vgcreate/vgextend were
reworked based on pvcreate_each_device.)

vgsplit also needed a small update to avoid reacquiring
a VG lock that it already held (for the new VG name).
2018-06-12 09:46:11 -05:00
David Teigland
c4153a8dfc Remove checking for locked VGs
A few places were calling a function to check if a
VG lock was held.  The only place it was actually
needed is for pvcreate which wants to do its own
locking (and scanning) around process_each_pv.

The locking/scanning exceptions for pvcreate in
process_each_pv/vg_read can be enabled by just passing
a couple of flags instead of checking if the VG is
already locked.  This also means that these special
cases won't be enabled unknowingly in other places
where they shouldn't be used.
2018-06-12 09:46:04 -05:00
David Teigland
669b1295ae Remove header declarations for removed functions 2018-06-08 10:01:05 -05:00
Joe Thornber
dbba1e9b93 Merge branch 'master' into 2018-05-11-fork-libdm 2018-06-01 13:04:12 +01:00
David Teigland
fdaa7e2e87 vgs: add report field for shared
equivalent to a non-empty -o locktype.
2018-05-31 10:23:03 -05:00
David Teigland
6cd0523337 lvmlockd: enable repairing shared VG while reading it
When the lvmlockd lock is shared, upgrade it to ex
when repair (writing) is needed during vg_read.

Pass the lockd state through additional read-related
functions so the instances of repair scattered through
vg_read can be handled.

(Temporary solution until the ad hoc repairs can be
pulled out of vg_read into a top level, centralized
repair function.)
2018-05-30 12:56:46 -05:00
Joe Thornber
7f97c7ea9a build: Don't generate symlinks in include/ dir
As we start refactoring the code to break dependencies (see doc/refactoring.txt),
I want us to use full paths in the includes (eg, #include "base/data-struct/list.h").
This makes it more obvious when we're breaking abstraction boundaries, eg, including a file in
metadata/ from base/
2018-05-14 10:30:20 +01:00
David Teigland
57bb46c5e7 filter: use bcache for filter reads
Filters are still applied before any device reading or
the label scan, but any filter checks that want to read
the device are skipped and the device is flagged.

After bcache is populated, but before lvm looks for
devices (i.e. before label scan), the filters are
reapplied to the devices that were flagged above.
The filters will then find the data they need in
bcache.
2018-05-10 16:03:19 -05:00
David Teigland
c1cd18f21e Remove lvm1 and pool disk formats
There are likely more bits of code that can be removed,
e.g. lvm1/pool-specific bits of code that were identified
using FMT flags.

The vgconvert command can likely be reduced further.

The lvm1-specific config settings should probably have
some other fields set for proper deprecation.
2018-04-30 16:55:02 -05:00
David Teigland
47bfac21ca clvmd: skip dev rescan after full scan
When clvmd does a full label scan just prior to
calling _vg_read(), pass a new flag into _vg_read
to indicate that the normal rescan of VG devs is
not needed.
2018-04-25 16:39:43 -05:00
David Teigland
a7cb76ae94 scan: use bcache for label scan and vg read
New label_scan function populates bcache for each device
on the system.

The two read paths are updated to get data from bcache.

The bcache is not yet used for writing.  bcache blocks
for a device are invalidated when the device is written.
2018-04-20 11:19:24 -05:00
Zdenek Kabelac
7323557379 cleanup: add _mb_ to regionsize option
Just like the others, mention the default unit in the function name.
2018-04-20 12:17:01 +02:00
Zdenek Kabelac
ca9cbd92c4 activation: add base lv component function
Introduce:

lv_is_component() checks if an LV is actually a component device.

lv_component_is_active() checks if any component device is active.

lv_holder_is_active() checks if any device holding the component is active.
2018-03-06 15:42:05 +01:00
Zdenek Kabelac
63c50ced89 snapshot: relocate common code validation for snapshot origin
Since both lvcreate and lvconvert need to check for the same
type of allowed origin for a snapshot, move the code into
a single function.

This way we also fix several inconsistencies where a snapshot
has been allowed by mistake through either the lvcreate or
lvconvert path.
2017-10-27 17:07:42 +02:00
Alasdair G Kergon
f3ae99dcc0 liblvm: Move lib code used exclusively into metadata-liblvm.c
Also remove some redundant function definitions from metadata.h.
2017-10-18 19:29:32 +01:00
David Teigland
6ac1e04b3a replicator: remove the code
It has not been used in a long time and is not
expected to be used further.
2017-10-13 16:20:42 -05:00
Alasdair G Kergon
486ed10848 vgmerge: Fix intermediate metadata corruption
vgmerge suffers from a similar problem to the one fixed in commit
8146548d25 ("vgsplit: Fix intermediate
metadata corruption.")

When merging, splitting or renaming VGs, use a new PV status flag
PV_MOVED_VG to mark the PVs that hold metadata with the old VG name and
use this to provide PV-level granularity instead of incorrectly assuming
all PVs in the VG are the same.
2017-10-06 02:20:45 +01:00
Zdenek Kabelac
876c4a1b3b tidy: declaration names match implementation
Bring the naming used in function declarations in sync with the
actual in-code implementation.
2017-07-20 19:16:41 +02:00
Heinz Mauelshagen
34504855a7 raid: add data_offset incompatibility segment type flag
In order to reject out of place reshaping with segment data_offset
field on old runtime, add a respective segment type incompatibility
flag causing "+RESHAPE_DATA_OFFSET" to be suffixed to the segment
type name.
2017-07-14 15:53:23 +02:00
Zdenek Kabelac
e3f63693a4 lvresize: support passing --yes to fsadm
Since fsadm now needs --yes to proceed past prompts,
we need to pass --yes from lvresize to fsadm.
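
A minimal usage sketch (VG/LV name and size are hypothetical):

# grow the LV and its filesystem; --yes is forwarded so fsadm can
# proceed past its prompts
$ lvresize --resizefs --yes -L+1G vg/data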
2017-06-21 14:03:29 +02:00
Heinz Mauelshagen
1c916ec5ff raid: add reshape segtype flag support
Prohibit activation of reshaping RaidLVs on incompatible
lvm2 runtime by storing e.g. 'raid5+RESHAPE' segment type
strings in the lvm2 metadata.  Incompatible runtime not
supporting reshaping won't be able to activate those thus
avoiding potential data corruption.

Any new non-reshaping lvconvert command will reset the
segment type string from 'raid5+RESHAPE' to 'raid5'.
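
As an illustration, the flag can be seen in a plain-text dump of the VG
metadata while a reshape is in flight (VG name and file path are
hypothetical):

# vgcfgbackup writes the metadata as text; the segment type string
# appears there as, e.g.,  type = "raid5+RESHAPE"
$ vgcfgbackup -f /tmp/vg.txt vg
$ grep '+RESHAPE' /tmp/vg.txt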

See commits
0299a7af1e and
4141409eb0
for segtype flag support.
2017-06-09 22:23:04 +02:00
Alasdair G Kergon
cbc69f8c69 pvresize: Prompt when non-default size supplied.
Seek confirmation before changing the PV size to one that differs
from the underlying block device.
2017-04-27 02:36:34 +01:00
Heinz Mauelshagen
83cdba75bd mirror/raid: display adjusted region size with units
Display adjusted region size in units (e.g. "4.00 MiB") rather than sectors.
2017-04-20 20:42:21 +02:00
Zdenek Kabelac
4d2b1a0660 cache: enable usage of --cachemetadataformat
lvcreate and lvconvert may select the cache metadata format when caching an LV.
By default lvm2 picks the best available format.
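
A hedged example of forcing a format instead of the autodetected one
(VG/LV names are hypothetical):

# create a cache pool and require cache metadata format 2
$ lvcreate --type cache-pool -L1G --cachemetadataformat 2 -n fastpool vg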
2017-03-10 19:33:01 +01:00
Zdenek Kabelac
518b814cdb cache: LV supports cache segs with metadata format
A cache pool reads/writes metadata_format within its segment type.

For a CachePoolLV, an unselected metadata format is NOT stored in the metadata.

For a CacheLV, when the metadata format is not present/selected in the lvm2
metadata, it is automatically assumed to be version 1 (backward compatible).

To ensure older lvm2 will not misread metadata with the new version 2,
such an LV is marked with the METADATA_FORMAT status flag (the segment
specifies the metadata format). So when a cache uses metadata format 2,
it becomes inaccessible on an older system without such support
(kernel dm-cache < 1.10, lvm2 < 2.02.169).
2017-03-10 19:33:01 +01:00
Zdenek Kabelac
4a394f410d cache: introduce allocation/cache_metadata_format
Add a new profilable configuration setting to let the user select
which metadata format a created cache pool should use.

By default the 'best' available format is autodetected at runtime,
but the user may currently enforce format 1 or 2.

The code also detects whether the cache target supports metadata2.

In case of trouble, the user may easily disable usage of this feature
by placing 'metadata2' into the global/cache_disabled_features list.
2017-03-10 19:33:01 +01:00
Zdenek Kabelac
a9b78d26b1 cleanup: minor cosmetics
Update some return values to match the return type.
Drop an unused function and declaration.
2017-03-10 19:33:01 +01:00
Zdenek Kabelac
4d0793f0ec pool: rework handling of passed args
Now that we can properly recognize all parameters for pool creation,
we may drop the PASS_ARG_ defines and rely on '_UNSELECTED' or 0 entries
as being those without user-given args.

When settings are not given on the command line, the 'update' function
fills them from profiles or configuration. For this the 'profile' arg
needed to be passed around, and since 'VG' itself is not needed,
it has all been replaced with 'cmd, profile, extents_size' args.
2017-03-10 19:33:01 +01:00
Zdenek Kabelac
2d11fc695e cache: set chunk_size as first param 2017-03-10 19:33:00 +01:00
Zdenek Kabelac
4184331965 cache: use UNSELECTED enum
Switch from _UNDEFINED to _UNSELECTED, which better describes
its value 0, while value -1 is a better match for UNDEFINED.
2017-03-10 19:33:00 +01:00
Zdenek Kabelac
b8cd0f4808 thin: add new ZERO/DISCARDS_UNSELECTED
To more easily distinguish the unselected state from the selected '0' state,
add a new 'THIN_ZERO_UNSELECTED' enum.
The same applies to THIN_DISCARDS_UNSELECTED.

For those we no longer need to use PASS_ARG_ZERO or PASS_ARG_DISCARDS.
2017-03-10 19:33:00 +01:00
Zdenek Kabelac
acfc82ae29 pool: split chunk size validation
Move cache and thin bits into their respective manipulation files.
When possible, directly call the respective chunk_size validator.
2017-03-10 19:33:00 +01:00
Zdenek Kabelac
375e4bb3da thin: getting default chunk_size from single place
Basically a code-moving operation to have a single place resolving
thin_pool_chunk_size_policy.

Generic & performance profiles are supported.

The function is now shared between the thin manipulation code and the
configuration _CFG logic to obtain defaults and handle correct reporting
up the coding stack.
2017-03-10 19:33:00 +01:00
Heinz Mauelshagen
6dfe1ce251 lvconvert: prompt when splitting off LV of a 2-legged raid1 LV
Splitting off an image LV of a 2-legged
raid1 LV causes loss of resilience.

Ask user to avoid uninformed loss of all resilience.

Don't ask for N > 2 legged raid1 LVs.

Adjust tests.
2017-03-09 13:59:47 +01:00
Heinz Mauelshagen
d250aa7208 lvconvert: prompt when splitting off a tracked LV of a 2-legged raid1 LV
Splitting off an image LV of a 2-legged raid1 LV tracking changes
causes loss of partial resilience for any newly written data.
Full resilience will be provided again after the split-off image LV
is merged back in and the new data set is fully synchronized.
The reason is that the data is only stored on the remaining single
writable image during the split.

Ask the user to avoid uninformed loss of such partial resilience.

Don't ask for N > 2 legged raid1 LVs.
2017-03-09 03:22:55 +01:00
Heinz Mauelshagen
7fbe6ef16b lvconvert: prompt when converting raid1 to linear
Ask user when converting raid1 to linear to avoid
uninformed loss of all resilience.
2017-03-09 02:39:49 +01:00
Heinz Mauelshagen
ed58672029 metadata: comments
log_count, nosync, stripes, stripe_size, ... are also used for raid.
2017-03-08 15:13:59 +01:00
Heinz Mauelshagen
18bbeec825 raid: fix raid LV resizing
The lv_extend/_lv_reduce API doesn't cope with resizing RaidLVs
with allocated reshape space and ongoing conversions.  Prohibit
resizing during conversions and remove the reshape space before
processing resize.  Add missing seg->data_copies initialisation.

Fix typo/comment.
2017-03-07 22:05:23 +01:00
Heinz Mauelshagen
2574d3257a lvconvert: allow regionsize on upconvert from linear
Allow providing the region size with "lvconvert -m1 -R N" on
upconverts from linear and on N -> M raid1 leg conversions.
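
A minimal sketch of the new syntax (VG/LV name and region size are
hypothetical):

# upconvert a linear LV to raid1 with an explicit region size
$ lvconvert -m1 -R 4m vg/lv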

Resolves: rhbz1394427
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
e2354ea344 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces infrastructure prerequisites to be used
by raid_manip.c extensions in followup patches.

This base is needed for allocation of out-of-place
reshape space required by the MD raid personalities to
avoid writing over data in-place when reading off the
current RAID layout or number of legs and writing out
the new layout or to a different number of legs
(i.e. restripe)

Changes:
- add members reshape_len to 'struct lv_segment' to store
  out-of-place reshape length per component rimage
- add member data_copies to struct lv_segment
  to support more than 2 raid10 data copies
- make alloc_lv_segment() aware of both reshape_len and data_copies
- adjust all alloc_lv_segment() callers to the new API
- add functions to retrieve the current data offset (needed for
  out-of-place reshaping space allocation) and the devices count
  from the kernel
- make libdm deptree code aware of reshape_len
- add LV flags for disk add/remove reshaping
- support import/export of the new 'struct lv_segment' members
- enhance lv_extend/_lv_reduce to cope with reshape_len
- add seg_is_*/segtype_is_* macros related to reshaping
- add target version check for reshaping
- grow rebuilds/writemostly bitmaps to 246 bit to support kernel maximal
- enhance libdm deptree code to support data_offset (out-of-place reshaping)
  and delta_disk (legs add/remove reshaping) target arguments

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
8ab0725077 lvchange: reject writemostly/writebehind on raid1 during resync
The MD kernel raid1 personality does not use any writemostly leg as the primary.

In case a previous linear LV holding data gets upconverted to
raid1 it becomes the primary leg of the new raid1 LV and a full
resynchronization is started to update the new legs.

No writemostly and/or writebehind setting may be allowed during
this initial, full synchronization period of this new raid1 LV
(using the lvchange(8) command), because that would change the
primary (i.e the previous linear LV) thus causing data loss.

lvchange has a bug not preventing this scenario.

The fix rejects setting writemostly and/or writebehind on resynchronizing raid1 LVs.

Once we have status in the lvm2 metadata about the linear -> raid upconversion,
we may relax this constraint for other types of resynchronization
(e.g. for user requested "lvchange --resync ").

New lvchange-raid1-writemostly.sh test is added to the test suite.

Resolves: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=855895
2017-02-23 15:09:29 +01:00
David Teigland
46abc28a48 lvconvert: add command to change region size of a raid LV 2017-02-13 08:21:58 -06:00
David Teigland
1e2420bca8 commands: new method for defining commands
. Define a prototype for every lvm command.
. Match every user command with one definition.
. Generate help text and man pages from them.

The new file command-lines.in defines a prototype for every
unique lvm command.  A unique lvm command is a unique
combination of: command name + required option args +
required positional args.  Each of these prototypes also
includes the optional option args and optional positional
args that the command will accept, a description, and a
unique string ID for the definition.  Any valid command
will match one of the prototypes.

Here's an example of the lvresize command definitions from
command-lines.in, there are three unique lvresize commands:

lvresize --size SizeMB LV
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync, --reportformat String, --resizefs,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SizeMB
OP: PV ...
ID: lvresize_by_size
DESC: Resize an LV by a specified size.

lvresize LV PV ...
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat String, --resizefs, --stripes Number, --stripesize SizeKB
ID: lvresize_by_pv
DESC: Resize an LV by specified PV extents.
FLAGS: SECONDARY_SYNTAX

lvresize --poolmetadatasize SizeMB LV_thinpool
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat String, --stripes Number, --stripesize SizeKB
OP: PV ...
ID: lvresize_pool_metadata_by_size
DESC: Resize a pool metadata SubLV by a specified size.

The three commands have separate definitions because they have
different required parameters.  Required parameters are specified
on the first line of the definition.  Optional options are
listed after OO, and optional positional args are listed after OP.

This data is used to generate corresponding command definition
structures for lvm in command-lines.h.  usage/help output is also
auto generated, so it is always in sync with the definitions.

Every user-entered command is compared against the set of
command structures, and matched with one.  An error is
reported if an entered command does not have the required
parameters for any definition.  The closest match is printed
as a suggestion, and running lvresize --help will display
the usage for each possible lvresize command.

The prototype syntax used for help/man output includes
required --option and positional args on the first line,
and optional --option and positional args enclosed in [ ]
on subsequent lines.

  command_name <required_opt_args> <required_pos_args>
          [ <optional_opt_args> ]
          [ <optional_pos_args> ]

Command definitions that are not to be advertised/suggested
have the flag SECONDARY_SYNTAX.  These commands will not be
printed in the normal help output.

Man page prototypes are also generated from the same original
command definitions, and are always in sync with the code
and help text.

Very early in command execution, a matching command definition
is found.  lvm then knows the operation being done, and that
the provided args conform to the definition.  This will allow
lots of ad hoc checking/validation to be removed throughout
the code.

Each command definition can also be routed to a specific
function to implement it.  The function is associated with
an enum value for the command definition (generated from
the ID string.)  These per-command-definition implementation
functions have not yet been created, so all commands
currently fall back to the existing per-command-name
implementation functions.

Using per-command-definition functions will allow lots of
code to be removed which tries to figure out what the
command is meant to do.  This is currently based on ad hoc
and complicated option analysis.  When using the new
functions, what the command is doing is already known
from the associated command definition.
2017-02-13 08:20:10 -06:00
Zdenek Kabelac
a39c828c01 cleanup: fix warning about shadowed declaration
Avoid shadowing lv_size, as lv_size() is an already defined function in lv.h.
2017-02-13 10:06:18 +01:00
Heinz Mauelshagen
55eaabd118 lvreduce/lvresize: add ability to reduce the size of a RaidLV
- support shrinking of raid0/1/4/5/6/10 LVs
- enhance lvresize-raid.sh tests: add raid0* and raid10
- fix have_raid4 in aux.sh to allow lv_resize-raid.sh
  and other scripts to test raid4

Resolves: rhbz1394048
2017-02-09 22:42:03 +01:00
Zdenek Kabelac
95e3dd5fb1 lv: more exact check for merging origin
A merging origin has 'MERGE_LV' and should also have its merging snapshot.
2016-12-22 23:37:07 +01:00
Zdenek Kabelac
bdfc96cb08 raid: fix activation of tracked image
Activation of the raid also brought up the split-off image being tracked
(without taking a lock for this).

So now when the raid is activated, such an image is not put into the
table (along with its _rmeta).  When the user needs such a device, just activate it.
2016-12-18 19:10:38 +01:00
Zdenek Kabelac
31564834db mirror: add prepare_mirror_log
The function prepares a new mirror log LV, optionally in-sync.
This is useful to have such a device ready when converting
raid1 to mirror.
2016-12-11 23:16:16 +01:00
Heinz Mauelshagen
8270ff5702 lvconvert: prevent non-synced raid1 primary leg repair
(Automatic) repair may not be allowed during the initial sync of an upconverted
linear LV, because the data on the failing, primary leg hasn't been completely
synchronized to the N-1 other legs of the raid1 LV (replacing failed legs during
repair involves discontinuing access to any replaced legs data, thus preventing
data recovery on the primary leg e.g. via dd_rescue).

Even though repair would not cause data loss when adding legs to a fully synced
raid1 LV, we don't yet have information defining this state (e.g. a raid1
LV flag telling the fully synchronized status before any legs were added),
hence we can't automatically decide to allow the repair.

If a repair on a non-synced raid1 LV is nonetheless intended, the "--force"
option has to be provided.

Resolves: rhbz1311765
2016-10-28 15:55:10 +02:00
Alasdair G Kergon
c1a0a2c712 toollib: Record whether or not stripes/stripe_size args supplied. 2016-08-19 13:51:43 +01:00
Heinz Mauelshagen
8e9d5d12ae pvmove: prohibit non-resilient collocation of RAID SubLVs
'pvmove -n name pv1 pv2' allows collocating multiple RAID SubLVs
on pv2 (e.g. resulting in collocated raidlv_rimage_0 and raidlv_rimage_1),
thus causing loss of resilience and/or performance of the RaidLV.

Fix this pvmove flaw leading to potential data loss in case of PV failure
by preventing any SubLVs from collocation on any PVs of the RaidLV.
Still allow collocating any DataLVs of a RaidLV with their sibling MetaLVs,
and vice versa, though (e.g. raidlv_rmeta_0 on pv1 may still be moved to pv2
already holding raidlv_rimage_0).

Because access to the top-level RaidLV name is needed,
promote local _top_level_lv_name() from raid_manip.c
to global top_level_lv_name().

- resolves rhbz1202497
2016-08-15 18:22:32 +02:00
Heinz Mauelshagen
d2c3b23e6d lvchange: Allow device specification when requesting a repair
'lvchange --resync LV' or 'lvchange --syncaction repair LV' request the
RAID layout specific parity blocks in raid4/5/6 to be recreated or the
mirrored blocks to be copied again from the master leg/copy for raid1/10,
thus not allowing a rebuild of a particular PV.

Introduce the repeatable option '--[raid]rebuild PV' to allow requesting
rebuilds of specific PVs in a RaidLV which are known to contain corrupt
data (e.g. rebuild a raid1 master leg).
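
A hedged usage sketch of the new option (VG/LV and device names are
hypothetical):

# rebuild only the raid1 leg that sits on /dev/sdb, leaving the other
# legs untouched
$ lvchange --rebuild /dev/sdb vg/raid1lv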

Add test lvchange-rebuild-raid.sh to test/shell doing rebuild
variations on raid1/10 and 5; add aux function check_status_chars
to support the new test.

 - Resolves rhbz1064592
2016-08-05 16:01:46 +02:00
Alasdair G Kergon
91f866f786 raid: Tell lib whether stripesize was specified 2016-08-05 14:28:14 +01:00
Alasdair G Kergon
7482ff93b8 raid: Turn lv_raid_change_image_count into wrapper
Eventually the separate entry point will disappear.
2016-08-05 14:17:54 +01:00
Alasdair G Kergon
fdc3fcbfce lvconvert: Pass region_size to lv_raid_convert. 2016-08-02 23:51:20 +01:00
Heinz Mauelshagen
df02917d7e vg_validate: fix seg->extents_copied check introduced with
commit 8f62b7bfe5 and add a comment for the member to 'struct lv_segment'.
2016-07-27 23:09:54 +02:00
Zdenek Kabelac
5636bfd83d lvconvert: suppress zeroing warning when converting to thin
When a volume was lvconvert-ed to a thin volume with an external origin,
and the thin pool was in non-zeroing mode,
a WARNING about not zeroing the thin volume was printed - but
this is wanted and expected - so there is nothing to warn about.

So in this particular use case the WARNING needs to be suppressed.

Adding parameter support for lvcreate_params.

So now lvconvert creates the 'normal thin LV' in read-only mode
(so any read will 'return 0' for a moment),
then deactivates the regular thin LV and recreates it in its final R/RW mode
as a thin LV with external origin, and activates it again.
2016-07-27 16:20:57 +02:00
Alasdair G Kergon
5af311ddd8 macros: Add lv_is_not_synced. 2016-07-14 14:21:01 +01:00
Zdenek Kabelac
e1112227d1 lvresize: fix zero size extension
Commit ca878a3426 changed the behavior
of the resize operation.  Later the code was further changed
to skip the fs resize completely when the size of the LV already matches,
and finally, in the most recent resize changeset, the
check for a matching size was eliminated as well, so we ended
up with a request to resize the fs to size 0 in some cases.

This commit reorders some tests so the prompt happens just once before
the resize of possibly 2 related volumes.

Also an extra test for the LV already having the given size is added, and
the whole metadata update is skipped for this case, as the only
result would be an increment of seqno.

However the filesystem is still resized when requested,
so if the LV has some size and the resize resolves to
the same size, the filesystem resize is still called, so in case the FS
did not match, the resize will happen.
2016-07-13 21:52:04 +02:00
Alasdair G Kergon
79446ffad7 raid: Infrastructure for raid takeover. 2016-06-28 02:42:30 +01:00
Zdenek Kabelac
e5b7ebcdda cleanup: remove unused code 2016-06-23 14:59:29 +02:00