mirror of git://sourceware.org/git/lvm2.git synced 2025-01-03 05:18:29 +03:00
Commit Graph

1708 Commits

Author SHA1 Message Date
Zdenek Kabelac
08914ed7c1 raid: destroy allocation handle on error path
Don't leak ah memory pool on error path.
2014-09-12 13:51:30 +02:00
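
A minimal, self-contained C sketch of the error-path pattern this fix
applies - the allocation handle must be destroyed on every early exit.
All names below are illustrative stand-ins, not lvm2's actual API:

#include <stdlib.h>

/* Toy stand-in for the allocation handle and its memory pool. */
struct alloc_handle { void *mem; };

static struct alloc_handle *allocate_handle(void)
{
	struct alloc_handle *ah = malloc(sizeof *ah);

	if (ah && !(ah->mem = malloc(64))) {	/* stands in for the mem pool */
		free(ah);
		return NULL;
	}
	return ah;
}

static void destroy_handle(struct alloc_handle *ah)
{
	if (ah) {
		free(ah->mem);
		free(ah);
	}
}

static int allocate_raid_images(int simulate_failure)
{
	struct alloc_handle *ah = allocate_handle();

	if (!ah)
		return 0;
	if (simulate_failure) {
		destroy_handle(ah);	/* the fix: no leak on the error path */
		return 0;
	}
	destroy_handle(ah);
	return 1;
}

int main(void) { return !allocate_raid_images(0); }
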
Zdenek Kabelac
76c3c94bd2 cleanup: update _alloc_image_component function
Return allocated volume directly instead of 1/0.
2014-09-12 13:51:30 +02:00
Zdenek Kabelac
126463ad1f cleanup: plain code reindent
Just simple reindent and brace changes.
2014-09-12 13:51:30 +02:00
Zdenek Kabelac
ad376e9e00 debug: add missing stack trace on error path 2014-09-12 13:51:29 +02:00
Zdenek Kabelac
c10c16cc35 raid: use _generate_raid_name
Use the new function to get implicit name validation
(so we do not exit with an internal error on metadata validation).
2014-09-12 13:51:29 +02:00
Zdenek Kabelac
2db0312455 raid: add function for name creation
Add a function for construction and validation of a raid subvolume
name with a given suffix.

TODO: check if reusable for mirrors as well.
2014-09-12 13:51:29 +02:00
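
A self-contained sketch of the job such a helper performs - composing a
raid subvolume name like 'lv_rimage_0' from a base name and suffix and
refusing names that would not fit. The function name and signature here
are hypothetical, not the ones added by this commit:

#include <stdio.h>

static int generate_raid_name(char *buf, size_t size, const char *lv_name,
			      const char *suffix, unsigned index)
{
	int n = snprintf(buf, size, "%s_%s_%u", lv_name, suffix, index);

	if (n < 0 || (size_t) n >= size) {
		fprintf(stderr, "Raid subvolume name is too long.\n");
		return 0;	/* validation failed */
	}
	return 1;
}

int main(void)
{
	char name[64];

	if (generate_raid_name(name, sizeof(name), "lv", "rimage", 0))
		printf("%s\n", name);	/* prints: lv_rimage_0 */
	return 0;
}
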
Zdenek Kabelac
40b7b107b1 raid: check result of get_segtype_from_string
An error here is highly unexpected for these types, but
stay consistent with the rest of the code and don't use an unchecked value.
2014-09-12 13:45:50 +02:00
Zdenek Kabelac
08bde75093 raid: add missing archive call
Before starting to update the raid metadata, archive the existing unmodified version.
2014-09-12 13:45:49 +02:00
Zdenek Kabelac
569184a3bb raid: add missing vg_revert
After a failed vg_write() or suspend_lv(), the vg_revert() call was missing.
2014-09-12 13:45:14 +02:00
Zdenek Kabelac
dd1fa0e808 raid: add missing backups
Add backup() calls that were missing after successful update
of metadata.
2014-09-12 13:42:57 +02:00
Zdenek Kabelac
15ba2afdc2 allocation: use vg memory pool
Looks like a forgotten memory allocation related to the VG used the cmd mem pool.
2014-09-12 13:39:58 +02:00
Zdenek Kabelac
a86d9a3b30 lv_rename: actual fix for snapshot
Due to a rebasing mistake, it had been dropped from the previous patch set.
2014-09-09 20:15:51 +02:00
Zdenek Kabelac
c710f02e01 lv_update_and_reload: replace code sequence
Use lv_update_and_reload() and lv_update_and_reload_origin()
to handle write/suspend/commit/resume sequence.

In a few places this properly handles vg_revert() after a suspend failure,
and also ensures there is a metadata backup after a successful vg_commit().
2014-09-09 19:20:09 +02:00
Zdenek Kabelac
aee8611af5 lv_manip: remove vg_revert
vg_commit() is supposed to have implicit revert handling
(however, as of now it needs fixes).
2014-09-09 19:15:26 +02:00
Zdenek Kabelac
413fc9d3e6 lv_rename: fix snapshot rename
Fix the rename operation for a snapshot (COW) LV.
Only the snapshot's origin holds the lock, but suspend
and resume were mistakenly called on the snapshot LV.
This further made volumes unusable in a cluster.

So instead of suspending and resuming a list of LVs,
we need to just suspend and resume the origin.

As the write/suspend/commit/resume sequence
is widely used in the lvm2 code base, move it to a
new lv_update_and_reload function.
2014-09-09 19:15:24 +02:00
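
A hedged, self-contained model of the write/suspend/commit/resume
sequence that lv_update_and_reload centralizes. The stubs replace
lvm2's real metadata and activation calls; only the call order and the
error handling follow what these commit messages describe:

#include <stdio.h>

static int vg_write_stub(void)   { puts("write new metadata"); return 1; }
static int suspend_lv_stub(void) { puts("suspend LV");         return 1; }
static int vg_commit_stub(void)  { puts("commit metadata");    return 1; }
static int resume_lv_stub(void)  { puts("resume LV");          return 1; }
static void vg_revert_stub(void) { puts("revert precommitted metadata"); }
static void backup_stub(void)    { puts("backup metadata"); }

static int lv_update_and_reload_model(void)
{
	if (!vg_write_stub())
		return 0;
	if (!suspend_lv_stub()) {
		vg_revert_stub();	/* the revert some callers missed */
		return 0;
	}
	if (!vg_commit_stub()) {
		resume_lv_stub();
		return 0;
	}
	if (!resume_lv_stub())
		return 0;
	backup_stub();			/* backup after successful commit */
	return 1;
}

int main(void) { return !lv_update_and_reload_model(); }
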
Zdenek Kabelac
319f67b1ab cleanup: add stacktrace for error path 2014-09-08 22:36:42 +02:00
Alasdair G Kergon
2faf416e0e lvextend: Reinstate --nosync logic for mirrors.
Reinstate the logic for syncing extensions of mirrors created with
--nosync.  (Inadvertently disabled by the approximate allocation
changes.)
2014-08-28 00:40:09 +01:00
Zdenek Kabelac
22bfac5dc2 cache: fix allocation size
Commit 0b3d0e79f6 caused a regression
in the allocation of cache pools. This patch restores the correct size
for allocation.
2014-08-27 16:47:14 +02:00
Jonathan Brassow
8b9eb95ea9 cache: Clean-up error message.
It is not an internal error to report to the user that they
cannot create a cache LV on top of a cache LV.  It is simply not
supported yet.
2014-08-24 19:44:37 -05:00
Alasdair G Kergon
8b8d21f873 pre-release 2014-08-26 16:34:14 +01:00
Zdenek Kabelac
25fe716b12 cleanup: indent and stacktrace
Add missing stacktrace on error path
and newline indent.
2014-08-26 14:13:07 +02:00
Zdenek Kabelac
0794a10f91 thin: fix volume_list support
Fix the problem where a user sets volume_list and excludes thin pools
from activation. In this case the pool returned 'success' for the skipped
activation.

We need to really check that the volume is actually active before
removing queued pool messages. Otherwise the lvm2 and kernel
metadata start to go out of sync, since lvm2 believes the messages were
submitted.

Also add a better threshold check when creating a new thin volume.
In this case we require local activation of the thin pool so we are able
to check pool fullness.
2014-08-26 14:10:18 +02:00
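
A self-contained model of the pitfall fixed here: an activation skipped
because of volume_list still returned 'success', so queued pool messages
were treated as delivered. The fix only drops them once the pool is
verified to be active. Everything below is an illustrative toy:

#include <stdio.h>

static int skipped_by_volume_list = 1;	/* pool excluded from activation */

static int activate_pool(void)
{
	/* Returns 'success' even when activation was skipped. */
	return 1;
}

static int pool_is_active(void)
{
	return !skipped_by_volume_list;
}

int main(void)
{
	if (!activate_pool())
		return 1;
	if (pool_is_active())
		puts("pool active: deliver and drop queued messages");
	else
		puts("pool not active: keep messages queued, stay in sync");
	return 0;
}
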
Zdenek Kabelac
1ee5e18a7b thin: more forced ignoring of pool failure
Also support 'vgremove -ff' to properly remove even inactive/broken thin pools.
Update messages to use 'print_unless_silent' for the forced case.
2014-08-26 14:09:04 +02:00
Peter Rajnoha
f4e56b2829 cleanup: consolidate lv_layout and lv_role reporting
This patch makes the keyword combinations found in "lv_layout" and
"lv_role" much more understandable - there were ambiguities in
some of the combinations which led to confusion before.

Now, the scheme used is:

LAYOUTS ("how the LV is laid out"):
===================================
[linear] (all segments have number of stripes = 1)

[striped] (all segments have number of stripes > 1)

[linear,striped] (mixed linear and striped)

raid (raid layout always reported together with raid level, raid layout == image + metadata LVs underneath that make up raid LV)
  [raid,raid1]
  [raid,raid10]
  [raid,raid4]
  [raid,raid5] (exact sublayout not specified during creation - default one used - raid5_ls)
    [raid,raid5,raid5_ls]
    [raid,raid5,raid5_rs]
    [raid,raid5,raid5_la]
    [raid,raid5,raid5_ra]
  [raid,raid6] (exact sublayout not specified during creation - default one used - raid6_zr)
    [raid,raid6,raid6_zr]
    [raid,raid6,raid6_nc]
    [raid,raid6,raid6_ns]

[mirror] (mirror layout == log + image LVs underneath that make up mirror LV)

thin (thin layout always reported together with sublayout)
  [thin,sparse] (thin layout == allocated out of thin pool)
  [thin,pool] (thin pool layout == data + metadata volumes underneath that make up thin pool LV, not supposed to be used for direct use!!!)

[cache] (cache layout == allocated out of cache pool in conjunction with cache origin)
  [cache,pool] (cache pool layout == data + metadata volumes underneath that make up cache pool LV, not supposed to be used for direct use!!!)

[virtual] (virtual layout == not hitting disk underneath, currently this layout denotes only 'zero' device used for origin,thickorigin role)

[unknown] (either error state or missing recognition for such layout)

ROLES ("what's the purpose or use of the LV - what is its role"):
=================================================================
- each LV has at least one of these two roles:

  [public] (public LV that users may use freely to write their data to)
  [private] (private LV that LVM maintains; not supposed to be used directly by the user to write data to)

- and then some special-purpose roles in addition to that:

  [origin,thickorigin] (origin for thick-style snapshot; "thick" as opposed to "thin")
  [origin,multithickorigin] (there are more than 2 thick-style snapshots for this origin)
  [origin,thinorigin] (origin for thin snapshot)
  [origin,multithinorigin] (there are more than 2 thin snapshots for this origin)
  [origin,extorigin] (external origin for thin snapshot)
  [origin,multiextorigin] (there are more than 2 thin snapshots using this external origin)
  [origin,cacheorigin] (cache origin)

  [snapshot,thicksnapshot] (thick-style snapshot; "thick" as opposed to "thin")
  [snapshot,thinsnapshot] (thin-style snapshot)

  [raid,metadata] (raid metadata LV)
  [raid,image] (raid image LV)

  [mirror,image] (mirror image LV)
  [mirror,log] (mirror log LV)
  [pvmove] (pvmove LV)

  [thin,pool,data] (thin pool data LV)
  [thin,pool,metadata] (thin pool metadata LV)

  [cache,pool,data] (cache pool data LV)
  [cache,pool,metadata] (cache pool metadata LV)

  [pool,spare] (pool spare LV - common role of LV that makes it used for both thin and cache repairs)
2014-08-25 16:14:40 +02:00
Peter Rajnoha
993f8d1b3f refactor: rename 'lv_type' field to 'lv_role'
The 'lv_type' field name was a bit misleading. A better one is 'lv_role',
since this field describes what the LV is actually used for at the
moment - its 'role'.
2014-08-25 16:11:40 +02:00
Alasdair G Kergon
0b3d0e79f6 lvresize: Fix raid/mirror and %PE handling code.
Sort out the lvresize calculation code to handle size changes
specified as physical extents as well as logical extents
and to process mirror resizing and raid extensions correctly.

The 'approx alloc' option was masking the underlying problem.
2014-08-22 01:26:14 +01:00
Zdenek Kabelac
dec39b1a5f lv_manip: check for str_list_dup failure 2014-08-19 14:33:06 +02:00
Zdenek Kabelac
ad9aee9af4 metadata: check result of refresh and rescan
Detect failure in case refresh_filters or lvmcache_label_scan fails.
2014-08-19 14:33:06 +02:00
Peter Rajnoha
84860fd54f lv: remove lv_type_name fn
The lv_type_name function is a remnant of old code that reported
only a single string for the LV type. LV types are now reported
in a more extended way, as a keyword list that describes the type
precisely (using the lv_layout_and_type fn).

The lv_type_name was used in some error messages to display the
type of the LV, so just reinstate the old messages, referencing
the type directly with a string - this is enough for error messages.
They don't need to display the LV type as precisely as it's done
in lvs output (which is optimized for selection anyway).
2014-08-19 14:16:39 +02:00
Peter Rajnoha
aec4d0c939 report: also display "mirror" keyword in lv_layout for mirrored mirror log and "cache" keyword in lv_layout for cached cache pool
$ lvs -a -o name,vg_name,attr,layout,type
  LV                    VG     Attr       Layout     Type
  lvol0                 vg     mwi-a-m--- mirror     mirror
  [lvol0_mimage_0]      vg     iwi-aom--- linear     image,mirror
  [lvol0_mimage_1]      vg     iwi-aom--- linear     image,mirror
  [lvol0_mlog]          vg     mwi-aom--- mirror     log,mirror
  [lvol0_mlog_mimage_0] vg     iwi-aom--- linear     image,mirror
  [lvol0_mlog_mimage_1] vg     iwi-aom--- linear     image,mirror

(lvol0_mlog properly displayed as "mirror" layout for mirrored mirror log)

$ lvs -a -o name,vg_name,attr,layout,type
  LV                  VG     Attr       Layout     Type
  lvol0               vg     Cwi---C--- cache,pool cache,pool
  [lvol0_cdata]       vg     Cwi------- linear     cache,data,pool
  [lvol0_cmeta]       vg     ewi------- linear     cache,metadata,pool
  [lvol1_pmspare]     vg     ewi------- linear     metadata,pool,spare
  lvol2               vg     Cwi---C--- cache,pool cache,pool
  [lvol2_cdata]       vg     Cwi---C--- cache      cache,data,pool
  [lvol2_cdata_corig] vg     owi---C--- linear     cache,origin
  [lvol2_cmeta]       vg     ewi------- linear     cache,metadata,pool

(lvol2_cdata properly displayed as cached cache pool data)
2014-08-19 13:58:32 +02:00
Peter Rajnoha
b806836164 report: also display "mirror" keyword in lv_type for pvmove LV and display "multiple" for external origin used for more than one thin snapshot
$ lvs -a -o name,vg_name,attr,layout,type
  LV        VG     Attr       Layout     Type
  lvol0     vg     -wI-a----- linear     linear
  [pvmove0] vg     p-C-aom--- mirror     mirror,pvmove

(added "mirror" for pvmove LV)

$ lvs -a -o name,vg_name,attr,layout,type
  LV              VG     Attr       Layout     Type
  lvol0           vg     ori------- linear     external,multiple,origin,thin
  [lvol1_pmspare] vg     ewi------- linear     metadata,pool,spare
  lvol2           vg     Vwi-a-tz-- thin       snapshot,thin
  lvol3           vg     Vwi-a-tz-- thin       snapshot,thin
  pool            vg     twi-a-tz-- pool,thin  pool,thin
  [pool_tdata]    vg     Twi-ao---- linear     data,pool,thin
  [pool_tmeta]    vg     ewi-ao---- linear     metadata,pool,thin

(added "multiple" for external origin used for more than one
thin snapshot - lvol0 in the example above)
2014-08-19 09:41:41 +02:00
Peter Rajnoha
90c47a4968 report: fix thin external snapshot identification for lv_layout and lv_type fields
Thin snapshots having external origins were missing the "snapshot" keyword
in the lv_type field. Also, thin external origins which are thin devices
(from another pool) were not recognized properly.

For example, external origin itself can be either non-thin volume (lvol0
below) or it can be a thin volume from another pool (lvol3 below):

Before this patch:

$ lvs -o name,vg_name,attr,pool_lv,origin,layout,type
  Internal error: Failed to properly detect layout and type for for LV vg/lvol3
  Internal error: Failed to properly detect layout and type for for LV vg/lvol3
  LV    VG     Attr       Pool  Origin Layout     Type
  lvol0 vg     ori-------              linear     external,origin,thin
  lvol2 vg     Vwi-a-tz-- pool  lvol0  thin       thin
  lvol3 vg     ori---tz-- pool         unknown    external,origin,thin,thin
  lvol4 vg     Vwi-a-tz-- pool1 lvol3  thin       thin
  pool  vg     twi-a-tz--              pool,thin  pool,thin
  pool1 vg     twi-a-tz--              pool,thin  pool,thin

- lvol2 as well as lvol4 are missing "snapshot" in the type field
- lvol3 has unrecognized layout (should be "thin"), but has double
  "thin" in lv_type which is incorrect
- (also there's double "for" in the internal error message)

With this patch applied:

$ lvs -o name,vg_name,attr,pool_lv,origin,layout,type
  LV    VG     Attr       Pool  Origin Layout     Type
  lvol0 vg     ori-------              linear     external,origin,thin
  lvol2 vg     Vwi-a-tz-- pool  lvol0  thin       snapshot,thin
  lvol3 vg     ori---tz-- pool         thin       external,origin,thin
  lvol4 vg     Vwi-a-tz-- pool1 lvol3  thin       snapshot,thin
  pool  vg     twi-a-tz--              pool,thin  pool,thin
  pool1 vg     twi-a-tz--              pool,thin  pool,thin
2014-08-18 15:58:48 +02:00
Jonathan Brassow
4d45302e25 RAID: Fail RAID4/5/6 creation if PE size is less than STRIPE_SIZE_MIN
The maximum stripe size is equal to the volume group PE size.  If that
size falls below the STRIPE_SIZE_MIN, the creation of RAID 4/5/6 volumes
becomes impossible.  (The kernel will fail to load a RAID 4/5/6 mapping
table with a stripe size less than STRIPE_SIZE_MIN.)  So, we report an
error if it is attempted.

This is very rare because reducing the PE size down that far limits the
size of the PV below that of modern devices.
2014-08-15 21:15:34 -05:00
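
An illustrative, self-contained form of the new validation: refuse
RAID 4/5/6 creation when the VG extent size (which caps the stripe
size) falls below the kernel's minimum. The constant's value below is
an assumption for demonstration only:

#include <stdio.h>

#define STRIPE_SIZE_MIN 8	/* hypothetical minimum, in 512-byte sectors */

static int validate_raid456_stripe(unsigned vg_extent_size)
{
	if (vg_extent_size < STRIPE_SIZE_MIN) {
		fprintf(stderr,
			"PE size of %u sectors is too small for RAID 4/5/6.\n",
			vg_extent_size);
		return 0;
	}
	return 1;
}

int main(void)
{
	return validate_raid456_stripe(4) ? 0 : 1;	/* fails: 4 < 8 */
}
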
Peter Rajnoha
8af2309231 cleanup: gcc warning
One more:

metadata/thin_manip.c:503: warning: declaration of "snapshot_count" shadows a global declaration
2014-08-15 15:43:42 +02:00
Peter Rajnoha
8e449ebd63 cleanup: gcc warning
metadata/lv_manip.c:269: warning: declaration of "snapshot_count" shadows a global declaration

There's existing function called "snapshot_count" so rename the
variable to "snap_count".
2014-08-15 15:32:04 +02:00
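
A minimal reproduction of this warning class - a local name shadowing a
global declaration. Build with 'gcc -Wshadow' to see the warning; the
fix is the rename done here (snapshot_count -> snap_count). The global
below is only a stand-in for lvm2's snapshot_count function:

#include <stdio.h>

unsigned snapshot_count;	/* global declaration being shadowed */

static void report_snaps(void)
{
	unsigned snap_count = 3;	/* was 'snapshot_count': shadowed */
	printf("%u snapshots\n", snap_count);
}

int main(void)
{
	report_snaps();
	return 0;
}
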
Peter Rajnoha
e8bbcda2a3 Add lv_layout_and_type fn, lv_layout and lv_type reporting fields.
The lv_layout and lv_type fields together help with LV identification.
We can do basic identification using the lv_attr field, which provides
a very condensed view. In contrast, the new lv_layout and lv_type
fields provide more detailed information on the exact layout and type
used for LVs.

For top-level LVs which are pure types not combined with any
other LV types, the lv_layout value is equal to lv_type value.

For non-top-level LVs which may be combined with other types,
the lv_layout describes the underlying layout used, while the
lv_type describes the use/type/usage of the LV.

These two new fields are both string lists so selection (-S/--select)
criteria can be defined using the list operators easily:
  [] for strict matching
  {} for subset matching.

For example, let's consider this:

$ lvs -a -o name,vg_name,lv_attr,layout,type
  LV                    VG     Attr       Layout       Type
  [lvol1_pmspare]       vg     ewi------- linear       metadata,pool,spare
  pool                  vg     twi-a-tz-- pool,thin    pool,thin
  [pool_tdata]          vg     rwi-aor--- level10,raid data,pool,thin
  [pool_tdata_rimage_0] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_1] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_2] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_3] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rmeta_0]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_1]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_2]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_3]  vg     ewi-aor--- linear       metadata,raid
  [pool_tmeta]          vg     ewi-aor--- level1,raid  metadata,pool,thin
  [pool_tmeta_rimage_0] vg     iwi-aor--- linear       image,raid
  [pool_tmeta_rimage_1] vg     iwi-aor--- linear       image,raid
  [pool_tmeta_rmeta_0]  vg     ewi-aor--- linear       metadata,raid
  [pool_tmeta_rmeta_1]  vg     ewi-aor--- linear       metadata,raid
  thin_snap1            vg     Vwi---tz-k thin         snapshot,thin
  thin_snap2            vg     Vwi---tz-k thin         snapshot,thin
  thin_vol1             vg     Vwi-a-tz-- thin         thin
  thin_vol2             vg     Vwi-a-tz-- thin         multiple,origin,thin

Which is a situation with thin pool, thin volumes and thin snapshots.
We can see that the internal 'pool_tdata' volume that makes up the thin
pool actually has a level10 raid layout and the internal 'pool_tmeta'
has a level1 raid layout. Also, we can see that 'thin_snap1' and
'thin_snap2' are both thin snapshots, while 'thin_vol2' is a thin origin
(having multiple snapshots).

Such reporting scheme provides much better base for selection criteria
in addition to providing more detailed information, for example:

$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type=metadata'
LV                   VG   Attr       Layout      Type
[lvol1_pmspare]      vg   ewi------- linear      metadata,pool,spare
[pool_tdata_rmeta_0] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_1] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_2] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_3] vg   ewi-aor--- linear      metadata,raid
[pool_tmeta]         vg   ewi-aor--- level1,raid metadata,pool,thin
[pool_tmeta_rmeta_0] vg   ewi-aor--- linear      metadata,raid
[pool_tmeta_rmeta_1] vg   ewi-aor--- linear      metadata,raid

(selected all LVs which are related to metadata of any type)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={metadata,thin}'
LV           VG   Attr       Layout      Type
[pool_tmeta] vg   ewi-aor--- level1,raid metadata,pool,thin

(selected all LVs which hold metadata related to thin)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={thin,snapshot}'
LV         VG   Attr       Layout     Type
thin_snap1 vg   Vwi---tz-k thin       snapshot,thin
thin_snap2 vg   Vwi---tz-k thin       snapshot,thin

(selected all LVs which are thin snapshots)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout=raid'
LV           VG   Attr       Layout       Type
[pool_tdata] vg   rwi-aor--- level10,raid data,pool,thin
[pool_tmeta] vg   ewi-aor--- level1,raid  metadata,pool,thin

(selected all LVs with raid layout, any raid layout)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout={raid,level1}'
  LV           VG   Attr       Layout      Type
  [pool_tmeta] vg   ewi-aor--- level1,raid metadata,pool,thin

(selected all LVs with raid level1 layout exactly)

And so on...
2014-08-15 14:50:38 +02:00
Peter Rajnoha
1cd622d98b report: lvs: properly display 'o' for volume type bit and 'C' for target type bit in lv_attr field for cache origin LVs
Before this patch:
LV                 VG     Attr
[cache_orig_corig] vg     -wi-ao----

With this patch applied:
LV                 VG     Attr
[cache_orig_corig] vg     owi-aoC---
2014-08-15 13:28:43 +02:00
Peter Rajnoha
8eba33510f cache+thin: add lv_is_{cache,thin}_origin fn to identify origin LVs 2014-08-15 13:28:43 +02:00
Peter Rajnoha
ec0d2f7aa4 refactor: add defines for raid segtypes
This will be reused later on in upcoming code...
2014-08-15 13:28:43 +02:00
Alasdair G Kergon
bf78e55ef3 pvcreate: Fix cache state with filters/sig wiping.
_pvcreate_check() has two missing requirements:
  After refreshing filters there must be a rescan.
    (Otherwise the persistent filter may remain empty.)
  After wiping a signature, the filters must be refreshed.
    (A device that was previously excluded by the filter due to
     its signature might now need to be included.)

If several devices are added at once, the repeated scanning isn't
strictly needed, but we can address that later as part of the command
processing restructuring (by grouping the devices).

Replace the new pvcreate code added by commit
54685c20fc "filters: fix regression caused
by commit e80884cd080cad7e10be4588e3493b9000649426"
with this change to _pvcreate_check().

The filter refresh problem dates back to commit
acb4b5e4de "Fix pvcreate device check."
2014-08-14 01:30:01 +01:00
Peter Rajnoha
c52c9a1e31 activation: if LV inactive and non-clustered, do not issue "Cannot deactivate" on -aln
The message "Cannot deactivate remotely exclusive device locally." makes
sense only for clustered LV. If the LV is non-clustered, then it's
always exclusive by definition and if it's already deactivated, this
message pops up inappropriately as those two conditions are met.

So issue the message only if the conditions are met AND we have clustered VG.
2014-08-07 16:44:09 +02:00
Peter Rajnoha
54685c20fc filters: fix regression caused by commit e80884cd08
Commit e80884cd08 tried to dump filters
so that they would be reevaluated when creating a PV, to avoid
overwriting any existing signature that may have been created after the
last scan/filtering.

We need to call refresh_filters instead of
persistent_filter->dump, since dump requires proper rescanning to fill
up the persistent filter again. However, this is true only for pvcreate,
not for vgcreate with PV creation, where the scanning happens before
this PV creation and hence the next rescan (if not a full scan) does not
fill the persistent filter.

Also, move refresh_filters so that it's called sooner and only for
pvcreate; vgcreate already calls lvmcache_label_scan(cmd, 2), which
then calls refresh_filters itself, so there's no need to reevaluate this again.

This caused the persistent filter (/etc/lvm/cache/.cache file) to be
wrong and contain only the PV just being processed with
vgcreate <vg_name> <pv_name_to_create>.

This regression caused other block devices to be filtered out when
vgcreate with PV creation was used and the persistent filter was then
used by any other LVM command afterwards.
2014-08-01 11:39:53 +02:00
Alasdair G Kergon
c7b9f0ab42 lvresize: Allow approximation with +%FREE.
Make lvresize -l+%FREE support approximate allocation.

Move the existing 'Reducing/Extending' message to verbose level
and change it to say 'up to' if approximate allocation is being used.

Replace it with a new message that gives the actual old and new size or
says 'unchanged'.
2014-08-01 00:35:43 +01:00
Peter Rajnoha
ef85997980 metadata: remove spurious "Physical volume <dev_name> not found"
This is an addendum to commit 2e82a070f3,
which fixed these spurious messages that appeared after commit
651d5093ed ("avoid pv_read in
find_pv_by_name").

There was one more "not found" message issued when the device
could not be found in the device cache (commit 2e82a07 fixed this only
for the PV lookup itself). But if we "allow_unformatted" for
find_pv_by_name, we should not issue this message even when
the device can't be found in the dev cache: we just need to know
whether there's a PV or not for the code to decide on the next steps,
and we don't want to issue any messages if either the device itself
is not found or the PV is not found.

For example, when we were creating a new PV (and so allow_unformatted = 1)
and the device had a signature on it which caused it to be filtered out
by the device filter (e.g. an MD signature if md filtering is enabled),
or it was part of some other subsystem (e.g. multipath), this message
was issued on the find_pv_by_name call, which was misleading.

Also, remove the misleading "stack" call for the case where find_pv_by_name
returns NULL in pvcreate_check - any error state is reported
later by the pvcreate_check code, so there's no need to "stack" here.

There's one more, proper check that issues the "not found" message if
the device can't be found in the device cache within the pvcreate_check fn,
so this situation is still covered properly later in the code.

Before this patch (/dev/sda contains MD signature and is therefore filtered):

$ pvcreate /dev/sda
  Physical volume /dev/sda not found
WARNING: linux_raid_member signature detected on /dev/sda at offset 4096. Wipe it? [y/n]:

With this patch applied:

$ pvcreate /dev/sda
WARNING: linux_raid_member signature detected on /dev/sda at offset 4096. Wipe it? [y/n]:

Non-existent devices are still caught properly:

$ pvcreate /dev/sdx
  Device /dev/sdx not found (or ignored by filtering).
2014-07-31 10:03:30 +02:00
Zdenek Kabelac
d7d81e1157 cleanup: show better messages 2014-07-22 22:41:40 +02:00
Zdenek Kabelac
894eda4707 thin and cache: unify pool common code
Fix get_pool_params to only read params.
Add poolmetadataspare option to get_pool_params.
Move all profile code into update_pool_params.
Move recalculate code into pool_manip.c
2014-07-22 22:41:38 +02:00
Alasdair G Kergon
99e3c13012 raid: Moved degraded activation code to raid_manip.
Adjust some messages & fn names.
2014-07-22 20:50:29 +01:00
Zdenek Kabelac
f5d6c4b0f3 cache: use get_cache_mode for validation
Use a single function to validate cache mode arg
and set DM_ feature flags.
2014-07-17 16:16:45 +02:00
Zdenek Kabelac
9955204e0d cleanup: reorder code
Simplify code.
2014-07-11 13:32:21 +02:00
Zdenek Kabelac
f7d6614061 cache: warn about metadata size limits
Cache pools are similar to thin pools here.
Add '(needs %s)' - since cache currently has
a somewhat odd need for a few extra KiB over
our default 4M extent size, make it more obvious.
2014-07-11 13:31:19 +02:00
Zdenek Kabelac
120bd2d6b1 pool: move code to pool source file
More code is common to all pool types (cache & thin).
2014-07-11 12:57:25 +02:00
Zdenek Kabelac
4db5d78cef display: show C only for cache and cachepool
Keep the target type (attr6) that the cache data and metadata volumes have
(i.e. it will show the 'raid' type if the metadata is raid).
2014-07-11 12:50:44 +02:00
Zdenek Kabelac
8932d4a625 lv_is_pool: add new defines
Add defines for lv_is_pool() and lv_is_pool_metadata().
Also update the comments for prompts to their current meaning.
(Though maybe they should be renamed.)
2014-07-11 12:50:06 +02:00
Peter Rajnoha
5c3d894013 metadata: fix ALLOCATABLE_PV for lvm1 format
This is an addendum to commit 6dc7b783c8.

The LVM1 format stores the ALLOCATABLE flag directly in the PV header, not
in VG metadata. So the code needs to be fixed further to work
properly for the lvm1 format so that the correct PV header is written
(the flag is set only if the PV is in some VG, unset otherwise).
2014-07-11 12:24:15 +02:00
Peter Rajnoha
c9ae21798e report: display 'unknown' value for active/active_locally/active_remotely/active_exclusively if info bypassed
Before the patch:

$ lvs -o name,active vg/lvol1 --driverloaded n
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    Active
  lvol1 active

With this patch applied:
$ lvs -o name,active vg/lvol1 --driverloaded n
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    Active
  lvol1 unknown

The same for active_{locally,remotely,exclusively} fields.
Also, rename headings for these fields (ActLocal/ActRemote/ActExcl).
2014-07-11 11:15:06 +02:00
Peter Rajnoha
f6001465ef lv_manip: pool-metadata-spare is just a spare LV, not tightly bound to thin or cache 2014-07-07 17:02:06 +02:00
Peter Rajnoha
6dc7b783c8 metadata: fix regression causing PVs not in VGs to be marked as allocatable
If the PV is not yet in a VG, it's not allocatable.
This is a regression introduced by commit 0283c439ec
(_pv_create) and later commit a7ca101517
(pv_read).
2014-07-07 14:07:21 +02:00
Peter Rajnoha
6b58647848 lv_manip: add get_lv_type_name/lv_is_linear and lv_is_striped helper fns
The get_lv_type_name helper translates a volume type
into human-readable form (it can be used in reports or
various messages if needed).

The lv_is_linear and lv_is_striped complete the set of
lv_is_* functions that identify exact volume types.
2014-07-04 15:40:17 +02:00
Peter Rajnoha
a4734354ce refactor: remove static modifier for lv_raid_image_in_sync and lv_raid_healthy fn
...to make use of it in other parts of the code.
2014-07-04 15:40:17 +02:00
Alasdair G Kergon
ac60c876c4 vgsplit: Improve message when LV still active.
Mention the parent LV as well as the LV triggering the warning.

This still leaves some confusing cases, but it's not worth fixing them
at the moment.
(Thin pool inactive but a thin volume active => deactivate the thin vol.
Inactive mirror/raid with pvmove in progress => complete the pvmove and
activate & deactivate the mirror/raid.
If the new VG already exists, it requires some LVs to be inactive
unnecessarily.)
2014-07-04 01:13:51 +01:00
Alasdair G Kergon
137ed3081a report: Add lv_parent field.
Only defined for thin/cache/raid/mirror at this stage as it
relies on get_only_segment_using_this_lv().
2014-07-03 23:49:34 +01:00
Alasdair G Kergon
1e1c2769a7 vgsplit: Fix VG component of lvid.
Fix VG component of lvid in vgsplit and vgmerge
Update vg_validate() to detect the error.
Call lv_is_active() before moving LV into new VG, not after.
2014-07-03 19:06:04 +01:00
Alasdair G Kergon
64ce3a8066 report: Add lv_dm_path and lv_full_name fields. 2014-07-02 17:24:05 +01:00
Alasdair G Kergon
5bfa2ec21d report: Exclude hidden devices from lv_path field. 2014-07-02 14:57:00 +01:00
Zdenek Kabelac
93a80018ae lvremove: remove thin volumes on damaged pools
Support removal of thin volumes with --force --force
when the thin pool is damaged.

This way it's possible to remove a thin pool with
unrepairable metadata without having to
manually edit lvm2 metadata.

lvremove -ff vg/pool

removes all thin volumes and the pool even when
the thin pool cannot be activated (to accept
removal of thin volumes in kernel metadata).
2014-07-02 10:37:52 +02:00
Zdenek Kabelac
13fb02ff1f cleanup: ignore vg_name in /lib
Since vg_name inside the /lib functions has already been mostly ignored
except for a few debug prints, make it an official internal API
feature.

vg_name is used only in /tools while the VG is not yet opened;
when a lvresize/lvcreate /lib function is called with the VG pointer
already in use, vg_name becomes irrelevant (it's not been
validated anyway).

So any internal user of lvcreate_params and lvresize_params does not
need to set the vg_name pointer and may leave it NULL.
2014-06-30 12:21:36 +02:00
Zdenek Kabelac
2ada685216 cleanup: more lv_is_ functions 2014-06-30 12:16:08 +02:00
Zdenek Kabelac
6da14a82c6 thin: do not create reserved LVs
When creating a pool's metadata, create the initial LV for clearing with some
generic name, and after the volume is created & cleared, rename it to the
reserved name '_tmeta'/'_cmeta'.

We should not expose 'reserved' names for public LVs.
2014-06-30 12:16:05 +02:00
Peter Rajnoha
b6fe906956 activation: fix typo in 'activation skip' message 2014-06-30 11:02:45 +02:00
Jonathan Brassow
ed3c2537b8 raid: Allow repair to reuse PVs from same image that suffered a PV failure
When repairing RAID LVs that have multiple PVs per image, allow
replacement images to be reallocated from the PVs that have not
failed in the image if there is sufficient space.

This allows for scenarios where a 2-way RAID1 is spread across 4 PVs,
where each image lives on two PVs but doesn't use the entire space
on any of them.  If one PV fails and there is sufficient space on the
remaining PV in the image, the image can be reallocated on just the
remaining PV.
2014-06-25 22:26:06 -05:00
Jonathan Brassow
7028fd31a0 misc: after releasing a PV segment, merge it with any adjacent free space
Previously, the seg_pvs used to track free and allocated space were left
in place after 'release_pv_segment' was called to free space from an LV.
Now, an attempt is made to combine any adjacent seg_pvs that also track
free space.  Usually, this doesn't provide much benefit, but in a case
where one command might free some space and then do an allocation, it
can make a difference.  One such case is during a repair of a RAID LV,
where one PV of a multi-PV image fails.  This new behavior is used when
the replacement image can be allocated from the remaining space of the
PV that did not fail.  (First the entire image with the failed PV is
removed.  Then the image is reallocated from the remaining PVs.)
2014-06-25 22:04:58 -05:00
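
A self-contained model of the new behavior: after space is freed,
adjacent free ranges are coalesced so a following allocation can treat
them as one area. The struct is a toy, not lvm2's pv_segment:

#include <stdio.h>
#include <stdlib.h>

struct pv_seg {
	unsigned start, len;
	int is_free;
	struct pv_seg *next;
};

static void merge_adjacent_free(struct pv_seg *head)
{
	struct pv_seg *s = head;

	while (s && s->next) {
		struct pv_seg *n = s->next;

		if (s->is_free && n->is_free && s->start + s->len == n->start) {
			s->len += n->len;	/* coalesce the two free ranges */
			s->next = n->next;
			free(n);
		} else
			s = s->next;
	}
}

int main(void)
{
	struct pv_seg *a = malloc(sizeof *a);
	struct pv_seg *b = malloc(sizeof *b);
	struct pv_seg *c = malloc(sizeof *c);

	if (!a || !b || !c)
		return 1;
	*c = (struct pv_seg) { 512, 256, 1, NULL };
	*b = (struct pv_seg) { 256, 256, 1, c };
	*a = (struct pv_seg) { 0, 256, 1, b };

	merge_adjacent_free(a);	/* a now covers extents 0..767 as one range */
	printf("start=%u len=%u\n", a->start, a->len);
	free(a);
	return 0;
}
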
Jonathan Brassow
b35fb0b15a raid/misc: Allow creation of parallel areas by LV vs segment
I've changed build_parallel_areas_from_lv to take a new parameter
that allows the caller to build parallel areas by LV vs by segment.
Previously, the function created a list of parallel areas for each
segment in the given LV.  When it came time for allocation, the
parallel areas were honored on a segment basis.  This was problematic
for RAID because any new RAID image must avoid being placed on any
PVs used by other images in the RAID.  For example, if we have a
linear LV that has half its space on one PV and half on another, we
do not want an up-convert to use either of those PVs.  It should
especially not wind up with the following, where the first portion
of one LV is paired up with the second portion of the other:
------PV1-------  ------PV2-------
[ 2of2 image_1 ]  [ 1of2 image_1 ]
[ 1of2 image_0 ]  [ 2of2 image_0 ]
----------------  ----------------
Previously, it was possible for this to happen.  The change makes
it so that the returned parallel areas list contains one "super"
segment (seg_pvs) with a list of all the PVs from every actual
segment in the given LV and covering the entire logical extent range.

This change allows RAID conversions to function properly when there
are existing images that contain multiple segments that span more
than one PV.
2014-06-25 21:20:41 -05:00
Peter Rajnoha
e80884cd08 filters: always reevaluate filter before creating a PV
...to avoid using a cached value (persistent filter) and therefore
missing any change made after the last scan/filtering - the state
of the device may have changed, for example new signatures added.

$ lvm dumpconfig --type diff
allocation {
	use_blkid_wiping=0
}
devices {
	obtain_device_list_from_udev=0
}

$ cat /etc/lvm/cache/.cache | grep sda

$ vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "fedora" using metadata type lvm2

$ cat /etc/lvm/cache/.cache | grep sda
		"/dev/sda",

$ parted /dev/sda mklabel gpt
Information: You may need to update /etc/fstab.

$ parted /dev/sda print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 134MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

$ cat /etc/lvm/cache/.cache | grep sda
		"/dev/sda",

====

Before this patch:
$ pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created

With this patch applied:
$ pvcreate /dev/sda
  Physical volume /dev/sda not found
  Device /dev/sda not found (or ignored by filtering).
2014-06-25 16:24:28 +02:00
Peter Rajnoha
3208396ce5 coverity: fix issues reported by coverity 2014-06-24 14:58:53 +02:00
Alasdair G Kergon
f29ae59a4d pvmove: add a few comments 2014-06-20 11:41:20 +01:00
Zdenek Kabelac
f96a499c8d lv: fix lv_is_raid 2014-06-20 11:37:45 +02:00
Zdenek Kabelac
548269a1dd cleanup: use simpler test
Just like all other tests - use a direct LV function test.
2014-06-20 11:14:11 +02:00
Jonathan Brassow
c6d82c992b pvmove: Fix code that looks up the "move pv" for display
'lvs' would segfault when trying to display the "move pv" if the
pvmove was run with '--atomic'.  The structure of an atomic pvmove
is different and requires us to descend another level in the
LV tree to retrieve the PV information.
2014-06-19 10:57:08 -05:00
Jonathan Brassow
3964a1a89f pvmove: Clean-up iterator.
In 'find_pvmove_lv', separate the code that searches the atomic
pvmove LVs from the code that searches the normal pvmove LVs.  This
cleans up the segment iterator code a bit.
2014-06-19 10:52:09 -05:00
Alasdair G Kergon
b33091cb11 pvmove: tidy 2014-06-19 13:40:47 +01:00
Zdenek Kabelac
597de5c807 cleanup: use insert_layer_for_lv implicit rename
There is an implicit rename for certain layered devices.
Do it now for _tdata, _cdata and _corig.

TODO: use better API here...
2014-06-18 15:00:18 +02:00
Jonathan Brassow
5ebff6cc9f pvmove: Enable all-or-nothing (atomic) pvmoves
pvmove can be used to move single LVs by name or multiple LVs that
lie within the specified PV range (e.g. /dev/sdb1:0-1000).  When
moving more than one LV, the portions of those LVs that are in the
range to be moved are added to a new temporary pvmove LV.  The LVs
then point to the range in the pvmove LV, rather than the PV
range.

Example 1:
	We have two LVs in this example.  After they were
	created, the first LV was grown, yielding two segments
	in LV1.  So, there are two LVs with a total of three
	segments.

	Before pvmove:
	      ---------  ---------   ---------
	      | LV1s0 |  | LV2s0 |   | LV1s1 |
	      ---------  ---------   ---------
	         |           |           |
	   -------------------------------------
	PV | 000 - 255 | 256 - 511 | 512 - 767 |
	   -------------------------------------

	After pvmove inserts the temporary pvmove LV:
	          ---------   ---------   ---------
	          | LV1s0 |   | LV2s0 |   | LV1s1 |
	          ---------   ---------   ---------
	              |           |           |
	        -------------------------------------
	pvmove0 |   seg 0   |   seg 1   |   seg 2   |
	        -------------------------------------
	              |           |           |
	        -------------------------------------
	PV      | 000 - 255 | 256 - 511 | 512 - 767 |
	        -------------------------------------

	Each of the affected LV segments now point to a
	range of blocks in the pvmove LV, which purposefully
	corresponds to the segments moved from the original
	LVs into the temporary pvmove LV.

The current implementation goes on from here to mirror the temporary
pvmove LV by segment.  Further, as the pvmove LV is activated, only
one of its segments is actually mirrored (i.e. "moving") at a time.
The rest are either complete or not addressed yet.  If the pvmove
is aborted, those segments that are completed will remain on the
destination and those that are not yet addressed or in the process
of moving will stay on the source PV.  Thus, it is possible to have
a partially completed move - some LVs (or certain segments of LVs)
on the source PV and some on the destination.

Example 2:
	What 'example 1' might look like if it were half-way
	through the move.
	             ---------   ---------   ---------
	             | LV1s0 |   | LV2s0 |   | LV1s1 |
	             ---------   ---------   ---------
	                 |           |           |
	           -------------------------------------
	pvmove0    |   seg 0   |   seg 1   |   seg 2   |
	           -------------------------------------
	                 |           |           |
	                 |     -------------------------
	source PV        |     | 256 - 511 | 512 - 767 |
	                 |     -------------------------
	                 |           ||
	           -------------------------
	dest PV    | 000 - 255 | 256 - 511 |
	           -------------------------

This update allows the user to specify that they would like the
pvmove mirror created "by LV" rather than "by segment".  That is,
the pvmove LV becomes an image in an encapsulating mirror along
with the allocated copy image.

Example 3:
	A pvmove that is performed "by LV" rather than "by segment".

	                   ---------   ---------
	                   | LV1s0 |   | LV2s0 |
	                   ---------   ---------
	                       |           |
	                 -------------------------
	        pvmove0  |  * LV-level mirror *  |
	                 -------------------------
                             /                \
	   pvmove_mimage0   /          pvmove_mimage1
	   -------------------------   -------------------------
	   |   seg 0   |   seg 1   |   |   seg 0   |   seg 1   |
	   -------------------------   -------------------------
	        |            |               |           |
	   -------------------------   -------------------------
	   | 000 - 255 | 256 - 511 |   | 000 - 255 | 256 - 511 |
	   -------------------------   -------------------------
	           source PV                    dest PV

The thing that differentiates a pvmove done in this way from a simple
"up-convert" from linear to mirror is the preservation of the
distinct segments.  A normal up-convert would simply allocate the
necessary space with no regard for segment boundaries.  The pvmove
operation must preserve the segments because they are the critical
boundary between the segments of the LVs being moved.  So, when the
pvmove copy image is allocated, all corresponding segments must be
allocated.  The code that merges adjoining segments that are part of
the same LV when the metadata is written must also be avoided in
this case.  This method of mirroring is unique enough to warrant its
own definitional macro, MIRROR_BY_SEGMENTED_LV.  This joins the two
existing macros: MIRROR_BY_SEG (for original pvmove) and MIRROR_BY_LV
(for user created mirrors).

The advantage of performing pvmove in this way is that all of the
LVs affected can be moved together.  It is an all-or-nothing approach
that leaves all LV segments on the source PV if the move is aborted.
Additionally, a mirror log can be used (in the future) to provide tracking
of progress, allowing the copy to continue where it left off in the event
of a deactivation.
2014-06-17 22:59:36 -05:00
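
The three mirroring strategies named above, written out as a
self-contained enum with a one-line description each. This is purely a
reading aid for the macros mentioned in the message, not lvm2's
definitions:

#include <stdio.h>

enum mirror_by {
	MIRROR_BY_SEG,		/* original pvmove: one segment at a time */
	MIRROR_BY_LV,		/* user-created mirrors */
	MIRROR_BY_SEGMENTED_LV	/* atomic pvmove: whole LV, segments kept */
};

static const char *describe(enum mirror_by m)
{
	switch (m) {
	case MIRROR_BY_SEG:
		return "move segment by segment; partial moves possible";
	case MIRROR_BY_LV:
		return "plain up-convert; ignores segment boundaries";
	case MIRROR_BY_SEGMENTED_LV:
		return "all-or-nothing move preserving segment boundaries";
	}
	return "?";
}

int main(void)
{
	printf("%s\n", describe(MIRROR_BY_SEGMENTED_LV));
	return 0;
}
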
Peter Rajnoha
cfed0d09e8 report: select: refactor: move percent handling code to libdm for reuse 2014-06-17 16:27:21 +02:00
Peter Rajnoha
5abdb52fdc report: select: refactor: move str_list to libdm
The list of strings is used quite frequently and we'd like to reuse
this simple structure for report selection support too. Make it part
of libdevmapper for general reuse throughout the code.

This also simplifies the LVM code a bit since we don't need to
include and manage lvm-types.h anymore (the string list was the
only structure defined there).
2014-06-17 16:27:20 +02:00
Zdenek Kabelac
c46d4a745d snapshot: check snapshot exists
Return 0 if the LV is not even a snapshot.
2014-06-17 13:36:07 +02:00
Jonathan Brassow
962a40b981 cache: Properly rename origin LV tree when adding "_corig"
When creating a cache LV with a RAID origin, we need to ensure that
the sub-LVs of that origin properly change their names to include
the "_corig" extension of the top-level LV.  We do this by first
performing a 'lv_rename_update' before making the call to
'insert_layer_for_lv'.
2014-06-16 18:15:39 -05:00
Petr Rockai
ee200ddfc3 pvremove: Update lvmcache => avoid spurious error messages. 2014-06-08 22:57:04 +02:00
Petr Rockai
dba6dec661 metadata: Make it possible to write partial VGs obtained from lvmetad. 2014-06-08 17:41:11 +02:00
Zdenek Kabelac
9240aca369 raid: cleanup error messages
Add log_error messages on error paths.
2014-05-27 17:08:49 +02:00
Zdenek Kabelac
1a84032322 cleanup: indent 2014-05-23 21:37:12 +02:00
Zdenek Kabelac
bf6b69c46b cleanup: use directly segtype->name
Simplify printing of segtype name.
2014-05-23 21:36:55 +02:00
Zdenek Kabelac
952514611d cleanup: add seg_is_pool macro
Simplify code querying for pool segtype.
2014-05-23 21:36:55 +02:00
Zdenek Kabelac
aafd7c878c cleanup: indent 2014-05-21 23:14:41 +02:00
Zdenek Kabelac
9fd0be2a85 debug: fix backtracing 2014-05-20 21:50:28 +02:00
Peter Rajnoha
9e3e4d6994 config: differentiate command and metadata profiles and consolidate profile handling code
- When defining a configuration source, the code now uses separate
  CONFIG_PROFILE_COMMAND and CONFIG_PROFILE_METADATA markers
  (before, it was just CONFIG_PROFILE, which did not distinguish
  between the two). This helps when checking that the
  configuration contains the correct set of options, which must
  all be in either the command-profilable or the metadata-profilable
  group without mixing these groups together - so it's a firm
  distinction. A "command profile" can't contain a
  "metadata profile" and vice versa! This is strictly checked,
  and if the settings are mixed, such a profile is rejected and
  not used. So in the end, the CONFIG_PROFILE_COMMAND
  set of options and the CONFIG_PROFILE_METADATA set are mutually
  exclusive.

- Marking configuration with one or the other marker will also
  determine the way these configuration sources are positioned
  in the configuration cascade which is now:

  CONFIG_STRING -> CONFIG_PROFILE_COMMAND -> CONFIG_PROFILE_METADATA -> CONFIG_FILE/CONFIG_MERGED_FILES

- Marking configuration with one or the other marker will also make
  it possible to issue a command context refresh (which will probably be
  part of a future patch) if needed for settings in the global profile
  set. For settings in the metadata profile set this is impossible, since
  we can't refresh the cmd context in the middle of reading VG/LV metadata
  and for each VG/LV separately, because each VG/LV can have a different
  metadata profile assigned and it's not possible to change these
  settings at this level.

- When command profile is incorrect, it's rejected *and also* the
  command exits immediately - the profile *must* be correct for the
  command that was run with a profile to be executed. Before this
  patch, when the profile was found incorrect, there was just the
  warning message and the command continued without profile applied.
  But it's more correct to exit immediately in this case.

- When a metadata profile is incorrect, we reject it during command
  runtime (as we know the profile name from the metadata, not early
  from the command line as is the case with command profiles) and we
  *do continue* with the command as we're in the middle of the operation.
  Also, the metadata profile is applied directly and on the fly on
  find_config_tree_* fn call and even if the metadata profile is
  found incorrect, we still need to return the non-profiled value
  as found in the other configuration provided or default value.
  To exit immediately even in this case, we'd need to refactor
  existing find_config_tree_* fns so they can return error. Currently,
  these fns return only config values (which end up with default
  values in the end if the config is not found).

- To check the profile validity before use to be sure it's correct,
  one can use :

    lvm dumpconfig --commandprofile/--metadataprofile ProfileName --validate

  (the --commandprofile/--metadataprofile for dumpconfig will come
   as part of the subsequent patch)

- This patch also adds a reference to --commandprofile and
  --metadataprofile in the cmd help string (which was missing before
  for --profile in some commands). We do not mention --profile
  now as people should use --commandprofile or --metadataprofile
  directly. However, the --profile is still supported for backward
  compatibility and it's translated as:

    --profile == --metadataprofile for lvcreate, vgcreate, lvchange and vgchange
                 (as these commands are able to attach profile to metadata)

    --profile == --commandprofile for all the other commands
                (--metadataprofile is not allowed there as it makes no sense)

- This patch also contains some cleanups to make the code handling
  the profiles more readable...
2014-05-20 16:21:48 +02:00
Zdenek Kabelac
91284bd9b9 cleanup: device extent_size first 2014-05-18 20:08:07 +02:00
Zdenek Kabelac
58777756e8 debug: backtrace error path
Add backtrace for 'n' answer.
2014-05-18 20:07:24 +02:00
Alasdair G Kergon
3b989e317f allocation: Fix alloc anywhere with parity.
Take account of parity areas with alloc anywhere in
_calc_required_extents.  Extents beyond area_count were treated
incorrectly as mirror logs.
2014-05-14 16:25:43 +01:00
Zdenek Kabelac
85cf5a23d2 cleanup: improve error message
Update an 'impossible to happen' error message.
2014-05-13 10:33:17 +02:00
Zdenek Kabelac
8b95c82fed coverity: catch unwanted path
We already validate this path earlier.
2014-05-12 16:24:39 +02:00