mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
Commit Graph

1408 Commits

Author SHA1 Message Date
Peter Rajnoha
ee878bc52c coverity: assigned variable not used and reassigned later 2013-10-16 15:06:43 +02:00
Jonathan Brassow
f58b26b633 RAID: Report RAID images split with tracking as out-of-sync ("I").
Split image should have an out-of-sync attr ('I') - always.  Even if
the RAID LV has not been written to since the LV was split off, it is
still not part of the group that makes up the RAID and is therefore
"out-of-sync".
2013-10-14 10:48:44 -05:00
Zdenek Kabelac
1146691afc snapshot: deactivate virtual snapshot first
Since the virtual snapshot has no reason to stay alive once we
detach the related snapshot, deactivate the whole thing before
snapshot removal - otherwise the code would get tricky to
support in a cluster.

The correct full solution would require to have transactions
for libdm operations.

Also enable the check for the snapshot being open prior to
the origin deactivation, otherwise we could easily end up
with the origin deactivated but the snapshot still kept
active, desynchronizing the locking state in a cluster.
2013-10-14 00:25:15 +02:00
Zdenek Kabelac
81504ba70c snapshot: move virtsnap code from tool to lib
Move the code handling removal dependencies from the tool's remove.c
into the library's manipulation code.

The same code then also works with lvm2app.
2013-10-12 00:14:52 +02:00
Petr Rockai
0decd7553a metadata: Fix metadata repair paths when lvmetad is used. 2013-10-09 14:44:01 +02:00
Peter Rajnoha
ce7489ed22 activation: add support for flagging an LV to skip udev scanning during activation
A common scenario is during new LV creation when we need to wipe the
newly created LV and avoid any udev scanning before this stage,
otherwise the device (the LV) could be claimed by some other subsystem
for which stale metadata was still present within the LV data.

This patch adds the possibility to mark the LV we're just about to wipe
with a flag that gets passed to udev via DM_COOKIE as a subsystem-specific
flag - DM_SUBSYSTEM_UDEV_FLAG0 (in this case the subsystem is "LVM") -
so the LVM udev rules will take care of handling that.
2013-10-08 13:43:14 +02:00
Alasdair G Kergon
04d9a52684 release 2.02.103
52 files changed, 598 insertions(+), 264 deletions(-)
2013-10-04 14:32:23 +01:00
Peter Rajnoha
8cf0810d57 thin: rename thin_pool_chunk_size_calculation -> ..size_policy and rename "default" policy to "generic"
Just to be consistent with the existing naming we use.
2013-10-04 12:30:33 +02:00
Alasdair G Kergon
baf95bbff7 cmdline: Add --ignoreskippedcluster.
Accept --ignoreskippedcluster with pvs, vgs, lvs, pvdisplay, vgdisplay,
lvdisplay, vgchange and lvchange to avoid the 'Skipping clustered
VG' errors when requesting information about a clustered VG
without using clustered locking, and to still exit with success.

The messages can still be seen with -v.
2013-10-01 21:20:10 +01:00
Peter Rajnoha
24ffd5244f thin: better dbg msgs and avoid uninit. value on chunk size recalc 2013-09-30 08:58:57 +02:00
Jonathan Brassow
acdc731e83 RAID: Fix _sufficient_pes_free calculation for RAID
lib/metadata/lv_manip.c:_sufficient_pes_free() was calculating the
required space for RAID allocations incorrectly due to double
accounting.  This resulted in failure to allocate when available
space was tight.

When RAID data and metadata areas are allocated together, the total
amount is stored in ah->new_extents and ah->alloc_and_split_meta is
set.  '_sufficient_pes_free' was adding the necessary metadata extents
to ah->new_extents without ever checking ah->alloc_and_split_meta.
This often led to double accounting of the metadata extents.  This
patch checks 'ah->alloc_and_split_meta' to perform proper calculations
for RAID.

This error is only present in the function that checks for the needed
space, not in the functions that do the actual allocation.
2013-09-26 11:30:07 -05:00
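The double accounting can be pictured with a condensed C sketch of the
corrected check; the field names follow the commit message, but the
function and its simplified parameters are illustrative, not the actual
lvm2 code:

	#include <stdint.h>

	/* Illustrative only: when data and metadata are allocated together
	 * (alloc_and_split_meta), new_extents already includes the metadata
	 * portion, so it must not be added a second time. */
	static uint32_t _extents_needed(uint32_t new_extents,
	                                uint32_t metadata_extents,
	                                int alloc_and_split_meta)
	{
		if (alloc_and_split_meta)
			return new_extents;

		return new_extents + metadata_extents;
	}
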
Peter Rajnoha
78cba8eb3f thin: calculate thin pool chunk size based on device IO hints
If "default" thin pool chunk size calculation method is selected,
use minimum_io_size, otherwise optimal_io_size for "performance"
device hint exposed in sysfs. If there appear to be PVs with
different hints presented, use their least common multiple.

If the hint is less than the default value defined for the
calculation method, use the default value instead.
2013-09-25 16:06:38 +02:00
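A simplified C sketch of this policy, assuming the per-PV sysfs hints
(minimum_io_size or optimal_io_size, depending on the method) have
already been collected into an array; the function and variable names
are illustrative, not the lvm2 implementation:

	#include <stdint.h>

	static uint64_t _gcd(uint64_t a, uint64_t b)
	{
		while (b) {
			uint64_t t = a % b;
			a = b;
			b = t;
		}
		return a;
	}

	static uint64_t _lcm(uint64_t a, uint64_t b)
	{
		return (a && b) ? a / _gcd(a, b) * b : 0;
	}

	/* Combine the hints of all PVs and never drop below the default
	 * defined for the selected calculation method. */
	static uint64_t _pick_chunk_size(const uint64_t *hints, unsigned count,
	                                 uint64_t default_size)
	{
		uint64_t size = 0;
		unsigned i;

		for (i = 0; i < count; i++)
			size = size ? _lcm(size, hints[i]) : hints[i];

		return (size < default_size) ? default_size : size;
	}
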
Peter Rajnoha
cc9e65c391 thin: use appropriate default value based on allocation/thin_pool_chunk_size_calculation setting
If thin_pool_chunk_size_calculation is set to "default", use 64KiB;
otherwise, for "performance", use 512KiB.
2013-09-25 16:06:38 +02:00
Jonathan Brassow
c37c59e155 Test/clean-up: Indent clean-up and additional RAID resize test
Better indenting and a test for bug 1005434 (parity RAID should
extend in a contiguous fashion).
2013-09-24 21:32:53 -05:00
Jonathan Brassow
5ded7314ae RAID: Fix broken allocation policies for parity RAID types
A previous commit (b6bfddcd0a) which
was designed to prevent segfaults during lvextend when trying to
extend striped logical volumes forgot to include calculations for
RAID4/5/6 parity devices.  This was causing the 'contiguous' and
'cling_by_tags' allocation policies to fail for RAID 4/5/6.

The solution is to remember that while we can compare
	ah->area_count == prev_lvseg->area_count
for non-RAID, we should compare
	(ah->area_count + ah->parity_count) == prev_lvseg->area_count
for a general solution.
2013-09-24 21:32:10 -05:00
Zdenek Kabelac
b29adbbc4d raid: add lv_is_raid()
More readable than status bit flag masking...
2013-09-23 11:35:15 +02:00
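A minimal sketch of such a predicate, assuming an LV structure that
carries a status bitfield and a RAID flag; the type and flag below are
illustrative stand-ins, not the actual lvm2 definitions:

	/* Illustrative stand-ins for the metadata type and RAID status bit. */
	#define RAID 0x0001ULL

	struct logical_volume {
		unsigned long long status;
	};

	static inline int lv_is_raid(const struct logical_volume *lv)
	{
		return (lv->status & RAID) ? 1 : 0;
	}
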
Zdenek Kabelac
30432bd604 cleanup: skip call of detect...
Since we know the pool was locked and we want to relock the pool,
just use '1' directly.
2013-09-23 11:35:15 +02:00
Zdenek Kabelac
b8ea27ac97 cleanup: hide gcc warning
Older gcc gives a misleading warning:

metadata/lv_manip.c:4018: warning: ‘seg’ may be used uninitialized in
this function

But warning-free compilation is better.
2013-09-11 23:40:45 +02:00
Jonathan Brassow
2691f1d764 RAID: Make RAID single-machine-exclusive capable in a cluster
Creation, deletion, [de]activation, repair, conversion, scrubbing
and changing operations are all now available for RAID LVs in a
cluster - provided that they are activated exclusively.

The code has been changed to ensure that no LV or sub-LV activation
is attempted cluster-wide.  This includes the often overlooked
operations of activating metadata areas for the brief time it takes
to clear them.  Additionally, some 'resume_lv' operations were
replaced with 'activate_lv_excl_local' when sub-LVs were promoted
to top-level LVs for removal, clearing or extraction.  This was
necessary because it forces the appropriate renaming actions that
occur via resume in the single-machine case but won't happen in
a cluster due to the necessity of acquiring a lock first.

The *raid* tests have been updated to allow testing in a cluster.
For the most part, this meant creating devices with '-aey' if they
were to be converted to RAID.  (RAID requires the converting LV to
be EX because it is a condition of activation for the RAID LV in
a cluster.)
2013-09-10 16:33:22 -05:00
Jonathan Brassow
ca51435153 Misc/RAID: Enable resume_lv to handle some renaming conflicts.
When images and their associated metadata are removed from a RAID1 LV,
the remaining sub-LVs are "shifted" down to fill the gaps.  For
example, if there is a 3-way mirror:
	[0][1][2]
and we remove device#0, the devices will be shifted down
	[1][2]
and renamed.
	[0][1]

This can create a problem for resume_lv (specifically,
dm_tree_activate_children) during the renaming process though.  This
is because it will attempt to rename the higher indexed sub-LVs first
and find that it cannot because there are currently other sub-LVs with
that name.  The solution is to check for a conflicting name before
attempting to rename.  If a conflict is found and that conflicting
sub-LV is also in the process of renaming, we can defer the current
rename until the conflicting sub-LV has renamed and cleared the
conflict.

Now that resume_lv can handle these types of rename conflicts, we can
remove the workaround in RAID that was attempting to resume a RAID1
LV from the bottom up in order to force a proper rename in ascending
order before attempting a resume on the top-level LV.  This "hack"
only worked for single machine use-cases of LVM.  Clearing this up
paves the way for exclusive activation of RAID LVs in a cluster.
2013-09-09 15:07:28 -05:00
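The deferral described above can be pictured with a small, self-contained
C sketch; the types, helpers and field names are illustrative stand-ins,
not the libdevmapper implementation:

	#include <string.h>

	struct child {
		char name[64];
		char new_name[64];
		int needs_rename;
	};

	static struct child *_find_by_name(struct child *c, unsigned n,
	                                   const char *name)
	{
		unsigned i;

		for (i = 0; i < n; i++)
			if (!strcmp(c[i].name, name))
				return &c[i];
		return NULL;
	}

	/* Rename children, deferring any rename whose target name is still
	 * held by a sibling that has not renamed yet; repeated passes let
	 * the "shifted down" names resolve without a clash. */
	static void _rename_children(struct child *c, unsigned n)
	{
		int progress = 1;
		unsigned i;

		while (progress) {
			progress = 0;
			for (i = 0; i < n; i++) {
				struct child *conflict;

				if (!c[i].needs_rename)
					continue;
				conflict = _find_by_name(c, n, c[i].new_name);
				if (conflict && conflict != &c[i] &&
				    conflict->needs_rename)
					continue;	/* defer to a later pass */
				strcpy(c[i].name, c[i].new_name);
				c[i].needs_rename = 0;
				progress = 1;
			}
		}
	}
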
Zdenek Kabelac
0670bfeb59 thin: validation catch multiseg thin pool/volumes
Multisegment thin pools and volumes are not supported.
Catch such an error code path early.
2013-09-07 03:32:07 +02:00
Zdenek Kabelac
4c001a7854 thin: fix resize of stacked thin pool volume
When the pool is created from a non-linear target, more complex rules
have to be used and stacking needs to properly decode the args for the
_tdata LV. Proper allocation policies are also used according to those
set in the lvm2 metadata for the data and metadata LVs.

Also properly check for an active pool and add extra code to activate
it temporarily.

With this fix it's now possible to use:

lvcreate -L20 -m2 -n pool vg  --alloc anywhere
lvcreate -L10 -m2 -n poolm vg --alloc anywhere
lvconvert --thinpool vg/pool --poolmetadata vg/poolm

lvresize -L+10 vg/pool
2013-09-07 03:24:48 +02:00
Jonathan Brassow
72d6bdd6b9 misc: make lv_is_on_pv use for_each_sub_lv to walk LV tree
Make lv_is_on_pv use for_each_sub_lv to walk the LV tree.  This
reduces code duplication.
2013-08-23 11:03:28 -05:00
Jonathan Brassow
e5c0213168 Thin: Make 'lv_is_on_pv(s)' work with thin types
The pool metadata LV must be accounted for when determining what PVs
are in a thin-pool.  The pool LV must also be accounted for when
checking thin volumes.

This is a prerequisite for pvmove working with thin types.
2013-08-23 08:49:16 -05:00
Jonathan Brassow
f1e3640df3 Misc: Make get_pv_list_for_lv() available to more than just RAID
The function 'get_pv_list_for_lv' will assemble all the PVs that are
used by the specified LV.  It uses 'for_each_sub_lv' to traverse all
of the sub-lvs which may compose it.
2013-08-23 08:40:13 -05:00
Peter Rajnoha
f74e8fe044 thin: fix commit e195b5227e
Check chunk_size range unconditionally.
2013-08-06 16:28:12 +02:00
Peter Rajnoha
e195b5227e thin: apply VG profile if creating a new thin pool
When creating a new thin pool and there's no profile requested
via "lvcreate --profile ...", inherit any VG profile if it's attached.

Currently this applies to these settings:
  allocation/thin_pool_chunk_size
  allocation/thin_pool_discards
  allocation/thin_pool_zero
2013-08-06 11:42:40 +02:00
Zdenek Kabelac
7b58f10442 thin: move setting of THIN_POOL
Set the flag when attaching the data LV, which makes the segment THIN_POOL.
2013-07-31 15:27:38 +02:00
Alasdair G Kergon
b6bfddcd0a alloc: fix lvextend when stripe number varies
The PREFERRED allocation mechanism requires the number of areas in the
previous LV segment to match the number in the new segment being
allocated.  If they do not match, the code may crash.
  E.g. https://bugzilla.redhat.com/989347

Introduce A_AREA_COUNT_MATCHES and when not set avoid referring
to the previous segment with the contiguous and cling policies.
2013-07-29 19:35:45 +01:00
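A condensed sketch of that gating, with simplified stand-ins for the
allocation handle and segment structures; the flag name follows the
commit message, everything else is illustrative:

	#define A_AREA_COUNT_MATCHES 0x01

	struct alloc_handle { unsigned area_count; unsigned flags; };
	struct lv_segment   { unsigned area_count; };

	static void _set_area_count_flag(struct alloc_handle *ah,
	                                 const struct lv_segment *prev_lvseg)
	{
		if (prev_lvseg && ah->area_count == prev_lvseg->area_count)
			ah->flags |= A_AREA_COUNT_MATCHES;
	}

	/* The contiguous and cling policies must not refer to the areas of
	 * the previous segment unless the counts are known to match. */
	static int _may_use_previous_segment(const struct alloc_handle *ah)
	{
		return (ah->flags & A_AREA_COUNT_MATCHES) ? 1 : 0;
	}
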
Jonathan Brassow
f5a205668b Revert a previous change
commit d00d45a8b6 introduced changes
that are causing cluster mirror tests to fail.  Ultimately, I think
the change was right, but a proper clean-up will have to wait.
The portion of the commit we are reverting correlates to the
following commit comment:
    2) lib/metadata/mirror.c:_delete_lv() - should have been calling
       _activate_lv_like_model() with 'mirror_lv'.  This is because
       'mirror_lv' is the LV that the overall operation is being
       performed on.  We need to use this LV as the basis for
       determining whether to activate locally, or across the
       cluster, etc.
It appears that when legs or logs are removed from a mirror, they
are being activated before they are deleted in order to make them
top-level LVs that can be acted upon.  When doing this, it appears
they are not activated based on the characteristics of the mirror
from which they came.  IOW, if the mirror was exclusively active,
the sub-LVs are activated globally.  This is a no-no.  This then
made it impossible to activate_lv_like_model if the model was
"mirror_lv" instead of "lv" in _delete_lv().  Thus, at some point
this change should probably be put back and those locations where
the sub-LVs are being improperly activated "shared" instead of
EX should be corrected.
2013-07-24 14:18:07 -05:00
Zdenek Kabelac
5597dc3652 thin: not zeroing for non-zeroed thin pool snaps
Do not zero the initial 4KB of a thin snapshot volume for a thin pool
with zeroing disabled.
2013-07-24 01:15:31 +02:00
Jonathan Brassow
d00d45a8b6 Clean-up: Addressing a few FIXME's
Three fixme's addressed in this commit:
1) lib/metadata/lv_manip.c:_calc_area_multiple() - this could be
   safely changed to a comment explaining that currently because
   RAID10 can only have a 2-way mirror, we don't need to know the
   number of stripes.  However, we will need to know that in the
   future if RAID10 is to support more than 2-way mirroring.

2) lib/metadata/mirror.c:_delete_lv() - should have been calling
   _activate_lv_like_model() with 'mirror_lv'.  This is because
   'mirror_lv' is the LV that the overall operation is being
   performed on.  We need to use this LV as the basis for
   determining whether to activate locally, or across the
   cluster, etc.

3) tools/lvcreate.c:_lvcreate_params() - Minor clean-up.  If
   '-m 0' is given, treat it as though the mirroring argument
   was not given (i.e. as though the requested segment type
   was 'stripe' and not mirror).
2013-07-23 14:46:22 -05:00
Zdenek Kabelac
373f95a921 snapshot: update merging fix
Activation is needed only for a clustered VG.
For a non-clustered VG skip activation, since deactivate_lv()
is called without problems (no testing for lock presence).

(updates f6ded62291)
2013-07-23 15:15:04 +02:00
Zdenek Kabelac
6311be29e4 thin: use 64bit arithmetic for checking meta size
Avoid overflow since extents are just 32bit values.

(in release fix 87aca628)
2013-07-23 14:58:07 +02:00
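The overflow the message refers to can be shown with a tiny C sketch
(names are illustrative, not the lvm2 code): promoting the 32-bit extent
count to 64 bits before multiplying keeps the size check from wrapping.

	#include <stdint.h>

	static uint64_t _metadata_size_in_sectors(uint32_t extents,
	                                          uint32_t extent_size)
	{
		/* Without the cast the multiplication is done in 32 bits
		 * and can silently overflow for large values. */
		return (uint64_t) extents * extent_size;
	}
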
Alasdair G Kergon
84801d7c34 thin: rename extend_pool to create_pool 2013-07-23 13:33:14 +01:00
Zdenek Kabelac
f6ded62291 snapshot: fix merging
When the merging of a snapshot is finished, we need to clean the dm table
entries for the snapshot and -cow device. So for a merging snapshot
we have to activate_lv the plain 'cow' LV and let the table
resolver do its work - shortly a deactivate_lv() request
will follow - in a cluster this needs the LV lock to be held by clvmd.

Also update a test - add a small wait - in case lvremove is not 'fast enough'
and the merging process has not been stopped and $lv1 removed in the background.
Otherwise the following lvcreate occasionally finds the name $lv1 still in use.

(in release fix)
2013-07-22 16:26:00 +02:00
Zdenek Kabelac
ea68f08501 cleanup: remove unused headers 2013-07-22 12:41:21 +02:00
Petr Rockai
6d2604f026 metadata: Fix tracking of read_status flags in _vg_make_handle. 2013-07-22 12:04:47 +02:00
Petr Rockai
3ed7f78ff4 metadata: Do not ignore errors in _vg_update_vg_ondisk. 2013-07-22 12:00:48 +02:00
Petr Rockai
f897fcbd95 metadata: Do not try to maintain an ondisk copy of orphan VGs. 2013-07-22 11:51:35 +02:00
Zdenek Kabelac
3075784955 thin: add spare lvcreate support
Add the --poolmetadataspare option and create and handle a
pool metadata spare LV when a thin pool is created.
With the default setting 'y' it tries to ensure the spare has
at least the size of the created LV.
2013-07-18 18:22:44 +02:00
Zdenek Kabelac
a916bf7eeb thin: removal of spare disables recovery
Warn the user when removing the spare LV.
Remove the spare automatically when the last pool in the VG is removed.
2013-07-18 18:22:44 +02:00
Zdenek Kabelac
915cc5a2fa thin: report 'e' volume type pool metadata spare
Reuse the m'e'tadata volume type for the spar'e' volume as well.
They are essentially related and there is no strong reason
to introduce a new flag.
2013-07-18 18:22:44 +02:00
Zdenek Kabelac
460d0254eb thin: add pool metadata spare lv support
Add support for pool's metadata spare volume.
2013-07-18 18:22:43 +02:00
Zdenek Kabelac
08df7ba844 thin: improve pool creation activation order
Pool creation involves clearing the metadata device,
which triggers a udev watch rule that we cannot synchronize
with udev against in the current code.

This metadata device needs to be activated locally,
so in cluster mode deactivation and reactivation
is always needed.

However, for non-clustered mode we may reload the table
via the suspend/resume path, which avoids the collision with
the udev watch rule that was occasionally triggering a
retry-deactivation loop.

The code has also been split into 2 separate code paths
for thin pools and thin volumes, which improves readability
of the code as well.

Deactivation has been moved out of extend_pool() and the
decision is now in _lv_create_an_lv(), which knows
the change mode.
2013-07-18 18:22:43 +02:00
Zdenek Kabelac
d72f34af41 thin: fix error paths for metadata creation
Since we vg_write & commit the metadata LV inside the lv_extend() call,
a proper restore is needed in case something fails.

So add a bad: section which deactivates the activated LV
and removes it from the VG.

Also check early for a metadata LV name length failure.
2013-07-18 18:22:43 +02:00
Zdenek Kabelac
7afa9cebcb thin: fix error path in creation path
Remove some calls to revert_new_lv when no LV has been created/committed so far.
Only when the pool update fails is a revert needed.
2013-07-18 18:22:43 +02:00
Zdenek Kabelac
1d3f7953bd lv_manip: move some validation code before archiving
Perform as many tests as we can before actually modifying the metadata.
This also avoids unnecessary archiving.
2013-07-18 18:22:43 +02:00
Zdenek Kabelac
7b4b97b731 snapshot: local activation for clear COW device
To clear the snapshot COW device in a cluster, enforce local
activation here.
2013-07-18 18:22:43 +02:00
Zdenek Kabelac
b5dfe4bec2 metadata: add is_change_activating
Add a simple inline function to check whether the change is activating
(better than a macro since we get type checking).
2013-07-18 18:22:42 +02:00
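A rough sketch of such a helper, assuming an activation-change enum in
which only the deactivation values are non-activating; the enum and its
constants are illustrative, not the exact lvm2 definitions:

	typedef enum {
		CHANGE_AY,	/* activate */
		CHANGE_AN,	/* deactivate */
		CHANGE_AEY,	/* activate exclusively */
		CHANGE_ALY,	/* activate locally */
		CHANGE_ALN	/* deactivate locally */
	} activation_change_t;

	static inline int is_change_activating(activation_change_t change)
	{
		return (change != CHANGE_AN) && (change != CHANGE_ALN);
	}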