mirror of git://sourceware.org/git/lvm2.git
Commit Graph

551 Commits

Peter Rajnoha
e8bbcda2a3 Add lv_layout_and_type fn, lv_layout and lv_type reporting fields.
The lv_layout and lv_type fields together help with LV identification.
We can do basic identification using the lv_attr field, which provides
a very condensed view. In contrast, the new lv_layout and lv_type
fields provide more detailed information on the exact layout and type
used for LVs.

For top-level LVs which are pure types not combined with any
other LV types, the lv_layout value is equal to lv_type value.

For non-top-level LVs which may be combined with other types,
the lv_layout describes the underlying layout used, while the
lv_type describes the use/type/usage of the LV.

These two new fields are both string lists so selection (-S/--select)
criteria can be defined using the list operators easily:
  [] for strict matching
  {} for subset matching.

For example, let's consider this:

$ lvs -a -o name,vg_name,lv_attr,layout,type
  LV                    VG     Attr       Layout       Type
  [lvol1_pmspare]       vg     ewi------- linear       metadata,pool,spare
  pool                  vg     twi-a-tz-- pool,thin    pool,thin
  [pool_tdata]          vg     rwi-aor--- level10,raid data,pool,thin
  [pool_tdata_rimage_0] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_1] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_2] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_3] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rmeta_0]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_1]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_2]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_3]  vg     ewi-aor--- linear       metadata,raid
  [pool_tmeta]          vg     ewi-aor--- level1,raid  metadata,pool,thin
  [pool_tmeta_rimage_0] vg     iwi-aor--- linear       image,raid
  [pool_tmeta_rimage_1] vg     iwi-aor--- linear       image,raid
  [pool_tmeta_rmeta_0]  vg     ewi-aor--- linear       metadata,raid
  [pool_tmeta_rmeta_1]  vg     ewi-aor--- linear       metadata,raid
  thin_snap1            vg     Vwi---tz-k thin         snapshot,thin
  thin_snap2            vg     Vwi---tz-k thin         snapshot,thin
  thin_vol1             vg     Vwi-a-tz-- thin         thin
  thin_vol2             vg     Vwi-a-tz-- thin         multiple,origin,thin

This is a situation with a thin pool, thin volumes and thin snapshots.
We can see that the internal 'pool_tdata' volume that makes up the thin
pool actually has a level10 raid layout and the internal 'pool_tmeta' has
a level1 raid layout. Also, we can see that 'thin_snap1' and 'thin_snap2'
are both thin snapshots while 'thin_vol2' is the thin origin (having
multiple snapshots).

Such a reporting scheme provides a much better basis for selection
criteria, in addition to providing more detailed information. For example:

$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type=metadata'
LV                   VG   Attr       Layout      Type
[lvol1_pmspare]      vg   ewi------- linear      metadata,pool,spare
[pool_tdata_rmeta_0] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_1] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_2] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_3] vg   ewi-aor--- linear      metadata,raid
[pool_tmeta]         vg   ewi-aor--- level1,raid metadata,pool,thin
[pool_tmeta_rmeta_0] vg   ewi-aor--- linear      metadata,raid
[pool_tmeta_rmeta_1] vg   ewi-aor--- linear      metadata,raid

(selected all LVs which are related to metadata of any type)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={metadata,thin}'
LV           VG   Attr       Layout      Type
[pool_tmeta] vg   ewi-aor--- level1,raid metadata,pool,thin

(selected all LVs which hold metadata related to thin)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={thin,snapshot}'
LV         VG   Attr       Layout     Type
thin_snap1 vg   Vwi---tz-k thin       snapshot,thin
thin_snap2 vg   Vwi---tz-k thin       snapshot,thin

(selected all LVs which are thin snapshots)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout=raid'
LV           VG   Attr       Layout       Type
[pool_tdata] vg   rwi-aor--- level10,raid data,pool,thin
[pool_tmeta] vg   ewi-aor--- level1,raid  metadata,pool,thin

(selected all LVs with raid layout, any raid layout)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout={raid,level1}'
  LV           VG   Attr       Layout      Type
  [pool_tmeta] vg   ewi-aor--- level1,raid metadata,pool,thin

(selected all LVs with raid level1 layout exactly)

And so on...
2014-08-15 14:50:38 +02:00
Alasdair G Kergon
c7b9f0ab42 lvresize: Allow approximation with +%FREE.
Make lvresize -l+%FREE support approximate allocation.

Move the existing 'Reducing/Extending' message to verbose level
and change it to say 'up to' if approximate allocation is being used.

Replace it with a new message that gives the actual old and new size or
says 'unchanged'.
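
As a hedged illustration (VG/LV names are examples, not from the patch):

  lvresize -l +100%FREE vg/lv

With approximate allocation this is treated as an upper limit, so the
command can succeed even if slightly fewer extents are allocated; the
verbose 'up to' message reflects this.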
2014-08-01 00:35:43 +01:00
Zdenek Kabelac
894eda4707 thin and cache: unify pool common code
Fix get_pool_params to only read params.
Add poolmetadataspare option to get_pool_params.
Move all profile code into update_pool_params.
Move recalculate code into pool_manip.c
2014-07-22 22:41:38 +02:00
Peter Rajnoha
f6001465ef lv_manip: pool-metadata-spare is just a spare LV, not tightly bound to thin or cache 2014-07-07 17:02:06 +02:00
Peter Rajnoha
6b58647848 lv_manip: add get_lv_type_name/lv_is_linear and lv_is_striped helper fns
The get_lv_type_name function helps with translating a volume type
to human-readable form (it can be used in reports or
various messages if needed).

The lv_is_linear and lv_is_striped functions complete the set of
lv_is_* functions that identify exact volume types.
2014-07-04 15:40:17 +02:00
Alasdair G Kergon
137ed3081a report: Add lv_parent field.
Only defined for thin/cache/raid/mirror at this stage as it
relies on get_only_segment_using_this_lv().
2014-07-03 23:49:34 +01:00
Zdenek Kabelac
93a80018ae lvremove: remove thin volumes on damaged pools
Support removal of thin volumes with --force --force
when the thin pool is damaged.

This way it's possible to remove a thin pool with
unrepairable metadata without having to
edit the lvm2 metadata manually.

lvremove -ff vg/pool

removes all thin volumes and the pool even when
the thin pool cannot be activated (to accept
the removal of thin volumes in kernel metadata)
2014-07-02 10:37:52 +02:00
Zdenek Kabelac
13fb02ff1f cleanup: ignore vg_name in /lib
Since vg_name inside the /lib functions has already been mostly
ignored, except for a few debug prints, make it an official internal
API feature.

vg_name is used only in /tools while the VG is not yet opened;
when an lvresize/lvcreate /lib function is called with the VG pointer
already in use, vg_name becomes irrelevant (it's not being
validated anyway).

So any internal user of lvcreate_params and lvresize_params does not
need to set the vg_name pointer and may leave it NULL.
2014-06-30 12:21:36 +02:00
Peter Rajnoha
b6fe906956 activation: fix typo in 'activation skip' message 2014-06-30 11:02:45 +02:00
Jonathan Brassow
b35fb0b15a raid/misc: Allow creation of parallel areas by LV vs segment
I've changed build_parallel_areas_from_lv to take a new parameter
that allows the caller to build parallel areas by LV vs by segment.
Previously, the function created a list of parallel areas for each
segment in the given LV.  When it came time for allocation, the
parallel areas were honored on a segment basis.  This was problematic
for RAID because any new RAID image must avoid being placed on any
PVs used by other images in the RAID.  For example, if we have a
linear LV that has half its space on one PV and half on another, we
do not want an up-convert to use either of those PVs.  It should
especially not wind up with the following, where the first portion
of one LV is paired up with the second portion of the other:
------PV1-------  ------PV2-------
[ 2of2 image_1 ]  [ 1of2 image_1 ]
[ 1of2 image_0 ]  [ 2of2 image_0 ]
----------------  ----------------
Previously, it was possible for this to happen.  The change makes
it so that the returned parallel areas list contains one "super"
segment (seg_pvs) with a list of all the PVs from every actual
segment in the given LV and covering the entire logical extent range.

This change allows RAID conversions to function properly when there
are existing images that contain multiple segments that span more
than one PV.
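
As a minimal sketch of the kind of up-convert this affects (device, VG
and LV names are illustrative only, not taken from the patch):

  lvcreate -L 1G -n lv vg /dev/sda1 /dev/sdb1   # linear LV spread over two PVs
  lvconvert --type raid1 -m 1 vg/lv             # new image must avoid both PVs

With parallel areas built per LV, the new image is allocated away from
every PV used by the existing image, not just segment by segment.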
2014-06-25 21:20:41 -05:00
Alasdair G Kergon
f29ae59a4d pvmove: add a few comments 2014-06-20 11:41:20 +01:00
Zdenek Kabelac
597de5c807 cleanup: use insert_layer_for_lv implicit rename
There is an implicit rename for certain layered devices.
Do it now for _tdata, _cdata and _corig.

TODO: use better API here...
2014-06-18 15:00:18 +02:00
Jonathan Brassow
5ebff6cc9f pvmove: Enable all-or-nothing (atomic) pvmoves
pvmove can be used to move single LVs by name or multiple LVs that
lie within the specified PV range (e.g. /dev/sdb1:0-1000).  When
moving more than one LV, the portions of those LVs that are in the
range to be moved are added to a new temporary pvmove LV.  The LVs
then point to the range in the pvmove LV, rather than the PV
range.

Example 1:
	We have two LVs in this example.  After they were
	created, the first LV was grown, yielding two segments
	in LV1.  So, there are two LVs with a total of three
	segments.

	Before pvmove:
	      ---------  ---------   ---------
	      | LV1s0 |  | LV2s0 |   | LV1s1 |
	      ---------  ---------   ---------
	         |           |           |
	   -------------------------------------
	PV | 000 - 255 | 256 - 511 | 512 - 767 |
	   -------------------------------------

	After pvmove inserts the temporary pvmove LV:
	          ---------   ---------   ---------
	          | LV1s0 |   | LV2s0 |   | LV1s1 |
	          ---------   ---------   ---------
	              |           |           |
	        -------------------------------------
	pvmove0 |   seg 0   |   seg 1   |   seg 2   |
	        -------------------------------------
	              |           |           |
	        -------------------------------------
	PV      | 000 - 255 | 256 - 511 | 512 - 767 |
	        -------------------------------------

	Each of the affected LV segments now points to a
	range of blocks in the pvmove LV, which purposefully
	corresponds to the segments moved from the original
	LVs into the temporary pvmove LV.

The current implementation goes on from here to mirror the temporary
pvmove LV by segment.  Further, as the pvmove LV is activated, only
one of its segments is actually mirrored (i.e. "moving") at a time.
The rest are either complete or not addressed yet.  If the pvmove
is aborted, those segments that are completed will remain on the
destination and those that are not yet addressed or in the process
of moving will stay on the source PV.  Thus, it is possible to have
a partially completed move - some LVs (or certain segments of LVs)
on the source PV and some on the destination.

Example 2:
	What 'example 1' might look like if it were half-way
	through the move.
	             ---------   ---------   ---------
	             | LV1s0 |   | LV2s0 |   | LV1s1 |
	             ---------   ---------   ---------
	                 |           |           |
	           -------------------------------------
	pvmove0    |   seg 0   |   seg 1   |   seg 2   |
	           -------------------------------------
	                 |           |           |
	                 |     -------------------------
	source PV        |     | 256 - 511 | 512 - 767 |
	                 |     -------------------------
	                 |           ||
	           -------------------------
	dest PV    | 000 - 255 | 256 - 511 |
	           -------------------------

This update allows the user to specify that they would like the
pvmove mirror created "by LV" rather than "by segment".  That is,
the pvmove LV becomes an image in an encapsulating mirror along
with the allocated copy image.

Example 3:
	A pvmove that is performed "by LV" rather than "by segment".

	                   ---------   ---------
	                   | LV1s0 |   | LV2s0 |
	                   ---------   ---------
	                       |           |
	                 -------------------------
	        pvmove0  |  * LV-level mirror *  |
	                 -------------------------
                             /                \
	   pvmove_mimage0   /          pvmove_mimage1
	   -------------------------   -------------------------
	   |   seg 0   |   seg 1   |   |   seg 0   |   seg 1   |
	   -------------------------   -------------------------
	        |            |               |           |
	   -------------------------   -------------------------
	   | 000 - 255 | 256 - 511 |   | 000 - 255 | 256 - 511 |
	   -------------------------   -------------------------
	           source PV                    dest PV

The thing that differentiates a pvmove done in this way from a simple
"up-convert" from linear to mirror is the preservation of the
distinct segments.  A normal up-convert would simply allocate the
necessary space with no regard for segment boundaries.  The pvmove
operation must preserve the segments because they are the critical
boundary between the segments of the LVs being moved.  So, when the
pvmove copy image is allocated, all corresponding segments must be
allocated.  The code that merges adjoining segments that are part of
the same LV when the metadata is written must also be avoided in
this case.  This method of mirroring is unique enough to warrant its
own definitional macro, MIRROR_BY_SEGMENTED_LV.  This joins the two
existing macros: MIRROR_BY_SEG (for original pvmove) and MIRROR_BY_LV
(for user created mirrors).

The advantage of performing pvmove in this way is that all of the
affected LVs can be moved together.  It is an all-or-nothing approach
that leaves all LV segments on the source PV if the move is aborted.
Additionally, a mirror log can be used (in the future) to provide tracking
of progress, allowing the copy to continue where it left off in the event
of a deactivation.
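
A minimal usage sketch, assuming the behaviour is exposed through
pvmove's --atomic option (device names are examples):

  pvmove --atomic /dev/sdb1 /dev/sdc1

If the move is aborted, every affected LV segment remains on /dev/sdb1;
nothing is left half-migrated.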
2014-06-17 22:59:36 -05:00
Peter Rajnoha
cfed0d09e8 report: select: refactor: move percent handling code to libdm for reuse 2014-06-17 16:27:21 +02:00
Peter Rajnoha
5abdb52fdc report: select: refactor: move str_list to libdm
The list of strings is used quite frequently and we'd like to reuse
this simple structure for report selection support too. Make it part
of libdevmapper for general reuse throughout the code.

This also simplifies the LVM code a bit since we don't need to
include and manage lvm-types.h anymore (the string list was the
only structure defined there).
2014-06-17 16:27:20 +02:00
Zdenek Kabelac
1a84032322 cleanup: indent 2014-05-23 21:37:12 +02:00
Zdenek Kabelac
bf6b69c46b cleanup: use segtype->name directly
Simplify printing of segtype name.
2014-05-23 21:36:55 +02:00
Zdenek Kabelac
952514611d cleanup: add seg_is_pool macro
Simplify code querying for pool segtype.
2014-05-23 21:36:55 +02:00
Peter Rajnoha
9e3e4d6994 config: differentiate command and metadata profiles and consolidate profile handling code
- When defining a configuration source, the code now uses separate
  CONFIG_PROFILE_COMMAND and CONFIG_PROFILE_METADATA markers
  (before, it was just CONFIG_PROFILE, which did not make the
  difference between the two). This helps when checking whether the
  configuration contains the correct set of options, which must all
  be in either the command-profilable or the metadata-profilable
  group without mixing these groups together - so it's a firm
  distinction. A "command profile" can't contain
  "metadata profile" settings and vice versa! This is strictly checked,
  and if the settings are mixed, such a profile is rejected and
  not used. So in the end, the CONFIG_PROFILE_COMMAND
  set of options and the CONFIG_PROFILE_METADATA set are mutually
  exclusive.

- Marking configuration with one or the other marker will also
  determine the way these configuration sources are positioned
  in the configuration cascade which is now:

  CONFIG_STRING -> CONFIG_PROFILE_COMMAND -> CONFIG_PROFILE_METADATA -> CONFIG_FILE/CONFIG_MERGED_FILES

- Marking configuration with one or the other marker will also make
  it possible to issue a command context refresh (this will probably
  be part of a future patch) if needed for settings in the command
  profile set. For settings in the metadata profile set this is
  impossible, since we can't refresh the cmd context in the middle of
  reading VG/LV metadata, and for each VG/LV separately, because each
  VG/LV can have a different metadata profile assigned and it's not
  possible to change these settings at this level.

- When a command profile is incorrect, it's rejected *and also* the
  command exits immediately - the profile *must* be correct for the
  command that was run with a profile to be executed. Before this
  patch, when the profile was found to be incorrect, there was just a
  warning message and the command continued without the profile
  applied. But it's more correct to exit immediately in this case.

- When a metadata profile is incorrect, we reject it during command
  runtime (as we know the profile name from the metadata, not early
  from the command line as in the case of command profiles) and we
  *do continue* with the command as we're in the middle of an operation.
  Also, the metadata profile is applied directly and on the fly on each
  find_config_tree_* fn call, and even if the metadata profile is
  found to be incorrect, we still need to return the non-profiled value
  as found in the other configuration provided, or the default value.
  To exit immediately even in this case, we'd need to refactor the
  existing find_config_tree_* fns so they can return an error. Currently,
  these fns return only config values (which end up as default
  values in the end if the config is not found).

- To check a profile's validity before use, to be sure it's correct,
  one can use:

    lvm dumpconfig --commandprofile/--metadataprofile ProfileName --validate

  (the --commandprofile/--metadataprofile for dumpconfig will come
   as part of the subsequent patch)

- This patch also adds a reference to --commandprofile and
  --metadataprofile in the cmd help string (which was missing before
  for --profile in some commands). We do not mention --profile
  now as people should use --commandprofile or --metadataprofile
  directly. However, --profile is still supported for backward
  compatibility and is translated as:

    --profile == --metadataprofile for lvcreate, vgcreate, lvchange and vgchange
                 (as these commands are able to attach profile to metadata)

    --profile == --commandprofile for all the other commands
                (--metadataprofile is not allowed there as it makes no sense)

- This patch also contains some cleanups to make the code handling
  the profiles more readable...
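
A short, hedged usage sketch (profile and VG names are examples; the
dumpconfig options arrive with the subsequent patch mentioned above):

  lvm dumpconfig --commandprofile my_cmd_profile --validate
  lvs --commandprofile my_cmd_profile
  vgchange --metadataprofile my_md_profile vg

(validate a command profile, run a single command with it applied, and
attach a metadata profile to a VG, respectively)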
2014-05-20 16:21:48 +02:00
Zdenek Kabelac
91284bd9b9 cleanup: device extent_size first 2014-05-18 20:08:07 +02:00
Zdenek Kabelac
58777756e8 debug: backtrace error path
Add backtrace for 'n' answer.
2014-05-18 20:07:24 +02:00
Alasdair G Kergon
3b989e317f allocation: Fix alloc anywhere with parity.
Take account of parity areas with alloc anywhere in
_calc_required_extents.  Extents beyond area_count were treated
incorrectly as mirror logs.
2014-05-14 16:25:43 +01:00
Zdenek Kabelac
8b95c82fed coverity: catch unwanted path
We already validate this path earlier.
2014-05-12 16:24:39 +02:00
Zdenek Kabelac
d11617864a coverity: error for undefined origin
If the origin were missing here, it would
be an internal error - since these values are validated
earlier.
2014-05-07 14:16:18 +02:00
Alasdair G Kergon
2eed136f0f signals: Move sigint handling out to lvm-signal. 2014-05-01 20:07:17 +01:00
Alasdair G Kergon
8980592514 alloc: Correct existing use of positional fill.
Perform two allocation attempts with cling if maximise_cling is set,
first with and then without positional fill.

Avoid segfaults from confusion between positional and sorted sequential
allocation when the number of stripes varies, as reported here:
https://www.redhat.com/archives/linux-lvm/2014-March/msg00001.html
2014-04-15 02:34:35 +01:00
Alasdair G Kergon
1bf4c3a1fa alloc: Introduce A_POSITIONAL_FILL.
Set A_POSITIONAL_FILL if the array of areas is being filled
positionally (with a slot corresponding to each 'leg') rather
than sequentially (with all suitable areas found, to be sorted
and selected from).
2014-04-15 01:13:47 +01:00
Alasdair G Kergon
c9a8264b8b alloc: Access alloc_parms from alloc_state.
alloc_parms is constant while allocating.
2014-04-15 01:05:34 +01:00
Zdenek Kabelac
84ff3ae703 pvmove: remove locked flag from error pvmove0
When pvmove0 is finished, it is temporarily replaced
with an error segment; however, in this case pvmove0 remains
unremovable if pvmove --abort is interrupted at this
moment - since it's no longer a pvmove LV, a normal
lvremove can't be used to remove the LOCKED LV.
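
A hedged sketch of the failure mode this addresses ('vg/pvmove0' is the
temporary LV that pvmove creates):

  pvmove --abort        # interrupted at just the wrong moment
  lvremove vg/pvmove0   # used to fail on the LOCKED LV; possible after this fix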
2014-04-14 12:52:32 +02:00
Alasdair G Kergon
a8d63994ea alloc: Refactor area reservation code.
No functional changes intended to be included in this patch.
2014-04-10 20:48:59 +01:00
Zdenek Kabelac
dc5a3c9964 debug: add internal error for passed LV
TODO: in fact we should pass the LV as a parameter.
2014-04-01 20:54:09 +02:00
Zdenek Kabelac
d58cc2c0fc cleanup: cache reuse code for pool test
Use the same error message for pool-associated devices.
2014-04-01 20:18:05 +02:00
Zdenek Kabelac
e72dea55bf cache: fix order of metadata change
Start changing the metadata only after it has been archived.
And since cache_pool is virtual, skip the deactivation
call for this LV.
2014-04-01 20:17:10 +02:00
Zdenek Kabelac
d3004479cc cache: use remove_layer_from_lv
Since the usability problems were fixed, we can use this function.
Clean up orphan LVs with TEMPORARY flags
(this reduces a couple of blkid error reports, but a couple of them
are still left...)
2014-04-01 20:17:10 +02:00
Zdenek Kabelac
0a15a210a5 cache: cache segment is non-discardable
Since the cache segment is a purely virtual mapping, it has nothing
to discard. What is discardable here is the cache origin, which is now
properly removed in the 'delete' phase.

A plain lv_empty() call needs only to detach the cache origin and leave
the origin unchanged.
2014-04-01 20:17:10 +02:00
Zdenek Kabelac
a018c57f0b cache: never activate cache pool
Since a cache-pool is purely an lvm abstraction-layer LV, it never
needs any device node, so do not add even an 'error' device for it.
2014-04-01 20:17:10 +02:00
Zdenek Kabelac
6570d36ad5 lv_rename: resume fail is certainly error
A failing resume path means the command has failed,
even when the commit was OK.
2014-03-31 12:03:25 +02:00
Zdenek Kabelac
c7a89323f5 fix spurious char
A mistyped char was left in the code.
2014-03-27 13:49:24 +01:00
Zdenek Kabelac
356fdda46d lv_manip: drop cmd pointer from for_each_sub_lv
Drop the unused cmd pointer passed to this function.

TODO:

We have two similar (though not identical) functions:

lv_manip.c: for_each_sub_lv()
metadata.c: _lv_each_dependency()

They do not always match - we should probably convert
them to use only a single function.
2014-03-27 13:10:13 +01:00
Zdenek Kabelac
22579b4451 lv_rename: validate renamed sublv
Use the proper vgmem memory pool for allocation of the LV name in the VG
and check that the new name for the renamed LV is valid.

TODO: validation should really also use the VG name, otherwise we are
not able to tell whether "vgname-lvname" will be valid.
2014-03-27 13:06:23 +01:00
Zdenek Kabelac
3a82490ee1 snapshot: wrap min_chunk test into a lib function
Create a separate function to validate the snapshot minimum chunk value
and relocate the code into the snapshot_manip file.

This function will then be shared with lvconvert.
2014-03-17 14:31:43 +01:00
Zdenek Kabelac
8a60cbcf45 thin: always activate and deactivate pool when creating
When we create a thin pool, we used a trick to keep
the volume active, but since we now support the TEMPORARY flag,
we can just locally activate & deactivate the metadata LV
and let the thin pool go through the normal activation process.
2014-03-12 00:16:27 +01:00
Peter Rajnoha
74a3fc4e85 config: add default for allocation/cache_pool_chunk_size
The same as for allocation/thin_pool_chunk_size - the default value
used is just a starting point. The calculation continues using the
properties of the devices actually used.
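
For example, the starting point can be inspected like this (a sketch,
assuming the dumpconfig --type default interface available at the time):

  lvm dumpconfig --type default allocation/cache_pool_chunk_size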
2014-03-06 11:34:02 +01:00
Peter Rajnoha
d27868e94f config: runtime default for allocation/thin_pool_chunk_size
The allocation/thin_pool_chunk_size is a bit more complex. Its default
value is evaluated at runtime based on the selected thin_pool_chunk_size_policy.
But the value is just a starting point. The calculation then continues
depending on the properties of the devices used. This means that for
such a default value, we know only the starting value.
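
A hedged sketch of what this means in practice (VG name and sizes are
examples):

  lvcreate -L 10G -T vg/pool            # chunk size starts from the policy-evaluated default
  lvcreate -L 10G -T vg/pool2 -c 512k   # an explicit chunk size overrides it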
2014-03-06 11:26:02 +01:00
Zhiqing Zhang
014ba37cb1 lvresize: fix stripe size validation
When the stripe size is twice the physical extent size,
the original code would not reduce the stripe size to its maximum
(the physical extent size).
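
A hedged illustration, assuming a VG with a 4MiB extent size (names and
values are examples):

  lvextend -L +1G -i 2 -I 8M vg/lv   # the 8M stripe size is now reduced to the 4M maximum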

Signed-off-by: Zhiqing Zhang <zhangzq.fnst@cn.fujitsu.com>
2014-02-26 13:25:50 +01:00
Zdenek Kabelac
e7d189baf7 allocation: add default path
Make it obvious to the compiler that extents is always defined
on the valid code path.
2014-02-25 09:36:26 +01:00
Alasdair G Kergon
b359b86f88 allocation: improve approx alloc with resize
Start to convert the percentage size handling in lvresize to the new
standard.  Note in the man pages that this code is incomplete.
Fix a regression in non-percentage allocation from my last check-in.

This is what I am aiming for:

-l<extents>
-l<percent> LV/ORIGIN
	sets or changes the LV size based on the specified quantity
	of logical logical extents (that might be backed by
	a higher number of physical extents)

-l<percent> PVS/VG/FREE
	sets or changes the LV size so as to allocate or free the
	desired quantity of physical extents (that might amount to a
	lower number of logical extents for the LV concerned)

-l+50%FREE - Use up half the remaining free space in the VG when
	carrying out this operation.

-l50%VG - After this operation, this LV should be using up half the
	space in the VG.

-l200%LV - Double the logical size of this LV.

-l+100%LV - Double the logical size of this LV.

-l-50%LV - Reduce the logical size of this LV by half.
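
Hedged concrete invocations matching the semantics above (VG/LV names
are examples):

  lvextend -l +50%FREE vg/lv   # grow into half of the VG's remaining free space
  lvresize -l 200%LV vg/lv     # make this LV twice its current logical size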
2014-02-24 22:48:23 +00:00
Jonathan Brassow
f3d1debb18 cache: Disallow resizing of cache related LVs
For now, disallow lvextend/lvreduce/lvresize of cache LVs, cache
pool LVs, and cache pool sub-LVs.
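
A hedged sketch of what is now rejected (names are examples):

  lvextend -L +1G vg/cached_lv      # refused while the LV is cache-related
  lvreduce -L -512M vg/cache_pool   # likewise refused for cache pool LVs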
2014-02-24 10:19:50 -06:00
Alasdair G Kergon
13c3f53f55 allocation: misc fixes for percent/raid rounding
Several fixes for the recent changes that treat allocation percentages
as upper limits.
Improve messages to make it easier to see what is happening.
Fix some cases that failed with errors when they didn't need to.
Fix crashes when first_seg() returns NULL.
Remove a couple of log_errors that were actually debugging messages.
2014-02-22 00:26:01 +00:00
Jonathan Brassow
00ce01e52d cache-pool: Change segtype name from cache_pool to cache-pool
Thin pools use "thin-pool" for the segment type name.  To be consistent,
we use "cache-pool" instead of "cache_pool".
2014-02-19 09:26:03 -06:00