mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

2569 Commits

Author SHA1 Message Date
Zdenek Kabelac
11eaf1c98c thin: add function pool_is_active
This internal function checks for an active pool device.
For a clustered VG it checks every thin volume;
on a non-clustered VG we only need to check
for the presence of the -tpool device.
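A minimal shell sketch of the same check (the VG name "vg" and pool name "pool" are hypothetical): on a non-clustered host the pool counts as active when its hidden -tpool layer device exists in device-mapper.

  # hypothetical names; the -tpool suffix is the hidden pool layer device
  dmsetup info -c vg-pool-tpool >/dev/null 2>&1 && echo "pool is active"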
2013-02-05 14:35:44 +01:00
Zdenek Kabelac
9d445f371c report: leave empty report field for 0
Since we do not support LVs of size 0, use this value
as the 'error' value for devices without an origin, and leave
this field blank, as in other such cases.
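As an illustration (assuming the field in question is the origin size and "vg" is a hypothetical VG name), an LV with no origin now simply shows an empty column:

  # hypothetical VG name; origin_size stays blank for LVs without an origin
  lvs -o lv_name,origin,origin_size vg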
2013-02-05 14:32:37 +01:00
Zdenek Kabelac
be5ad90703 lvconvert: fix accepting second lv name
Do not accept a second LV name on the lvconvert --thinpool
command line.
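A hypothetical invocation for reference (VG and pool names are examples): only the single pool LV is named on the command line.

  # hypothetical names; a second positional LV name is now rejected
  lvconvert --thinpool vg/pool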
2013-02-05 14:31:17 +01:00
Zdenek Kabelac
a4870c79ca thin: use noflush for obtaining transaction_id
Do not flush thin pool data when reading the transaction_id status.
2013-02-04 19:05:56 +01:00
Zdenek Kabelac
ca7abbce8a activate: add lv_layer function
Add a function to return the layer name for an LV.
2013-02-04 19:01:10 +01:00
Jonathan Brassow
38e7b37c89 WHATS_NEW: Better description of previous change 2013-02-01 11:52:25 -06:00
Jonathan Brassow
801d4f96a8 RAID: Improve 'lvs' attribute reporting of RAID LVs and sub-LVs
There are currently a few issues with the reporting done on RAID LVs and
sub-LVs.  The most concerning is that 'lvs' does not always report the
correct failure status of individual RAID sub-LVs (devices).  This can
occur when a device fails and is restored after the failure has been
detected by the kernel.  In this case, 'lvs' would report all devices are
fine because it can read the labels on each device just fine.
Example:
[root@bp-01 lvm2]# lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   rwi-a-r--   100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0] vg   iwi-aor--          /dev/sda1(1)
  [lv_rimage_1] vg   iwi-aor--          /dev/sdb1(1)
  [lv_rmeta_0]  vg   ewi-aor--          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor--          /dev/sdb1(0)

However, 'dmsetup status' on the device tells us a different story:
  [root@bp-01 lvm2]# dmsetup status vg-lv
  0 1024000 raid raid1 2 DA 1024000/1024000

In this case, we must also be sure to check the RAID LV's kernel status
in order to get the proper information.  Here is an example of the correct
output that is displayed after this patch is applied:
[root@bp-01 lvm2]# lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   rwi-a-r-p   100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0] vg   iwi-aor-p          /dev/sda1(1)
  [lv_rimage_1] vg   iwi-aor--          /dev/sdb1(1)
  [lv_rmeta_0]  vg   ewi-aor-p          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor--          /dev/sdb1(0)

The other case where 'lvs' gives incomplete or improper output is when a
device is replaced or added to a RAID LV.  It should display that the RAID
LV is in the process of sync'ing and that the new device is the only one
that is not-in-sync - as indicated by a leading 'I' in the Attr column.
(Remember that 'i' indicates an (i)mage that is in-sync and 'I' indicates
an (I)mage that is not in sync.)  Here's an example of the old incorrect
behaviour:
[root@bp-01 lvm2]# lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   rwi-a-r--   100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0] vg   iwi-aor--          /dev/sda1(1)
  [lv_rimage_1] vg   iwi-aor--          /dev/sdb1(1)
  [lv_rmeta_0]  vg   ewi-aor--          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor--          /dev/sdb1(0)
[root@bp-01 lvm2]# lvconvert -m +1 vg/lv; lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   rwi-a-r--     0.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0] vg   Iwi-aor--          /dev/sda1(1)
  [lv_rimage_1] vg   Iwi-aor--          /dev/sdb1(1)
  [lv_rimage_2] vg   Iwi-aor--          /dev/sdc1(1)
  [lv_rmeta_0]  vg   ewi-aor--          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor--          /dev/sdb1(0)
  [lv_rmeta_2]  vg   ewi-aor--          /dev/sdc1(0)
** Note that all the images are currently marked as 'I' even though it is
   only the last device, the one that has just been added, that should be marked.

Here is an example of the correct output after this patch is applied:
[root@bp-01 lvm2]# lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   rwi-a-r--   100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0] vg   iwi-aor--          /dev/sda1(1)
  [lv_rimage_1] vg   iwi-aor--          /dev/sdb1(1)
  [lv_rmeta_0]  vg   ewi-aor--          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor--          /dev/sdb1(0)
[root@bp-01 lvm2]# lvconvert -m +1 vg/lv; lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   rwi-a-r--     0.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0] vg   iwi-aor--          /dev/sda1(1)
  [lv_rimage_1] vg   iwi-aor--          /dev/sdb1(1)
  [lv_rimage_2] vg   Iwi-aor--          /dev/sdc1(1)
  [lv_rmeta_0]  vg   ewi-aor--          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor--          /dev/sdb1(0)
  [lv_rmeta_2]  vg   ewi-aor--          /dev/sdc1(0)
** Note only the last image is marked with an 'I'.  This is correct and we can
   tell that it isn't the whole array that is sync'ing, but just the new
   device.

It also works under snapshots...
[root@bp-01 lvm2]# lvs -a -o name,vg_name,attr,copy_percent,devices vg
  LV            VG   Attr      Cpy%Sync Devices
  lv            vg   owi-a-r-p    33.47 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0] vg   iwi-aor--          /dev/sda1(1)
  [lv_rimage_1] vg   Iwi-aor-p          /dev/sdb1(1)
  [lv_rimage_2] vg   Iwi-aor--          /dev/sdc1(1)
  [lv_rmeta_0]  vg   ewi-aor--          /dev/sda1(0)
  [lv_rmeta_1]  vg   ewi-aor-p          /dev/sdb1(0)
  [lv_rmeta_2]  vg   ewi-aor--          /dev/sdc1(0)
  snap          vg   swi-a-s--          /dev/sda1(51201)
2013-02-01 11:33:54 -06:00
Peter Rajnoha
f7da1caf8d blkdeactivate: fix handling of nested mountpoints and mangled mount paths.
If there was a nested mountpoint inside an existing mount path,
blkdeactivate could fail to unmount it, since the deepest
mountpoint must be unmounted first before continuing upwards.

For example the simplest reproducer:

[root@rhel6-a ~]# lsblk
NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                           8:0    0    4G  0 disk
|-vg-lvol0 (dm-2)           253:2    0   32M  0 lvm  /mnt/a
`-vg-lvol1 (dm-3)           253:3    0   32M  0 lvm  /mnt/a/b

Before this patch:

[root@rhel6-a ~]# blkdeactivate -u
Deactivating block devices:
  UMOUNT: unmounting vg-lvol0 (dm-2) mounted on /mnt/a
umount: /mnt/a: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
  UMOUNT: unmounting vg-lvol1 (dm-3) mounted on /mnt/a/b
  LVM: deactivating Logical Volume vg/lvol1

(deactivation of vg/lvol0 is skipped as /mnt/a, which is on lvol0,
can't be unmounted - it still has /mnt/a/b as a nested mountpoint!)

With this patch applied:

[root@rhel6-a ~]# blkdeactivate -u
Deactivating block devices:
  UMOUNT: unmounting vg-lvol1 (dm-3) mounted on /mnt/a/b
  UMOUNT: unmounting vg-lvol0 (dm-2) mounted on /mnt/a
  LVM: deactivating Logical Volume vg/lvol0
  LVM: deactivating Logical Volume vg/lvol1

===

Also, this patch contains a fix for processing mangled mount paths:

[root@rhel6-a ~]# lsblk
NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                           8:0    0    4G  0 disk
`-vg-lvol0 (dm-2)           253:2    0   32M  0 lvm  /mnt/x y z

[root@rhel6-a ~]# lsblk -r
vg-lvol0 253:2 0 32M 0 lvm /mnt/x\x20y\x20z

(the mount path is mangled with \xNN escapes, visible in raw
lsblk output only, which is what blkdeactivate uses as well)

Before this patch:

[root@rhel6-a ~]# blkdeactivate -u
Deactivating block devices:
  umount: /mnt/x\x20y\x20z: not found

After this patch is applied:

[root@rhel6-a ~]# blkdeactivate -u
Deactivating block devices:
  UMOUNT: unmounting vg-lvol0 (dm-2) mounted on /mnt/x\x20y\x20z
  LVM: deactivating Logical Volume vg/lvol0
2013-01-23 14:45:41 +01:00
Zdenek Kabelac
8bcc1da2f3 locales: use higher prio LC_ALL variable
To reset the locale environment to the significantly less memory
consuming version 'C', use LC_ALL instead of LANG since it has
higher priority in locale settings.

Otherwise we may observe the whole locale-archive, which might be
over 100MB on e.g. Fedora systems, locked in memory by
some daemons.
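A quick illustration of the priority rule this relies on (purely a shell demonstration, not part of the change itself):

  # LC_ALL overrides LANG, so the effective locale becomes "C"
  LANG=en_US.UTF-8 LC_ALL=C locale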
2013-01-22 11:25:02 +01:00
Petr Rockai
142c4bf9f0 Update WHATS_NEW. 2013-01-16 11:22:08 +01:00
Zdenek Kabelac
2b760a7fa7 whatsnew 2013-01-11 09:26:51 +01:00
Alasdair G Kergon
7f747a0d73 logging: add debug classes
Add log/debug_classes to lvm.conf to allow debug messages to be
classified and filtered at runtime.

The dm_errno field is only used by log_error(), so I've redefined it
for log_debug() messages to hold the message class.

By default, all existing messages appear, but we can add categories that
generate high volumes of data, such as logging all traffic to/from
lvmetad.
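A hedged example of how the new setting could be used; the class name and VG name are illustrative only, and the exact set of accepted classes is the one documented in lvm.conf:

  # hypothetical: restrict debug output to one message class for a single run
  lvs -vvvv --config 'log { debug_classes = [ "metadata" ] }' vg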
2013-01-07 22:25:19 +00:00
Peter Rajnoha
ad85b0c526 pvscan: synchronize with udev if pvscan --cache is used.
We need to call sync_local_dev_names directly as pvscan uses the
VG_GLOBAL lock, and this one *does not* cause the synchronization
(sync_dev_names) to be called on unlock (VG_GLOBAL is not a real VG):

#define unlock_vg(cmd, vol) \
  do { \
    if (is_real_vg(vol)) \
      sync_dev_names(cmd); \
    (void) lock_vol(cmd, vol, LCK_VG_UNLOCK); \
  } while (0)

Without this fix, we end up without udev synchronization for the
pvscan --cache (mainly for -aay that causes the VGs/LVs to be
autoactivated) and also udev synchronization cookies are then left
in the system since they're not managed properly (code before sets
up udev sync cookies, but we have to call dm_udev_wait at least once
after that to do the wait and cleanup).
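For context, a hypothetical invocation of the affected code path (the device name is an example); this is the kind of call the udev rules make, which now also waits for udev properly:

  # hypothetical device; scan one PV into lvmetad and autoactivate its VG
  pvscan --cache -aay /dev/sda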
2012-12-21 11:15:46 +01:00
Peter Rajnoha
756bcabbfe activation: fix autoactivation to not trigger on each PV change
Before, the pvscan --cache -aay was called on each ADD and CHANGE
uevent (for a device that is not a device-mapper device) and each CHANGE
event (for a PV that is a device-mapper device).

This causes trouble with autoactivation in some cases, as a CHANGE event
may originate from the OPTIONS+="watch" udev rule that is defined
in 60-persistent-storage.rules (part of the rules provided by udev
directly) and is used for all block devices
(except fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md* devices). For example, the
following sequence incorrectly activates the rest of the LVs in a VG if one
of the LVs in the VG is being removed:

[root@rhel6-a ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created

[root@rhel6-a ~]# vgcreate vg /dev/sda
  Volume group "vg" successfully created

[root@rhel6-a ~]# lvcreate -l1 vg
  Logical volume "lvol0" created

[root@rhel6-a ~]# lvcreate -l1 vg
  Logical volume "lvol1" created

[root@rhel6-a ~]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active

[root@rhel6-a ~]# lvs
  LV      VG        Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0   vg        -wi------   4.00m
  lvol1   vg        -wi------   4.00m

[root@rhel6-a ~]# lvremove -ff vg/lvol1
  Logical volume "lvol1" successfully removed

[root@rhel6-a ~]# lvs
  LV      VG        Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0   vg        -wi-a----   4.00m

...so the vg was deactivated, then lvol1 removed, and we end up with
lvol1 removed (which is ok) BUT with lvol0 activated (which is wrong)!!!
This is because after lvol1 removal, we need to write metadata to the
underlying device /dev/sda and that causes the CHANGE event to be
generated (because of the WATCH udev rule set on this device) and this
causes the pvscan --cache -aay to be reevaluated.

We have to limit this and call pvscan --cache -aay to autoactivate
VGs/LVs only in these cases:

 --> if the *PV is not a dm device*, scan only after proper device
addition (ADD event) and not with any other changes (CHANGE event)

 --> if the *PV is a dm device*, scan only after proper mapping
activation (CHANGE event + the underlying PV in a state "just
activated")
2012-12-21 10:34:48 +01:00
Jonathan Brassow
970dfbcd69 RAID: Limit replacement of devices when array is not in-sync.
If a RAID array is not in-sync, replacing devices should not be allowed
as a general rule.  This is because the contents used to populate the
incoming device may be undefined, because the devices being read were
not in-sync.  The kernel enforces this rule (unless overridden) by not
allowing the creation of an array that is not in-sync and includes a
device that needs to be rebuilt.

Since we cannot know the sync state of an LV if it is inactive, we must
also enforce the rule that an array must be active to replace devices.

That leaves us with the following conditions:
1) never allow replacement or repair of devices if the LV is inactive
2) never allow replacement if the LV is not in-sync
3) allow repair if the LV is not in-sync, but warn that contents may
   not be recoverable.

In the case where a user is performing the repair on the command line via
'lvconvert --repair', the warning is printed before the user is prompted
if they would like to replace the device(s).  If the repair is automated
(i.e. via dmeventd and policy is "allocate"), then the device is replaced
if possible and the warning is printed.
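Hedged command-line sketches of the two paths described above (VG, LV and device names are hypothetical):

  # repair of a not-in-sync RAID LV: warns, then prompts for replacement
  lvconvert --repair vg/raidlv
  # explicit replacement is refused while the LV is not in-sync
  lvconvert --replace /dev/sdb1 vg/raidlv /dev/sdd1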
2012-12-18 14:40:42 -06:00
Peter Rajnoha
0379c480e0 WHATS_NEW: changelog for fae1a611d2 and 5294a6f77a 2012-12-18 12:12:58 +01:00
Zdenek Kabelac
401c9aba4a pv_read: add missing check for valid info
If lvmcache_info_from_pvid() fails to find valid
info, invoke the lookup by dev, and only in that case
call lvmcache_info_from_pvid() again.

Also check the result of info and return an
error directly, so NULL is not passed
to lvmcache_get_label().
2012-12-15 17:23:27 +01:00
Zdenek Kabelac
3e8dbfaecf lvmetad: add check for failure dm_config_write_node
Detect if dm_config_write_node failed and fail correctly.
2012-12-15 17:23:27 +01:00
Zdenek Kabelac
4008f4f891 lvmetad: fix socket leak in handle_connect
Close socket_fd and report error on malloc failure.
2012-12-15 17:23:27 +01:00
Zdenek Kabelac
e012d0635d lvmetad: check id_read_format error status
Detect error from id_read_format() function.
2012-12-15 17:23:27 +01:00
Zdenek Kabelac
ba3f37c9e4 lvmetad: fix memleak on pv_found error path
Free resources allocated in pv_found when exiting
through the error path.
2012-12-15 17:23:27 +01:00
Zdenek Kabelac
a4269aadf3 lvmetad: unlock vg on out-of-memory path
If we fail to allocate memory for the mutex, fail to hash the mutex,
or fail somewhere along the pthread function calls,
release the allocated resources and unlock the vg_lock_map mutex.
2012-12-15 17:23:26 +01:00
Zdenek Kabelac
788ac7fa54 libdaemon: check for strdup result
Detect failure of dm_pool_strdup() and print an error in the failure path.
Save one extra strchr call, since we already know the offset
of the '=' character.

Drop stack trace from return after log_error().
2012-12-15 17:23:26 +01:00
Zdenek Kabelac
ff5612c0c3 format-text: check for _text_create_text_instance
Test whether 'fid' creation failed and report a stack trace,
break the loop, and do not pass the NULL fid further.
2012-12-15 17:23:23 +01:00
Zdenek Kabelac
740ab81d03 log: move abort past syslog
When abort_on_internal_errors is enabled, we aborted before
the syslog logging output.

Since such a fatal error gets level _LOG_FATAL, it should
not be blocked by the debug_level() check, so move it further down
to get the abort error logged via syslog as well.
2012-12-15 17:22:48 +01:00
Andy Grover
58b61c252a python-lvm: Small fixups to new create_lv_snapshot
Tabify

Remove use of asize, unneeded.

Don't initialize lvobj->parent_vgobj to NULL, the object ctor already
zeroed everything on alloc.

Redo call to lvm_lv_snapshot to use the liblvm snapshot implementation
we went with.

Add {}s to silence warning in lv_dealloc.

Rename snapshot function for consistency.

Update WHATS_NEW.

Signed-off-by: Andy Grover <agrover@redhat.com>
2012-12-14 10:30:26 -08:00
Petr Rockai
c089029b70 Update WHATS_NEW. 2012-12-12 15:17:08 +01:00
Peter Rajnoha
e5709a32be lvmetad: fix compiler warning and add WHATS_NEW line for previous commit 2012-12-12 13:27:25 +01:00
Peter Rajnoha
cad22be394 lvconvert: allow lvconvert --stripes/--stripesize only with --mirrors/--repair/--thinpool
Also, update the lvconvert man page to reflect this and make clear that
--stripes/--stripesize apply to newly allocated space only.
2012-12-11 15:50:25 +01:00
Zdenek Kabelac
ec49f07b0d mirrors: fix leak in device_is_usable mirror check
The function _ignore_blocked_mirror_devices was not releasing the
allocated strings images_health and log_health.

In error paths it was also not releasing the dm_task structure.

Swapped the return code of _ignore_blocked_mirror_devices and
use 1 as success.

In _parse_mirror_status use log_error if memory allocation
fails, and for a few more errors, so they do not go unnoticed
as debug messages.

On the error path always clear return values and free strings.

For dev_create_file use the cache mem pool to avoid a memleak.
2012-12-11 11:15:22 +01:00
Peter Rajnoha
f942ae4a7a lvconvert: do not ignore -f in lvconvert --repair -y -f 2012-12-11 09:52:54 +01:00
Jonathan Brassow
3835755259 pvmove/RAID: Disallow pvmove on RAID LVs until properly handled
Attempting pvmove on RAID LVs replaces the kernel RAID target with
a temporary pvmove target, ultimately destroying the RAID LV.  pvmove
must be prevented on RAID LVs for now.

Use 'lvconvert --replace old_pv vg/lv new_pv' if you want to move
an image of the RAID LV.
2012-12-04 17:47:47 -06:00
Peter Rajnoha
e2be2652ad Allow empty activation/{auto_activation|read_only|}_volume_list config option.
For the case where we don't want to activate or autoactivate, or want the
VG/LV read-only. Primarily targeted at auto_activation_volume_list,
but it does no harm for the other settings (the part of the code
that reads these three settings is shared, and there's no
reason to separate it just for this change).
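A hedged illustration of what an explicitly empty list means in practice (the VG name is an example); passing it via --config is equivalent to editing lvm.conf:

  # autoactivate nothing: an empty list now disables autoactivation entirely
  vgchange -aay --config 'activation { auto_activation_volume_list = [ ] }' vg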
2012-12-04 10:33:54 +01:00
Zdenek Kabelac
5ec20e267f thin: reworked thin feature detection
Rework thin feature detection to support a runtime
configuration section that allows disabling features selectively.

A new lvm.conf option is born: global/thin_disabled_features
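A hedged usage sketch (the feature name "discards" and the VG/pool names are illustrative; the accepted feature names are those the detection code knows about):

  # hypothetical: create a thin pool while treating discards as unsupported
  lvcreate -L 1G -T vg/pool --config 'global { thin_disabled_features = [ "discards" ] }'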
2012-12-03 11:57:40 +01:00
Zdenek Kabelac
99018b37ee thin: lvconvert supports swapping metadata device
Support swapping of the metadata device if the thin pool already
exists. This makes it easy to e.g. resize the metadata or
repair it.

The user may create an empty LV, replace the existing metadata,
or dump it and restore it into a bigger LV.
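A hedged sketch of the swap itself (all LV names are hypothetical):

  # hypothetical: swap a freshly created LV in as the pool's metadata device
  lvcreate -L 128M -n new_meta vg
  lvconvert --thinpool vg/pool --poolmetadata vg/new_meta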
2012-12-02 18:01:27 +01:00
Zdenek Kabelac
6987a353de thin: add detach_pool_metadata_lv
Add internal function detach_pool_metadata_lv().
2012-12-02 17:56:29 +01:00
Zdenek Kabelac
9ec474f38a lvm2api: fix size reporting
The API reports all sizes as 64-bit integers in bytes.
Fix the places where sectors were returned,
to remain consistent.
2012-12-02 17:55:08 +01:00
Peter Rajnoha
4891a735d3 udev: recognize DM_DISABLE_UDEV environment variable
Setting this environment variable will cause a full fallback
to old direct node and symlink management in libdevmapper and lvm2.

It means:

 - disabling udev synchronization
   (--noudevsync in dmsetup and --noudevsync + activation/udev_sync=0
    lvm2 config)
 - disabling dm and any subsystem related udev rules
   (--noudevrules in dmsetup and activation/udev_rules=0 lvm2 config)
 - management of nodes/symlinks under /dev directly by libdevmapper/lvm2
   (--verifyudev in dmsetup and activation/verify_udev_operations=1
    lvm2 config)
 - not obtaining any device list from udev database
   (devices/obtain_device_list_from_udev=0 lvm2 config)

Note: we could set all of these before - there's no functional change!
However, the DM_DISABLE_UDEV environment variable is a nice shortcut
that makes it easier for libdevmapper users to switch all
of the udev management off in one go directly on the command line,
without needing to modify any source or add any extra switches.
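A hedged example of the shortcut in action (the VG name is hypothetical); the variable is honoured by both dmsetup and the lvm2 tools:

  # hypothetical: deactivate a VG with all udev interaction switched off
  DM_DISABLE_UDEV=1 vgchange -an vg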
2012-11-29 14:03:48 +01:00
Peter Rajnoha
fb8cc7c63f udev: do not verify udev operations for --noudevsync
If udev synchronization is disabled by means of the --noudevsync
option, we should disable just the synchronization and nothing else.
The udev fallback (verifying udev operations and fixing the
nodes/symlinks if found incorrect) is orthogonal and controlled
by a separate activation/verify_udev_operations configuration option.
2012-11-29 13:59:12 +01:00
Zdenek Kabelac
0387e70d76 thin: fix property discard for lvm2api
The discards property is a string and may have these values:
  ignore, nopassdown, passdown
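For reference, the same property as seen from the reporting side (VG and pool names are hypothetical):

  # hypothetical: show the discards setting of a thin pool
  lvs -o lv_name,discards vg/pool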
2012-11-27 14:09:49 +01:00
Zdenek Kabelac
09b7ceea95 thin: allow restore with --force
Allow restoring metadata for VGs with thin pool volumes.
No validation is done for this case within the vgcfgrestore tool,
thus incorrect metadata may lead to destruction of pool content.
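A hedged example of the now-required invocation (the VG name is hypothetical; the most recent automatic backup is used when no file is given):

  # hypothetical: force the restore despite the VG containing thin pools
  vgcfgrestore --force vg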
2012-11-27 14:08:24 +01:00
Alasdair G Kergon
8c49aa79e7 filters: Add STEC skd and Violin vtms devices 2012-11-26 14:55:17 +00:00
Zdenek Kabelac
1ef9831018 thin: support configurable thin pool defaults
Configurable settings for thin pool creation,
used if they are not specified on the command line.

New supported lvm.conf options are:
  allocation/thin_pool_chunk_size
  allocation/thin_pool_discards
  allocation/thin_pool_zero
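A hedged example of overriding one of these defaults for a single run (VG/pool names and the chunk size value are illustrative; the same setting can live in lvm.conf):

  # hypothetical: create a thin pool with an overridden 256KiB chunk size default
  lvcreate -L 1G -T vg/pool --config 'allocation { thin_pool_chunk_size = 256 }'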
2012-11-26 12:16:47 +01:00
Zdenek Kabelac
683b1f0625 thin: detect discards for non-power-2
Check whether the target supports discards for chunk sizes
that are not a power of 2 (just a multiple of 64K),
and enable discards in that case if the thin kernel target supports it.
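A hedged example of such a pool (names and sizes are illustrative); 192K is a multiple of 64K but not a power of 2:

  # hypothetical: non-power-of-2 chunk size that can now still use discards
  lvcreate -L 1G -T vg/pool -c 192k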
2012-11-26 12:14:47 +01:00
Petr Rockai
60668f823e Automatically restore MISSING PVs with no MDAs. 2012-11-25 20:41:56 +01:00
Jonathan Brassow
b3e9a09abe RAID: If no stripes argument is given for RAID10 create, default to 2
Similar to the way the 'mirror', 'raid1' and 'raid10' segment types set
the number of mirrors to 2 ('-m 1') if the argument is not specified,
here we set the number of stripes to 2 if not given on the command line
when creating a RAID10 LV.
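A hedged example relying on the new default (names are illustrative); with neither -i nor -m given, this behaves like -i 2 -m 1 and needs four PVs:

  # hypothetical: RAID10 LV created with the default of 2 stripes
  lvcreate --type raid10 -L 1G -n lv vg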
2012-11-21 18:46:52 -06:00
Jonathan Brassow
fb0cee9a66 RAID: Do not allow --splitmirrors on RAID10 logical volumes.
RAID10 does not have the ability to split off images for independent
use.  So, 'lvconvert --splitmirrors' will not work and must be
disallowed.
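For clarity, a hedged example of the now-rejected operation on a raid10 LV (names are illustrative):

  # hypothetical: splitting an image off a raid10 LV is refused
  lvconvert --splitmirrors 1 --name split vg/lv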
2012-11-21 18:39:26 -06:00
Zdenek Kabelac
d5697b29ee mm: skip mlocking [vectors]
Somehow forgotten:
https://www.redhat.com/archives/linux-lvm/2012-June/msg00019.html
Needed for ARM architecture support.
2012-11-20 10:02:51 +01:00
Zdenek Kabelac
b21d3e3592 thin: lvconvert update
Use the common function from toollib and support allocation
of a metadata LV with a given thin pool data LV.
2012-11-19 14:38:17 +01:00
Zdenek Kabelac
f4137640f6 thin: add common pool functions
Move common functions for lvcreate and lvconvert.

get_pool_params() - reads thin pool args.
update_pool_params() - updates/validates some thin args.

It is getting complicated and a few more things will be
implemented, so to avoid reimplementing things differently
in lvcreate and lvconvert, the code has been split
into 2 common functions that allow some future extension.
2012-11-19 14:38:17 +01:00