#!/bin/sh
# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
. lib/test
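
# recreate small data ($lv1) and metadata ($lv2) LVs between the test cases below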
prepare_lvs()
{
	lvremove -f $vg
	lvcreate -L10M -n $lv1 $vg
	lvcreate -L8M -n $lv2 $vg
}
#
# Main
#
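# the whole test is skipped unless thin provisioning (target version >= 1.0.0) is available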
aux have_thin 1 0 0 || skip
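# prepare 4 small test PVs (the second argument is the size, presumably in MB)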
aux prepare_pvs 4 64
# build one large PV
vgcreate $vg1 $(cut -d ' ' -f -3 DEVICES)
# 32bit Linux kernels are fragile with device sizes >= 16T
# maybe check uname -m [ x86_64 | i686 ]
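# (an explicit check could look like:  case "$(uname -m)" in i?86) TSIZE=15T ;; esac
#  but the can_use_16T helper below already covers this)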
TSIZE=64T
aux can_use_16T || TSIZE=15T
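# a sparse LV (-s with --virtualsize, no origin) fakes one huge device;
# only blocks that are actually written consume real space in $vg1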
lvcreate -s -l 100%FREE -n $lv $vg1 --virtualsize $TSIZE
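# let the new LV pass the LVM device filter so it can be used as a PV below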
aux extend_filter_LVMTEST
pvcreate "$DM_DEV_DIR/$vg1/$lv"
vgcreate $vg -s 64K $(cut -d ' ' -f 4 DEVICES) "$DM_DEV_DIR/$vg1/$lv"
# create mirrored LVs for data and metadata volumes
lvcreate -aey -L10M --type mirror -m1 --mirrorlog core -n $lv1 $vg
lvcreate -aey -L10M -n $lv2 $vg
lvchange -an $vg/$lv1
# conversion fails for mirror segment type
not lvconvert --thinpool $vg/$lv1
not lvconvert --thinpool $vg/$lv2 --poolmetadata $vg/$lv1
lvremove -f $vg
# create RAID LVs for data and metadata volumes
lvcreate -aey -L10M --type raid1 -m1 -n $lv1 $vg
lvcreate -aey -L10M --type raid1 -m1 -n $lv2 $vg
lvchange -an $vg/$lv1
# conversion fails for internal volumes
not lvconvert --thinpool $vg/${lv1}_rimage_0
not lvconvert --thinpool $vg/$lv1 --poolmetadata $vg/${lv2}_rimage_0
# can't use --readahead with --poolmetadata
not lvconvert --thinpool $vg/$lv1 --poolmetadata $vg/$lv2 --readahead 512
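# this conversion succeeds: $lv1 becomes the thin pool data and $lv2 its metadata LV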
lvconvert --thinpool $vg/$lv1 --poolmetadata $vg/$lv2
prepare_lvs
lvconvert -c 64 --stripes 2 --thinpool $vg/$lv1 --readahead 48
lvremove -f $vg
lvcreate -L1T -n $lv1 $vg
lvconvert -c 8M --thinpool $vg/$lv1
lvremove -f $vg
# test with bigger sizes
lvcreate -L1T -n $lv1 $vg
lvcreate -L8M -n $lv2 $vg
lvcreate -L1M -n $lv3 $vg
# chunk size bigger than the size of the thin pool data fails
not lvconvert -c 1G --thinpool $vg/$lv3
# stripes can't be used with poolmetadata
not lvconvert --stripes 2 --thinpool $vg/$lv1 --poolmetadata $vg/$lv2
# too small metadata (<2M)
not lvconvert -c 64 --thinpool $vg/$lv1 --poolmetadata $vg/$lv3
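# thin pool chunk size is limited to the 64KiB..1GiB range (and must be a power of 2 here)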
# too small chunk size fails
not lvconvert -c 4 --thinpool $vg/$lv1 --poolmetadata $vg/$lv2
# too big chunk size fails
not lvconvert -c 2G --thinpool $vg/$lv1 --poolmetadata $vg/$lv2
# negative chunk size fails
not lvconvert -c -256 --thinpool $vg/$lv1 --poolmetadata $vg/$lv2
# non power of 2 fails
not lvconvert -c 88 --thinpool $vg/$lv1 --poolmetadata $vg/$lv2
# Warning about chunk size smaller than suggested
lvconvert -c 256 --thinpool $vg/$lv1 --poolmetadata $vg/$lv2 2>&1 | tee err
grep "WARNING: Chunk size is smaller" err
lvremove -f $vg
lvcreate -L1T -n $lv1 $vg
lvcreate -L32G -n $lv2 $vg
# Warning about metadata bigger than needed
lvconvert --thinpool $vg/$lv1 --poolmetadata $vg/$lv2 2>&1 | tee err
grep "WARNING: Maximum size" err
lvremove -f $vg
if test "$TSIZE" = "64T" ; then
	lvcreate -L24T -n $lv1 $vg
	# Warning about chunk size too small (24T data with a 16G metadata limit needs >= 128K chunks)
	lvconvert -c 64 --thinpool $vg/$lv1 2>&1 | tee err
	grep "WARNING: Chunk size is too small" err
fi
#lvs -a -o+chunk_size,stripe_size,seg_pe_ranges
# Conversion of a pool to mirror or RAID is unsupported
not lvconvert --type mirror -m1 $vg/$lv1
not lvconvert --type raid1 -m1 $vg/$lv1
vgremove -ff $vg