#!/usr/bin/env bash
# Copyright (C) 2012,2016 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
# 'Exercise creation, extension and conversion of large (200TiB+) RAID LVs'
SKIP_WITH_LVMLOCKD=1
SKIP_WITH_LVMPOLLD=1
. lib/inittest
# FIXME update test to make something useful on <16T
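# can_use_16T skips the run when the test harness cannot create >16TiB
# devices (e.g. on 32-bit systems); have_raid 1 3 0 requires at least
# version 1.3.0 of the dm-raid target.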
aux can_use_16T || skip
aux have_raid 1 3 0 || skip
segtypes="raid5"
aux have_raid4 && segtypes="raid4 raid5"
# Prepare 5x ~1P sized devices
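# (prepare_pvs takes the PV size in MiB, so 1000000000 is just under 1 PiB
# per PV; the harness backs these devices sparsely, so no real petabyte
# of storage is needed)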
aux prepare_pvs 5 1000000000
vgcreate $vg1 $(< DEVICES)
aux lvmconf 'devices/issue_discards = 1'
# Delay PVs so that resynchronization doesn't fill too much space
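# (delay_dev arguments: 0 ms read delay, 10 ms write delay, applied from the
# first extent onward so the PV metadata area in front of it is not slowed down)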
for device in $(< DEVICES)
do
	aux delay_dev "$device" 0 10 "$(get first_extent_sector "$device")"
done
# bz837927 START
#
# Create large RAID LVs
#
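# --nosync skips the initial resynchronization, so the huge arrays come up
# instantly and raid_leg_status reports every leg as 'A' (in-sync);
# legs that are still synchronizing would show as 'a'.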
# 200 TiB raid1
lvcreate --type raid1 -m 1 -L 200T -n $lv1 $vg1 --nosync
check lv_field $vg1/$lv1 size "200.00t"
check raid_leg_status $vg1 $lv1 "AA"
lvremove -ff $vg1
# 1 PiB raid1
lvcreate --type raid1 -m 1 -L 1P -n $lv1 $vg1 --nosync
check lv_field $vg1/$lv1 size "1.00p"
check raid_leg_status $vg1 $lv1 "AA"
lvremove -ff $vg1
# 750 TiB raid4/5
for segtype in $segtypes; do
	lvcreate --type $segtype -i 3 -L 750T -n $lv1 $vg1 --nosync
	check lv_field $vg1/$lv1 size "750.00t"
	check raid_leg_status $vg1 $lv1 "AAAA"
	lvremove -ff $vg1
done
#
# Extending large 200 TiB RAID LV to 400 TiB (does this belong in a different script?)
#
lvcreate --type raid1 -m 1 -L 200T -n $lv1 $vg1 --nosync
check lv_field $vg1/$lv1 size "200.00t"
check raid_leg_status $vg1 $lv1 "AA"
lvextend -L +200T $vg1/$lv1
check lv_field $vg1/$lv1 size "400.00t"
check raid_leg_status $vg1 $lv1 "AA"
lvremove -ff $vg1
# Check --nosync is rejected for raid6
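# (raid6 needs its P/Q parity initialized by a real sync, so lvcreate
# refuses --nosync for it)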
if aux have_raid 1 9 0 ; then
	not lvcreate --type raid6 -i 3 -L 750T -n $lv1 $vg1 --nosync
fi
# 750 TiB raid6
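# (created without --nosync, so all five legs are still resynchronizing: 'a')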
lvcreate --type raid6 -i 3 -L 750T -n $lv1 $vg1
check lv_field $vg1/$lv1 size "750.00t"
check raid_leg_status $vg1 $lv1 "aaaaa"
lvremove -ff $vg1
# 1 PiB raid6, then extend up to 2 PiB
lvcreate --type raid6 -i 3 -L 1P -n $lv1 $vg1
check lv_field $vg1/$lv1 size "1.00p"
check raid_leg_status $vg1 $lv1 "aaaaa"
lvextend -L +1P $vg1/$lv1
check lv_field $vg1/$lv1 size "2.00p"
check raid_leg_status $vg1 $lv1 "aaaaa"
lvremove -ff $vg1
#
# Convert large 200 TiB linear to RAID1 (does this belong in a different test script?)
#
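# With dm-raid < 1.9.0 the upconvert triggers a full "resync" (both legs 'a');
# with 1.9.0+ only the new image is rebuilt ("recover"), so the original
# leg stays in-sync ('Aa'); see the version check below.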
lvcreate -aey -L 200T -n $lv1 $vg1
lvconvert -y --type raid1 -m 1 $vg1/$lv1
check lv_field $vg1/$lv1 size "200.00t"
if aux have_raid 1 9 0; then
	# The 1.9.0 version of dm-raid is capable of performing
	# linear -> RAID1 upconverts as "recover" not "resync".
	# The LVM code now checks the dm-raid version when
	# upconverting and, if 1.9.0+ is found, it uses "recover".
	check raid_leg_status $vg1 $lv1 "Aa"
else
	check raid_leg_status $vg1 $lv1 "aa"
fi
lvremove -ff $vg1
# bz837927 END
vgremove -ff $vg1