lvm2/udev/69-dm-lvm-metad.rules.in

# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
# Udev rules for LVM.
#
# Scan all block devices having a PV label for LVM metadata.
# Store this information in LVMetaD (the LVM metadata daemon) and maintain the
# LVM metadata state, which improves performance by avoiding further scans
# while running subsequent LVM commands or while using the lvm2app library.
# Also, notify LVMetaD about any relevant block device removal.
#
# These rules are essential for keeping the information in LVMetaD up-to-date.
# They also require blkid to have been run on each block device beforehand, so
# that only devices used as LVM PVs are processed (ID_FS_TYPE="LVM2_member" or
# "LVM1_member").
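#
# As a quick sanity check (the device name below is only an example), the
# filesystem type blkid recorded for a device can be read back from the udev
# database:
#
#   udevadm info --query=property --name=/dev/sda | grep ID_FS_TYPE
#
# A device holding a PV label reports ID_FS_TYPE=LVM2_member.
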
SUBSYSTEM!="block", GOTO="lvm_end"
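
# (LVM_EXEC_RULE) here and (LVM_EXEC) in the RUN command below are
# placeholders that are substituted at build time when this .in template is
# turned into the final rules file.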
(LVM_EXEC_RULE)

# Only process devices already marked as a PV - this requires blkid to have
# been run beforehand.
ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
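
# On device removal, go straight to the scan so that LVMetaD is notified and
# drops the metadata state held for the device.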
ACTION=="remove", GOTO="lvm_scan"
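
# MD devices signal a fully assembled and started array with a CHANGE event,
# so for md* devices the scan is also triggered on CHANGE.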
ACTION=="change", KERNEL=="md[0-9]*", GOTO="lvm_scan"

# If the PV is not a dm device, scan only after device addition (ADD event)
KERNEL!="dm-[0-9]*", ACTION!="add", GOTO="lvm_end"
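# (This deliberately ignores CHANGE events on non-dm PVs, such as those
# generated by the OPTIONS+="watch" rule in udev's 60-persistent-storage.rules
# after a device opened for write is closed; acting on them could autoactivate
# LVs that were just deactivated. A boot-time coldplug retrigger such as
# "udevadm trigger --action=add" still produces ADD events and is handled.)
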
# If the PV is a dm device, scan only after proper mapping activation (CHANGE event + DM_ACTIVATION=1)
# or after a coldplug (event retrigger) with "add" event (ADD event + DM_ACTIVATION=1)
KERNEL=="dm-[0-9]*", ENV{DM_ACTIVATION}!="1", GOTO="lvm_end"
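# (DM_ACTIVATION is set to 1 by the accompanying device-mapper udev rules when
# a mapping has just been properly activated; the current value for a device
# can be inspected with, for example:
#   udevadm info --query=property --name=/dev/dm-0 | grep DM_ACTIVATION)
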
LABEL="lvm_scan"
RUN+="(LVM_EXEC)/lvm pvscan --cache --activate ay --major $major --minor $minor"
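# The RUN command above can also be invoked by hand for a single device, e.g.
# for device major 8, minor 0 (commonly /dev/sda):
#
#   lvm pvscan --cache --activate ay --major 8 --minor 0
#
# udev substitutes $major and $minor with the device numbers of the device
# that triggered the event.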
LABEL="lvm_end"