mirror of git://sourceware.org/git/lvm2.git synced 2025-02-01 09:47:48 +03:00
lvm2/udev/69-dm-lvm-metad.rules.in

40 lines, 1.6 KiB

# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
# Udev rules for LVM.
#
# Scan all block devices having a PV label for LVM metadata.
# Store this information in LVMetaD (the LVM metadata daemon) and maintain LVM
# metadata state for improved performance by avoiding further scans while
# running subsequent LVM commands or while using the lvm2app library.
# Also, notify LVMetaD about any relevant block device removal.
#
# This rule is essential for keeping the information in LVMetaD up-to-date.
# It requires blkid to have been run on block devices beforehand so that only
# devices used as LVM PVs are processed (ID_FS_TYPE="LVM2_member" or "LVM1_member").
SUBSYSTEM!="block", GOTO="lvm_end"
(LVM_EXEC_RULE)
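The ID_FS_TYPE checks below rely on the built-in blkid having been run by an earlier (lower-numbered) rules file. A minimal sketch of such a rule follows; this is illustrative only, not the exact upstream line from 60-persistent-storage.rules:

```
# Illustrative sketch: an earlier rules file must run the built-in blkid
# so that ID_FS_TYPE is populated before this file is processed.
SUBSYSTEM=="block", IMPORT{builtin}="blkid"
```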
udev: inform lvmetad about lost PV label

In a stacked environment where a PV is layered on top of a snapshot LV and
the LV is then removed, lvmetad still keeps information about the PV:

[0] raw/~ $ pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[0] raw/~ $ vgcreate vg /dev/sda
  Volume group "vg" successfully created
[0] raw/~ $ lvcreate -L32m vg
  Logical volume "lvol0" created
[0] raw/~ $ lvcreate -L32m -s vg/lvol0
  Logical volume "lvol1" created
[0] raw/~ $ pvcreate /dev/vg/lvol1
  Physical volume "/dev/vg/lvol1" successfully created
[0] raw/~ $ lvremove -ff vg/lvol1
  Logical volume "lvol1" successfully removed
[0] raw/~ $ pvs
  No device found for PV BdNlu2-7bHV-XcIp-mFFC-PPuR-ef6K-yffdzO.
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda   vg   lvm2 a--  124.00m 92.00m
[0] raw/~ $ pvscan --cache --major 253 --minor 3
  Device 253:3 not found. Cleared from lvmetad cache.

This happens because of the reactivation done just before snapshot removal as
part of the process (vg/lvol1 in the example above). It generates a CHANGE
event, but any scan done on the LV no longer sees the original data (in this
case the stacked PV label on top), so ID_FS_TYPE="LVM2_member" (provided by
the blkid scan) is no longer stored in the udev db for the LV. Consequently,
pvscan --cache is not run anymore, as the device is no longer identified as
an LVM PV by the "LVM2_member" id - lvmetad loses this information and still
keeps stale records about the PV.

We can run into a very similar problem by erasing the PV label directly:

[0] raw/~ $ lvcreate -L32m vg
  Logical volume "lvol0" created
[0] raw/~ $ pvcreate /dev/vg/lvol0
  Physical volume "/dev/vg/lvol0" successfully created
[0] raw/~ $ dd if=/dev/zero of=/dev/vg/lvol0 bs=1M
  dd: error writing '/dev/vg/lvol0': No space left on device
  33+0 records in
  32+0 records out
  33554432 bytes (34 MB) copied, 0.380921 s, 88.1 MB/s
[0] raw/~ $ pvs
  PV            VG   Fmt  Attr PSize   PFree
  /dev/sda      vg   lvm2 a--  124.00m 92.00m
  /dev/vg/lvol0      lvm2 a--   32.00m 32.00m
[0] raw/~ $ pvscan --cache --major 253 --minor 2
  No PV label found on /dev/vg/lvol0.

This patch adds detection of the change from ID_FS_TYPE="LVM2_member" to
ID_FS_TYPE="<whatever_else>" and hence informs lvmetad about the PV being
gone.
2013-08-26 15:27:00 +02:00
# If the PV label got lost, inform lvmetad about it.
ENV{DM_ID_FS_TYPE_OLD}=="LVM2_member", ENV{DM_ID_FS_TYPE}!="LVM2_member", GOTO="lvm_scan"
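The DM_ID_FS_TYPE_OLD/DM_ID_FS_TYPE comparison above assumes earlier device-mapper rules preserved the previous blkid result across uevents. A hypothetical sketch of how that could be done follows; these lines are illustrative only, not the actual LVM2 dm rules:

```
# Hypothetical sketch (not the actual LVM2 rule lines): keep the previous
# scan result so this file can compare old vs. new ID_FS_TYPE.
IMPORT{db}="DM_ID_FS_TYPE"                      # value stored by the previous uevent
ENV{DM_ID_FS_TYPE_OLD}="$env{DM_ID_FS_TYPE}"    # remember it for the comparison
IMPORT{builtin}="blkid"                         # fresh scan refreshes ID_FS_TYPE
ENV{DM_ID_FS_TYPE}="$env{ID_FS_TYPE}"           # record the new result
```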
activation: fix autoactivation to not trigger on each PV change

Before, pvscan --cache -aay was called on each ADD and CHANGE uevent for a
device that is not a device-mapper device, and on each CHANGE uevent for a PV
that is a device-mapper device. This causes trouble with autoactivation in
some cases, because a CHANGE event may originate from the OPTIONS+="watch"
udev rule defined in 60-persistent-storage.rules (part of the rules provided
directly by udev), which is applied to all block devices (except
fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md* devices).

For example, the following sequence incorrectly activates the rest of the LVs
in a VG when one of the LVs in that VG is being removed:

[root@rhel6-a ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[root@rhel6-a ~]# vgcreate vg /dev/sda
  Volume group "vg" successfully created
[root@rhel6-a ~]# lvcreate -l1 vg
  Logical volume "lvol0" created
[root@rhel6-a ~]# lvcreate -l1 vg
  Logical volume "lvol1" created
[root@rhel6-a ~]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active
[root@rhel6-a ~]# lvs
  LV    VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0 vg   -wi------ 4.00m
  lvol1 vg   -wi------ 4.00m
[root@rhel6-a ~]# lvremove -ff vg/lvol1
  Logical volume "lvol1" successfully removed
[root@rhel6-a ~]# lvs
  LV    VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0 vg   -wi-a---- 4.00m

...so the VG was deactivated, then lvol1 removed, and we end up with lvol1
removed (which is OK) BUT with lvol0 activated (which is wrong)! This is
because after the lvol1 removal we need to write metadata to the underlying
device /dev/sda, which generates a CHANGE event (because of the WATCH udev
rule set on this device), and that causes pvscan --cache -aay to be
reevaluated.

We have to limit this and call pvscan --cache -aay to autoactivate VGs/LVs
only in these cases:
  - if the PV is *not a dm device*, scan only after proper device addition
    (ADD event) and not on any other change (CHANGE event)
  - if the PV *is a dm device*, scan only after proper mapping activation
    (CHANGE event + the underlying PV in a "just activated" state)
2012-12-21 10:34:48 +01:00
# Only process devices already marked as a PV - this requires blkid to be called before.
ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
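The spurious CHANGE events discussed in the commit message above come from udev's watch mechanism. Paraphrasing the exclusion list quoted in that message, the relevant rule in 60-persistent-storage.rules looks roughly like this (a sketch, not the exact upstream line):

```
# Paraphrased sketch of udev's watch rule: the kernel emits a synthetic
# CHANGE uevent whenever a device that was open for writing is closed.
KERNEL!="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*", OPTIONS+="watch"
```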
ACTION=="remove", GOTO="lvm_scan"
ACTION=="change", KERNEL=="md[0-9]*|loop[0-9]*", GOTO="lvm_scan"
# If the PV is not a dm device, scan only after device addition (ADD event)
KERNEL!="dm-[0-9]*", ACTION!="add", GOTO="lvm_end"
udev: also autoactivate on coldplug

Commit 756bcabbfe297688ba240a880bc2b55265ad33f0 fixed autoactivation so that
it no longer triggers on every uevent for a PV that appeared in the system,
most notably events triggered artificially (udevadm trigger, or CHANGE
uevents generated as a result of the WATCH udev rule being applied). That
fixed a situation in which VGs/LVs were activated when they should not be.

BUT we still need to care about the coldplug used at boot to retrigger ADD
events - the "udevadm trigger --action=add"! For non-DM-based PVs this is
already covered, since for those we run the autoactivation on the ADD event
only. However, for DM-based PVs we still need to run the autoactivation even
for the artificial ADD event, reusing the udev DB content from the previous
proper CHANGE event that came with the DM device activation.

Simply put, this patch fixes the situation in which an extra "udevadm trigger
--action=add" (or echo add > /sys/block/<dev>/uevent) is run for DM-based PVs
(cryptsetup devices, multipath devices, any other DM devices...). Without
this patch, while using lvmetad + autoactivation, any VG/LV that has a
DM-based PV and for which we do not call the activation directly is not
activated. For example: a VG with an LV holding the root FS which is
activated directly in the initrd, and then the rest of the LVs in that VG
missing activation because of the unhandled uevent retrigger on boot after
switching to the root FS (the "coldplug").

(No WHATS_NEW entry here as this fixes the commit mentioned above, which has
not been released yet.)
2013-04-19 12:17:53 +02:00
# If the PV is a dm device, scan only after proper mapping activation (CHANGE event + DM_ACTIVATION=1)
# or after a coldplug (event retrigger) with "add" event (ADD event + DM_ACTIVATION=1)
KERNEL=="dm-[0-9]*", ENV{DM_ACTIVATION}!="1", GOTO="lvm_end"
LABEL="lvm_scan"
RUN+="(LVM_EXEC)/lvm pvscan --cache --activate ay --major $major --minor $minor"
LABEL="lvm_end"
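To make the GOTO chain above easier to audit, the same filtering logic can be re-expressed as a plain POSIX shell function. This is a sketch only: `lvm_metad_filter` is a hypothetical name, the real matching is done by udev itself, and the function merely prints "scan" when the rules would reach lvm_scan and "skip" when they would jump to lvm_end.

```shell
#!/bin/sh
# Sketch (not the real udev engine): the decision chain of
# 69-dm-lvm-metad.rules re-expressed as a shell function.
# Args: SUBSYSTEM ACTION KERNEL ID_FS_TYPE DM_ACTIVATION
#       DM_ID_FS_TYPE_OLD DM_ID_FS_TYPE
# Prints "scan" if pvscan --cache would run, "skip" otherwise.
lvm_metad_filter() {
    subsystem=$1 action=$2 kernel=$3 fstype=$4
    dm_act=$5 old=$6 new=$7

    # SUBSYSTEM!="block" -> lvm_end
    if [ "$subsystem" != "block" ]; then echo skip; return; fi

    # Lost PV label: previous scan saw a PV, current one does not.
    if [ "$old" = "LVM2_member" ] && [ "$new" != "LVM2_member" ]; then
        echo scan; return
    fi

    # Only devices already identified as PVs are processed.
    case $fstype in
    LVM2_member|LVM1_member) ;;
    *) echo skip; return ;;
    esac

    # Removals always notify lvmetad.
    if [ "$action" = "remove" ]; then echo scan; return; fi

    # md and loop devices signal readiness via CHANGE events.
    if [ "$action" = "change" ]; then
        case $kernel in
        md[0-9]*|loop[0-9]*) echo scan; return ;;
        esac
    fi

    case $kernel in
    dm-[0-9]*)
        # dm device: scan only when the mapping was just activated.
        if [ "$dm_act" = "1" ]; then echo scan; else echo skip; fi ;;
    *)
        # non-dm device: scan only on proper addition (ADD event).
        if [ "$action" = "add" ]; then echo scan; else echo skip; fi ;;
    esac
}
```

Note how the lost-label check deliberately sits before the ID_FS_TYPE filter: a device that just lost its PV label would otherwise be filtered out and lvmetad would never hear about it.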