mirror of git://sourceware.org/git/lvm2.git synced 2024-12-22 17:35:59 +03:00
lvm2/udev/69-dm-lvm-metad.rules.in
# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
# Udev rules for LVM.
#
# Scan all block devices having a PV label for LVM metadata.
# Store this information in LVMetaD (the LVM metadata daemon) and maintain LVM
# metadata state for improved performance by avoiding further scans while
# running subsequent LVM commands or while using the lvm2app library.
# Also, notify LVMetaD about any relevant block device removal.
#
# This rule is essential for keeping the information in LVMetaD up-to-date.
# It also requires blkid to have been called on block devices beforehand, so
# that only devices used as LVM PVs are processed (ID_FS_TYPE="LVM2_member"
# or "LVM1_member").
SUBSYSTEM!="block", GOTO="lvm_end"
(LVM_EXEC_RULE)
ENV{DM_NOSCAN}=="1", GOTO="lvm_end"
# If the PV label got lost, inform lvmetad immediately.
# Detect a lost PV label by comparing the previous ID_FS_TYPE value with the current one.
ENV{.ID_FS_TYPE_NEW}="$env{ID_FS_TYPE}"
IMPORT{db}="ID_FS_TYPE"
ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", ENV{.ID_FS_TYPE_NEW}!="LVM2_member|LVM1_member", ENV{LVM_PV_GONE}="1"
ENV{ID_FS_TYPE}="$env{.ID_FS_TYPE_NEW}"
ENV{LVM_PV_GONE}=="1", GOTO="lvm_scan"
# Only process devices already marked as a PV - this requires blkid to be called before.
ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
ENV{DM_MULTIPATH_DEVICE_PATH}=="1", GOTO="lvm_end"
# Inform lvmetad about any PV that is gone.
ACTION=="remove", GOTO="lvm_scan"
# If the PV is one of the special devices listed below, scan only if the
# device is properly activated. These devices are not usable right after an
# ADD event: they require extra setup and become ready only after a CHANGE
# event. Also support coldplugging via an ADD event, but only if the device
# has already been properly activated.
# DM device:
KERNEL!="dm-[0-9]*", GOTO="next"
ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}=="1", GOTO="lvm_scan"
GOTO="lvm_end"
# MD device:
LABEL="next"
KERNEL!="md[0-9]*", GOTO="next"
IMPORT{db}="LVM_MD_PV_ACTIVATED"
ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
GOTO="lvm_end"
# Loop device:
LABEL="next"
KERNEL!="loop[0-9]*", GOTO="next"
ACTION=="add", ENV{LVM_LOOP_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_LOOP_PV_ACTIVATED}!="1", TEST=="loop/backing_file", ENV{LVM_LOOP_PV_ACTIVATED}="1", GOTO="lvm_scan"
GOTO="lvm_end"
# If the PV is not a special device listed above, scan only after device addition (ADD event)
LABEL="next"
ACTION!="add", GOTO="lvm_end"
LABEL="lvm_scan"
# The table below summarises the situations in which we reach the LABEL="lvm_scan".
# Marked with X; X* means only if the special dev is properly set up.
# The artificial ADD is supported for coldplugging. We avoid running the pvscan
# on an artificial CHANGE so there's no unexpected autoactivation when the WATCH
# rule fires.
# N.B. MD and loop devices never actually reach lvm_scan on REMOVE, as the PV label
# is already gone within a CHANGE event (these are caught by the "LVM_PV_GONE"
# rule at the beginning).
#
# | real ADD | real CHANGE | artificial ADD | artificial CHANGE | REMOVE
# =============================================================================
# DM | | X | X* | | X
# MD | | X | X* | |
# loop | | X | X* | |
# other | X | | X | | X
RUN+="(LVM_EXEC)/lvm pvscan --background --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"
LABEL="lvm_end"