
activation: fix autoactivation to not trigger on each PV change

Before this change, pvscan --cache -aay was called on each ADD and CHANGE
uevent for a device that is not a device-mapper device, and on each CHANGE
uevent for a PV that is a device-mapper device.
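
The uevents in question can be observed while reproducing the problem, for
example with udevadm monitor (a rough illustration only; the exact output
format depends on the udev version used):

[root@rhel6-a ~]# udevadm monitor --udev --subsystem-match=block
  (prints each ADD/CHANGE/REMOVE uevent for a block device as udev finishes
  processing it)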

This causes trouble with autoactivation in some cases because a CHANGE event
may also originate from the OPTIONS+="watch" udev rule defined in
60-persistent-storage.rules (part of the rules provided by udev itself),
which is applied to all block devices except
fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md* devices. For example, the following
sequence incorrectly activates the remaining LVs in a VG when one of the LVs
in that VG is being removed:

[root@rhel6-a ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created

[root@rhel6-a ~]# vgcreate vg /dev/sda
  Volume group "vg" successfully created

[root@rhel6-a ~]# lvcreate -l1 vg
  Logical volume "lvol0" created

[root@rhel6-a ~]# lvcreate -l1 vg
  Logical volume "lvol1" created

[root@rhel6-a ~]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active

[root@rhel6-a ~]# lvs
  LV      VG        Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0   vg        -wi------   4.00m
  lvol1   vg        -wi------   4.00m

[root@rhel6-a ~]# lvremove -ff vg/lvol1
  Logical volume "lvol1" successfully removed

[root@rhel6-a ~]# lvs
  LV      VG        Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0   vg        -wi-a----   4.00m

...so the VG was deactivated and lvol1 removed, and we end up with lvol1
removed (which is OK) but with lvol0 activated (which is wrong)!
This happens because removing lvol1 requires writing the metadata to the
underlying device /dev/sda, which generates a CHANGE event (because of the
watch udev rule set on this device), and that in turn causes
pvscan --cache -aay to be reevaluated.
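
For reference, the watch rule responsible for these CHANGE events looks
roughly like this in 60-persistent-storage.rules (the exact kernel match
list differs between udev versions):

  KERNEL!="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*", OPTIONS+="watch"

The "watch" option makes udev emit a synthetic CHANGE event whenever the
device is closed after having been opened for writing - which is exactly
what happens when LVM writes the metadata.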

We have to limit this and call pvscan --cache -aay to autoactivate
VGs/LVs only in these cases:

 --> if the *PV is not a dm device*, scan only after proper device
     addition (ADD event) and not on any other change (CHANGE event)

 --> if the *PV is a dm device*, scan only after proper mapping
     activation (CHANGE event + the underlying PV in a state "just
     activated")

Author: Peter Rajnoha
Date:   2012-12-21 10:34:48 +01:00
Commit: 756bcabbfe (parent: 970dfbcd69)

3 changed files with 14 additions and 5 deletions

WHATS_NEW

@@ -1,5 +1,6 @@
 Version 2.02.99 -
 ===================================
+  Fix autoactivation to not autoactivate VG/LV on each change of the PVs used.
   Limit RAID device replacement to repair only if LV is not in-sync.
   Disallow RAID device replacement or repair on inactive LVs.
   Fix possible race while removing metadata from lvmetad.

udev/10-dm.rules.in

@@ -45,7 +45,7 @@ ENV{DISK_RO}=="1", GOTO="dm_disable"
 # in libdevmapper so we need to detect this and try to behave correctly.
 # For such spurious events, regenerate all flags from current udev database content
 # (this information would normally be inaccessible for spurious ADD and CHANGE events).
-ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", GOTO="dm_flags_done"
+ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}="1", GOTO="dm_flags_done"
 IMPORT{db}="DM_UDEV_DISABLE_DM_RULES_FLAG"
 IMPORT{db}="DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG"
 IMPORT{db}="DM_UDEV_DISABLE_DISK_RULES_FLAG"

udev/69-dm-lvm-metad.rules.in

@@ -17,10 +17,18 @@
 SUBSYSTEM!="block", GOTO="lvm_end"
 (LVM_EXEC_RULE)
+# Device-mapper devices are processed only on change event or on supported synthesized event.
 KERNEL=="dm-[0-9]*", ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="lvm_end"
 # Only process devices already marked as a PV - this requires blkid to be called before.
-ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", RUN+="(LVM_EXEC)/lvm pvscan --cache --activate ay --major $major --minor $minor"
+ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
+ACTION=="remove", GOTO="lvm_scan"
+# If the PV is not a dm device, scan only after device addition (ADD event)
+KERNEL!="dm-[0-9]*", ACTION!="add", GOTO="lvm_end"
+# If the PV is a dm device, scan only after proper mapping activation (CHANGE event + DM_ACTIVATION=1)
+KERNEL=="dm-[0-9]*", ENV{DM_ACTIVATION}!="1", GOTO="lvm_end"
+LABEL="lvm_scan"
+RUN+="(LVM_EXEC)/lvm pvscan --cache --activate ay --major $major --minor $minor"
 LABEL="lvm_end"