device usage based on devices file
The LVM devices file lists devices that lvm can use. The default
file is /etc/lvm/devices/system.devices, and the lvmdevices(8)
command is used to add or remove device entries. If the file
does not exist, or if lvm.conf includes use_devicesfile=0, then
lvm will not use a devices file. When the devices file is in use,
the regex filter is not used, and the filter settings in lvm.conf
or on the command line are ignored.
LVM records devices in the devices file using hardware-specific
IDs, such as the WWID, and attempts to use subsystem-specific
IDs for virtual device types. These device IDs are also written
in the VG metadata. When no hardware or virtual ID is available,
lvm falls back to using the unstable device name as the device ID.
When devnames are used, lvm performs extra scanning to find
devices if their devname changes, e.g. after reboot.
When proper device IDs are used, an lvm command will not look
at devices outside the devices file, but when devnames are used
as a fallback, lvm will scan devices outside the devices file
to locate PVs on renamed devices. A config setting
search_for_devnames can be used to control the scanning for
renamed devname entries.
Related to the devices file, the new command option
--devices <devnames> allows a list of devices to be specified for
the command to use, overriding the devices file. The listed
devices act as a sort of devices file in terms of limiting which
devices lvm will see and use. Devices that are not listed will
appear to be missing to the lvm command.
Multiple devices files can be kept in /etc/lvm/devices, which
allows lvm to be used with different sets of devices, e.g.
system devices do not need to be exposed to a specific application,
and the application can use lvm on its own set of devices that are
not exposed to the system. The option --devicesfile <filename> is
used to select the devices file to use with the command. Without
the option set, the default system devices file is used.
Setting --devicesfile "" causes lvm to not use a devices file.
An existing, empty devices file means lvm will see no devices.
The new command vgimportdevices adds PVs from a VG to the devices
file and updates the VG metadata to include the device IDs.
vgimportdevices -a will import all VGs into the system devices file.
LVM commands run by dmeventd do not use a devices file by default,
and will look at all devices on the system. A devices file can
be created for dmeventd (/etc/lvm/devices/dmeventd.devices). If
this file exists, lvm commands run by dmeventd will use it.
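To make the above concrete, a devices file entry can be pictured as
carrying roughly the following fields. This is an illustrative sketch
only; the names are simplified and are not the exact definition of the
struct dev_use entries described below.

struct device;	/* lvm's in-memory /dev entry */

struct devices_file_entry_sketch {
	const char *idtype;	/* kind of device ID, e.g. a WWID type, or devname as fallback */
	const char *idname;	/* the device ID value itself, e.g. the WWID string */
	const char *devname;	/* last known /dev name; unstable across reboots */
	const char *pvid;	/* PVID, also recorded in the VG metadata */
	struct device *dev;	/* set once the entry is matched to a /dev entry */
};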
Internal implementation (a simplified sketch of the matching step follows this list):
- device_ids_read - read the devices file
. add struct dev_use (du) to cmd->use_devices for each devices file entry
- dev_cache_scan - get /dev entries
. add struct device (dev) to dev_cache for each device on the system
- device_ids_match - match devices file entries to /dev entries
. match each du on cmd->use_devices to a dev in dev_cache, using device ID
. on match, set du->dev, dev->id, dev->flags MATCHED_USE_ID
- label_scan - read lvm headers and metadata from devices
. filters are applied, those that do not need data from the device
. filter-deviceid skips devs without MATCHED_USE_ID, i.e.
skips /dev entries that are not listed in the devices file
. read lvm label from dev
. filters are applied, those that use data from the device
. read lvm metadata from dev
. add info/vginfo structs for PVs/VGs (info is "lvmcache")
- device_ids_find_renamed_devs - handle devices with unstable devname ID
where devname changed
. this step is only needed when devs do not have proper device IDs,
and their dev names change, e.g. after reboot sdb becomes sdc.
. detect incorrect match because PVID in the devices file entry
does not match the PVID found when the device was read above
. undo incorrect match between du and dev above
. search system devices for new location of PVID
. update devices file with new devnames for PVIDs on renamed devices
. label_scan the renamed devs
- continue with command processing
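The matching step above can be sketched as follows. This is only a
simplified illustration of what device_ids_match does, based on the
summary above; the real function also handles ID types, duplicates and
the devname fallback, and the lookup helper used here is hypothetical.

/* Hypothetical helper: find the dev_cache entry whose device ID matches du. */
static struct device *_lookup_dev_by_id_sketch(struct cmd_context *cmd, struct dev_use *du);

static void _device_ids_match_sketch(struct cmd_context *cmd)
{
	struct dev_use *du;
	struct device *dev;

	dm_list_iterate_items(du, &cmd->use_devices) {
		if (!(dev = _lookup_dev_by_id_sketch(cmd, du)))
			continue;		/* entry stays unmatched for now */

		du->dev = dev;			/* devices file entry -> /dev entry */
		dev->flags |= MATCHED_USE_ID;	/* filter-deviceid will pass this dev */
	}
}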
/*
 * Copyright (C) 2020 Red Hat, Inc. All rights reserved.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */
# include "tools.h"
# include "lib/cache/lvmcache.h"
# include "lib/device/device_id.h"
skip indexing devices used by LVs in more commands
expands commit d5a06f9a7df5a43b2e2311db62ff8d3011217d74
"pvscan: skip indexing devices used by LVs"
The dev cache index is expensive and slow, so limit it
to commands that are used to observe the state of lvm.
The index is only used to print warnings about incorrect
device use by active LVs, e.g. if an LV is using a
multipath component device instead of the multipath
device. Commands that continue to use the index and
print the warnings:
fullreport, lvmdiskscan, vgs, lvs, pvs,
vgdisplay, lvdisplay, pvdisplay,
vgscan, lvscan, pvscan (excluding --cache)
A couple other commands were borrowing the DEV_USED_FOR_LV
flag to just check if a device was actively in use by LVs.
These are converted to the new dev_is_used_by_active_lv().
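A rough sketch of that call-site conversion (the helper name comes from
the text above, but the signature shown is simplified and should be read
as an assumption rather than the real prototype):

static void _warn_if_in_use_sketch(struct cmd_context *cmd, struct device *dev)
{
	/* Old pattern: relied on the expensive dev cache index having run
	 * and set DEV_USED_FOR_LV in dev->flags:
	 *     if (dev->flags & DEV_USED_FOR_LV) ...
	 *
	 * New pattern: ask directly whether an active LV uses the device,
	 * without building the index first (signature simplified here). */
	if (dev_is_used_by_active_lv(cmd, dev))
		log_warn("WARNING: %s is in use by an active LV.", dev_name(dev));
}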
# include "lib/device/dev-type.h"
/* coverity[unnecessary_header] needed for MuslC */
#include <sys/file.h>
add activation services
New systemd services for startup:
lvm-devices-wait.service
Used in place of systemd-udev-settle, this service waits
for udev+pvscan to process PVs listed in system.devices.
It runs the command "lvmdevices --wait pvsonline".
This only waits for PVs that can be matched to a device in
sysfs, so it only waits for devices attached to the system.
It waits specifically for the /run/lvm/pvs_online/<pvid>
files to be created by pvscan. It quits waiting after a
configurable number of seconds. This service gives the
first activation service a chance to activate VGs from
PVs that are available immediately at startup. If this
service quits waiting before all the expected pvid files
appear, then the VG associated with those PVs will most
likely be activated by the -last service rather than the
initial -main service. If those PVs are even slower to
complete processing than the -last service, then the VG
will be activated by event activation whenever they are
finally complete.
lvm-activate-vgs-main.service
Calls "vgchange -aay", after lvm-devices-wait, to activate
complete VGs. It only considers PVs that have been
processed by udev+pvscan and have pvs_online files.
This is expected to activate VGs from basic devices
(not virtual device types) that are present at startup.
lvm-activate-vgs-last.service
Calls "vgchange -aay", after multipathd has started, to
activate VGs that became available after virtual device
services were started, e.g. VGs on multipath devices.
Like -main, it only looks at PVs that have been processed
by pvscan.
This vgchange in the -last service enables event activation
by creating the /run/lvm/event-activation-on file. Event
activation will activate any further VGs that appear on the
system (or complete udev processing) after the -last service.
In the case of event activation, the udev rule will run
vgchange -aay <vgname> via a transient service
lvm-activate-<vgname>.service. This vgchange only scans
PVs in the VG being activated, also based on the pvs_online
files from pvscan.
When there are many VGs that need activation during system
startup, the two fixed services can activate them all much
faster than activating each VG individually via events.
lvm.conf auto_activation_settings can be used to configure
the behavior (default ["service_and_event", "pvscan_hints"]).
"service_and_event" - the behavior described above, where
activation services are used first, and event activation
is used afterward.
"service_only" - only lvm-activate-vgs-* are used, and
no event-based activation occurs after the services finish.
(Equivalent to setting lvm.conf event_activation=0.)
"event_only" - the lvm-activate-vgs* services are skipped,
and all VGs are activated individually with event-based
activation.
"pvscan_hints" - the vgchange autoactivation commands
use pvs_online files created by pvscan. This optimization
limits the devices scanned by the vgchange command to only
PVs that have been processed by pvscan.
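For illustration, checking one of the /run/lvm/pvs_online/<pvid> files
mentioned above could look roughly like this. This is a sketch only, not
lvm's actual helper (the code below uses online_pvid_file_exists()); the
path layout is taken directly from the description above.

#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>

/* Return 1 if pvscan has already created the online file for this PVID. */
static int _pvid_online_sketch(const char *pvid)
{
	char path[PATH_MAX];
	struct stat st;
	int n;

	n = snprintf(path, sizeof(path), "/run/lvm/pvs_online/%s", pvid);
	if (n < 0 || n >= (int) sizeof(path))
		return 0;

	return stat(path, &st) == 0;
}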
#include <time.h>
static void _search_devs_for_pvids(struct cmd_context *cmd, struct dm_list *search_pvids, struct dm_list *found_devs)
{
	struct dev_iter *iter;
	struct device *dev;
	struct device_list *devl, *devl2;
	struct device_id_list *dil, *dil2;
	struct dm_list devs;
	int found;

	dm_list_init(&devs);

	/*
	 * Create a list of all devices on the system, without applying
	 * any filters, since we do not want filters to read any of the
	 * devices yet.
	 */
	if (!(iter = dev_iter_create(NULL, 0)))
		return;
	while ((dev = dev_iter_get(cmd, iter))) {
		/* Skip devs with a valid match to a du. */
		if (get_du_for_dev(cmd, dev))
			continue;

		if (!(devl = dm_pool_zalloc(cmd->mem, sizeof(*devl))))
			continue;
		devl->dev = dev;
		dm_list_add(&devs, &devl->list);
	}
	dev_iter_destroy(iter);

	/*
	 * Apply the filters that do not require reading the devices.
	 */
	log_debug("Filtering devices (no data) for pvid search");
	cmd->filter_nodata_only = 1;
	cmd->filter_deviceid_skip = 1;
	dm_list_iterate_items_safe(devl, devl2, &devs) {
		if (!cmd->filter->passes_filter(cmd, cmd->filter, devl->dev, NULL))
			dm_list_del(&devl->list);
	}

	/*
	 * Read the header from each dev to see if it has one of the pvids
	 * we're searching for.
	 */
	dm_list_iterate_items_safe(devl, devl2, &devs) {
		int has_pvid;
		/* sets dev->pvid if an lvm label with pvid is found */
		if (!label_read_pvid(devl->dev, &has_pvid))
			continue;

		if (!has_pvid)
			continue;

		found = 0;

		dm_list_iterate_items_safe(dil, dil2, search_pvids) {
			if (!strcmp(devl->dev->pvid, dil->pvid)) {
				dm_list_del(&devl->list);
				dm_list_del(&dil->list);
				dm_list_add(found_devs, &devl->list);
				log_print("Found PVID %s on %s.", dil->pvid, dev_name(devl->dev));
				found = 1;
				break;
			}
		}

		if (!found)
			label_scan_invalidate(devl->dev);

		/*
		 * FIXME: search all devs in case pvid is duplicated on multiple devs.
		 */
		if (dm_list_empty(search_pvids))
			break;
	}

	dm_list_iterate_items(dil, search_pvids)
		log_error("PVID %s not found on any devices.", dil->pvid);

	/*
	 * Now that the device has been read, apply the filters again,
	 * which will now include filters that read data from the device.
	 * N.B. we've already skipped devs that were excluded by the
	 * no-data filters, so if the PVID exists on one of those devices
	 * no warning is printed.
	 */
	log_debug("Filtering devices (with data) for pvid search");
	cmd->filter_nodata_only = 0;
	cmd->filter_deviceid_skip = 1;
	dm_list_iterate_items_safe(devl, devl2, found_devs) {
		dev = devl->dev;
		cmd->filter->wipe(cmd, cmd->filter, dev, NULL);
		if (!cmd->filter->passes_filter(cmd, cmd->filter, dev, NULL)) {
			log_warn("WARNING: PVID %s found on %s which is excluded by filter: %s",
				 dev->pvid, dev_name(dev), dev_filtered_reason(dev));
			dm_list_del(&devl->list);
		}
	}
}
static int _all_pvids_online(struct cmd_context *cmd, struct dm_list *wait_pvids)
{
	struct device_id_list *dil, *dil2;
	int notfound = 0;

	dm_list_iterate_items_safe(dil, dil2, wait_pvids) {
		if (online_pvid_file_exists(dil->pvid))
			dm_list_del(&dil->list);
		else
			notfound++;
	}

	return notfound ? 0 : 1;
}
int lvmdevices(struct cmd_context *cmd, int argc, char **argv)
{
	struct dm_list search_pvids;
	struct dm_list wait_pvids;
	struct dm_list found_devs;
	struct device_id_list *dil;
	struct device_list *devl;
	struct device *dev;
	struct dev_use *du, *du2;
	const char *deviceidtype;
	time_t begin;
	int wait_sec;
	int changes = 0;

	dm_list_init(&search_pvids);
	dm_list_init(&wait_pvids);
	dm_list_init(&found_devs);

	if (!setup_devices_file(cmd))
		return ECMD_FAILED;

	if (!cmd->enable_devices_file) {
		log_error("Devices file not enabled.");
		return ECMD_FAILED;
	}
add activation services
New systemd services for startup:
lvm-devices-wait.service
Used in place of systemd-udev-settle, this service waits
for udev+pvscan to process PVs listed in system.devices.
It runs the command "lvmdevices --wait pvsonline".
This only waits for PVs that can be matched to a device in
sysfs, so it only waits for devices attached to the system.
It waits specifically for the /run/lvm/pvs_online/<pvid>
files to be created by pvscan. It quits waiting after a
configurable number of seconds. This service gives the
first activation service a chance to activate VGs from
PVs that are available immediately at startup. If this
service quits waiting before all the expected pvid files
appear, then the VG associated with those PVs will most
likely be activated by the -last service rather than the
initial -main service. If those PVs are even slower to
complete processing than the -last service, then the VG
will be activated by event activation whenever they are
finally complete.
lvm-activate-vgs-main.service
Calls "vgchange -aay", after lvm-devices-wait, to activate
complete VGs. It only considers PVs that have been
processed by udev+pvscan and have pvs_online files.
This is expected to activate VGs from basic devices
(not virtual device types) that are present at startup.
lvm-activate-vgs-last.service
Calls "vgchange -aay", after multipathd has started, to
activate VGs that became available after virtual device
services were started, e.g. VGs on multipath devices.
Like -main, it only looks at PVs that have been processed
by pvscan.
This vgchange in the -last service enables event activation
by creating the /run/lvm/event-activation-on file. Event
activation will activate any further VGs that appear on the
system (or complete udev processing) after the -last service.
In the case of event activation, the udev rule will run
vgchange -aay <vgname> via a transient service
lvm-activate-<vgname>.service. This vgchange only scans
PVs in the VG being activated, also based on the pvs_online
files from pvscan.
When there are many VGs that need activation during system
startup, the two fixed services can activate them all much
faster than activating each VG individually via events.
lvm.conf auto_activation_settings can be used to configure
the behavior (default ["service_and_event", "pvscan_hints"]).
"service_and_event" - the behavior described above, where
activation services are used first, and event activation
is used afterward.
"service_only" - only lvm-activate-vgs-* are used, and
no event-based activation occurs after the services finish.
(Equivalent to setting lvm.conf event_activation=0.)
"event_only" - the lvm-activate-vgs* services are skipped,
and all VGs are activated individually with event-based
activation.
"pvscan_hints" - the vgchange autoactivation commands
use pvs_online files created by pvscan. This optimization
limits the devices scanned by the vgchange command to only
PVs that have been processed by pvscan.
2021-09-02 23:15:46 +03:00
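To make the waiting step concrete, here is a minimal sketch of what waiting on
a pvs_online file means. It is not the actual service or lvmdevices code: the
/run/lvm/pvs_online/<pvid> path follows the description above, while the
function name and the one-second polling loop are assumptions:

#include <limits.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

/* Sketch: poll for /run/lvm/pvs_online/<pvid> until pvscan creates it or a timeout expires. */
static int _wait_for_pvid_online(const char *pvid, unsigned timeout_secs)
{
        char path[PATH_MAX];
        struct stat st;
        unsigned i;

        snprintf(path, sizeof(path), "/run/lvm/pvs_online/%s", pvid);

        for (i = 0; i < timeout_secs; i++) {
                if (!stat(path, &st))
                        return 1;       /* udev+pvscan have processed this PV */
                sleep(1);
        }
        return 0;                       /* give up; the -last service or event activation handles it later */
}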
if (arg_is_set(cmd, wait_ARG))
        cmd->print_device_id_not_found = 0;
if (arg_is_set(cmd, update_ARG) ||
    arg_is_set(cmd, adddev_ARG) || arg_is_set(cmd, deldev_ARG) ||
    arg_is_set(cmd, addpvid_ARG) || arg_is_set(cmd, delpvid_ARG)) {
        if (!lock_devices_file(cmd, LOCK_EX)) {
                log_error("Failed to lock the devices file to create.");
                return ECMD_FAILED;
        }
        if (!devices_file_exists(cmd)) {
                if (!devices_file_touch(cmd)) {
                        log_error("Failed to create the devices file.");
                        return ECMD_FAILED;
                }
        }

        /*
         * The hint file is associated with the default/system devices file,
         * so don't clear hints when using a different --devicesfile.
         */
        if (!cmd->devicesfile)
                clear_hint_file(cmd);
} else {
        if (!lock_devices_file(cmd, LOCK_SH)) {
                log_error("Failed to lock the devices file.");
                return ECMD_FAILED;
        }
        if (!devices_file_exists(cmd)) {
                log_error("Devices file does not exist.");
                return ECMD_FAILED;
        }
}

if (!device_ids_read(cmd)) {
        log_error("Failed to read the devices file.");
        return ECMD_FAILED;
}
prepare_open_file_limit(cmd, dm_list_size(&cmd->use_devices));
dev_cache_scan(cmd);
device_ids_match(cmd);

if (arg_is_set(cmd, check_ARG) || arg_is_set(cmd, update_ARG)) {
        int search_count = 0;
        int invalid = 0;

        label_scan_setup_bcache();

        dm_list_iterate_items(du, &cmd->use_devices) {
                if (!du->dev)
                        continue;
                dev = du->dev;
                if (!label_read_pvid(dev, NULL))
                        continue;
                /*
                 * label_read_pvid has read the first 4K of the device,
                 * so these filters should not for the most part need
                 * to do any further reading of the device.
                 *
                 * We run the filters here for the first time in the
                 * check|update command. device_ids_validate() then
                 * checks the result of this filtering (by checking the
                 * "persistent" filter explicitly), and prints a warning
                 * if a devices file entry does not pass the filters.
                 * The !passes_filter here is log_debug instead of log_warn
                 * to avoid repeating the same message as device_ids_validate.
                 * (We could also print the warning here and then pass a
                 * parameter to suppress the warning in device_ids_validate.)
                 */
                log_debug("Checking filters with data for %s", dev_name(dev));
                if (!cmd->filter->passes_filter(cmd, cmd->filter, dev, NULL)) {
                        log_debug("filter result: %s in devices file is excluded by filter: %s.",
                                  dev_name(dev), dev_filtered_reason(dev));
                }
        }
        /*
         * Check that the pvid read from the lvm label matches the pvid
         * for this devices file entry. Also print a warning if a dev
         * from use_devices does not pass the filters that have been
         * run just above.
         */
        device_ids_validate(cmd, NULL, &invalid, 1);

        /*
         * Find and fix any devname entries that have moved to a
         * renamed device.
         */
        device_ids_find_renamed_devs(cmd, &found_devs, &search_count, 1);

        if (search_count && !strcmp(cmd->search_for_devnames, "none"))
                log_print("Not searching for missing devnames, search_for_devnames=\"none\".");

        dm_list_iterate_items(du, &cmd->use_devices) {
                if (du->dev)
                        label_scan_invalidate(du->dev);
        }

        /*
         * check du->part
         */
        dm_list_iterate_items(du, &cmd->use_devices) {
                int part = 0;

                if (!du->dev)
                        continue;
                dev = du->dev;

                dev_get_partition_number(dev, &part);

                if (part != du->part) {
                        log_warn("Device %s partition %u has incorrect PART in devices file (%u)",
                                 dev_name(dev), part, du->part);
                        du->part = part;
                        changes++;
                }
        }

        if (arg_is_set(cmd, update_ARG)) {
                if (invalid || !dm_list_empty(&found_devs)) {
                        if (!device_ids_write(cmd))
                                goto_bad;
                        log_print("Updated devices file to version %s", devices_file_version());
                } else {
                        log_print("No update for devices file is needed.");
                }
        }
        goto out;
}
if (arg_is_set(cmd, adddev_ARG)) {
        const char *devname;

        if (!(devname = arg_str_value(cmd, adddev_ARG, NULL)))
                goto_bad;

        /*
         * adddev will add a device to devices_file even if that device
         * is excluded by filters.
         */

        /*
         * No filter applied here (only the non-data filters would
         * be applied since we haven't read the device yet).
         */
        if (!(dev = dev_cache_get(cmd, devname, NULL))) {
                log_error("No device found for %s.", devname);
                goto bad;
        }

        /*
         * Reads pvid from dev header, sets dev->pvid.
         * (It's ok if the device is not a PV and has no PVID.)
         */
        label_scan_setup_bcache();
        if (!label_read_pvid(dev, NULL)) {
                log_error("Failed to read %s.", devname);
                goto bad;
        }
        /*
         * Allow filtered devices to be added to devices_file, but
         * check if it's excluded by filters to print a warning.
         * Since label_read_pvid has read the first 4K of the device,
         * the filters should not for the most part need to do any further
         * reading of the device.
         *
         * (This is the first time filters are being run, so we do
         * not need to wipe filters of any previous result that was
         * based on filter_deviceid_skip=0.)
         */
        cmd->filter_deviceid_skip = 1;

        if (!cmd->filter->passes_filter(cmd, cmd->filter, dev, NULL)) {
                log_warn("WARNING: adding device %s that is excluded by filter: %s.",
                         dev_name(dev), dev_filtered_reason(dev));
        }
        /* also allow deviceid_ARG? */
        deviceidtype = arg_str_value(cmd, deviceidtype_ARG, NULL);

        if (!device_id_add(cmd, dev, dev->pvid, deviceidtype, NULL))
                goto_bad;

        if (!device_ids_write(cmd))
                goto_bad;

        goto out;
}

if (arg_is_set(cmd, addpvid_ARG)) {
        struct id id;
        char pvid[ID_LEN + 1] = { 0 };
        const char *pvid_arg;

        label_scan_setup_bcache();

        /*
         * Iterate through all devs on the system, reading the
         * pvid of each to check if it has this pvid.
         * Devices that are excluded by no-data filters will not
         * be checked for the PVID.
         * addpvid will not add a device to devices_file if it's
         * excluded by filters.
         */
        pvid_arg = arg_str_value(cmd, addpvid_ARG, NULL);
        if (!id_read_format_try(&id, pvid_arg)) {
                log_error("Invalid PVID.");
                goto bad;
        }
        memcpy(pvid, &id.uuid, ID_LEN);

        if ((du = get_du_for_pvid(cmd, pvid))) {
                log_error("PVID already exists in devices file for %s.", dev_name(du->dev));
                goto bad;
        }

        if (!(dil = dm_pool_zalloc(cmd->mem, sizeof(*dil))))
                goto_bad;
        memcpy(dil->pvid, &pvid, ID_LEN);
        dm_list_add(&search_pvids, &dil->list);

        _search_devs_for_pvids(cmd, &search_pvids, &found_devs);

        if (dm_list_empty(&found_devs)) {
                log_error("PVID %s not found on any devices.", pvid);
                goto bad;
        }

        dm_list_iterate_items(devl, &found_devs) {
                deviceidtype = arg_str_value(cmd, deviceidtype_ARG, NULL);
                if (!device_id_add(cmd, devl->dev, devl->dev->pvid, deviceidtype, NULL))
                        goto_bad;
        }

        if (!device_ids_write(cmd))
                goto_bad;

        goto out;
}

if (arg_is_set(cmd, deldev_ARG)) {
        const char *devname;

        if (!(devname = arg_str_value(cmd, deldev_ARG, NULL)))
                goto_bad;

        /*
         * No filter because we always want to allow removing a device
         * by name from the devices file.
         */
        if (!(dev = dev_cache_get(cmd, devname, NULL))) {
                log_error("No device found for %s.", devname);
                goto bad;
        }

        /*
         * dev_cache_scan uses sysfs to check if an LV is using each dev
         * and sets this flag if so.
         */
skip indexing devices used by LVs in more commands
expands commit d5a06f9a7df5a43b2e2311db62ff8d3011217d74
"pvscan: skip indexing devices used by LVs"
The dev cache index is expensive and slow, so limit it
to commands that are used to observe the state of lvm.
The index is only used to print warnings about incorrect
device use by active LVs, e.g. if an LV is using a
multipath component device instead of the multipath
device. Commands that continue to use the index and
print the warnings:
fullreport, lvmdiskscan, vgs, lvs, pvs,
vgdisplay, lvdisplay, pvdisplay,
vgscan, lvscan, pvscan (excluding --cache)
A couple other commands were borrowing the DEV_USED_FOR_LV
flag to just check if a device was actively in use by LVs.
These are converted to the new dev_is_used_by_active_lv().
2021-07-08 00:59:56 +03:00
        if (dev_is_used_by_active_lv(cmd, dev, NULL, NULL, NULL, NULL)) {
                if (!arg_count(cmd, yes_ARG) &&
                    yes_no_prompt("Device %s is used by an active LV, continue to remove? ", devname) == 'n') {
                        log_error("Device not removed.");
                        goto bad;
                }
        }

        if (!(du = get_du_for_dev(cmd, dev))) {
                log_error("Device not found in devices file.");
                goto bad;
        }

        dm_list_del(&du->list);
        free_du(du);
        device_ids_write(cmd);
        goto out;
}

if (arg_is_set(cmd, delpvid_ARG)) {
        struct id id;
        char pvid[ID_LEN + 1] = { 0 };
        const char *pvid_arg;

        pvid_arg = arg_str_value(cmd, delpvid_ARG, NULL);
        if (!id_read_format_try(&id, pvid_arg)) {
                log_error("Invalid PVID.");
                goto bad;
        }
        memcpy(pvid, &id.uuid, ID_LEN);

        if (!(du = get_du_for_pvid(cmd, pvid))) {
                log_error("PVID not found in devices file.");
                goto bad;
        }

        dm_list_del(&du->list);

        if ((du2 = get_du_for_pvid(cmd, pvid))) {
                log_error("Multiple devices file entries for PVID %s (%s %s), remove by device name.",
                          pvid, du->devname, du2->devname);
                goto bad;
        }

        if (du->devname && (du->devname[0] != '.')) {
                if ((dev = dev_cache_get(cmd, du->devname, NULL)) &&
                    dev_is_used_by_active_lv(cmd, dev, NULL, NULL, NULL, NULL)) {
                if (!arg_count(cmd, yes_ARG) &&
                    yes_no_prompt("Device %s is used by an active LV, continue to remove? ", du->devname) == 'n') {
                        log_error("Device not removed.");
                        goto bad;
                }
            }
        }
        free_du(du);
        device_ids_write(cmd);
        goto out;
}
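The conversion described in the commit message above has roughly the following shape. This is only a sketch: the warning text is invented for illustration, while DEV_USED_FOR_LV and dev_is_used_by_active_lv() come from the message and code above.

        /* Before: relied on the dev cache index having been built,
         * which marks devices held open by active LVs with the
         * DEV_USED_FOR_LV flag. */
        if (dev->flags & DEV_USED_FOR_LV)
                log_warn("WARNING: %s is in use by an active LV.", dev_name(dev));

        /* After: ask the new helper about this one device directly,
         * so the command no longer needs to build the full index. */
        if (dev_is_used_by_active_lv(cmd, dev, NULL, NULL, NULL, NULL))
                log_warn("WARNING: %s is in use by an active LV.", dev_name(dev));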
add activation services
New systemd services for startup:
lvm-devices-wait.service
Used in place of systemd-udev-settle, this service waits
for udev+pvscan to process PVs listed in system.devices.
It runs the command "lvmdevices --wait pvsonline".
This only waits for PVs that can be matched to a device in
sysfs, so it only waits for devices attached to the system.
It waits specifically for the /run/lvm/pvs_online/<pvid>
files to be created by pvscan. It quits waiting after a
configurable number of seconds. This service gives the
first activation service a chance to activate VGs from
PVs that are available immediately at startup. If this
service quits waiting before all the expected pvid files
appear, then the VGs associated with those PVs will most
likely be activated by the -last service rather than by the
initial -main service. If those PVs are still not processed
by the time the -last service runs, their VGs will be
activated by event activation whenever processing finally
completes.
lvm-activate-vgs-main.service
Calls "vgchange -aay", after lvm-devices-wait, to activate
complete VGs. It only considers PVs that have been
processed by udev+pvscan and have pvs_online files.
This is expected to activate VGs from basic devices
(not virtual device types) that are present at startup.
lvm-activate-vgs-last.service
Calls "vgchange -aay", after multipathd has started, to
activate VGs that became available after virtual device
services were started, e.g. VGs on multipath devices.
Like -main, it only looks at PVs that have been processed
by pvscan.
This vgchange in the -last service enables event activation
by creating the /run/lvm/event-activation-on file. Event
activation will activate any further VGs that appear on the
system (or complete udev processing) after the -last service.
In the case of event activation, the udev rule will run
vgchange -aay <vgname> via a transient service
lvm-activate-<vgname>.service. This vgchange only scans
PVs in the VG being activated, also based on the pvs_online
files from pvscan.
When there are many VGs that need activation during system
startup, the two fixed services can activate them all much
faster than activating each VG individually via events.
lvm.conf auto_activation_settings can be used to configure
the behavior (default ["service_and_event", "pvscan_hints"]).
"service_and_event" - the behavior described above, where
activation services are used first, and event activation
is used afterward.
"service_only" - only lvm-activate-vgs-* are used, and
no event-based activation occurs after the services finish.
(Equivalent to setting lvm.conf event_activation=0.)
"event_only" - the lvm-activate-vgs* services are skipped,
and all VGs are activated individually with event-based
activation.
"pvscan_hints" - the vgchange autoactivation commands
use pvs_online files created by pvscan. This optimization
limits the devices scanned by the vgchange command to only
PVs that have been processed by pvscan.
2021-09-02 23:15:46 +03:00
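The message above says the -last service's vgchange enables event activation by creating the /run/lvm/event-activation-on file. A minimal sketch of that step, assuming nothing beyond the path given in the message (the helper name and error handling here are illustrative, not the actual implementation):

        #include <fcntl.h>
        #include <unistd.h>

        /* Create the flag file that switches on event-based activation;
         * per the message above, VGs appearing after this point are
         * activated individually by the udev-triggered
         * lvm-activate-<vgname> services. */
        static int _enable_event_activation(void)
        {
                int fd;

                if ((fd = open("/run/lvm/event-activation-on",
                               O_CREAT | O_WRONLY | O_TRUNC, 0644)) < 0)
                        return 0;
                if (close(fd))
                        return 0;
                return 1;
        }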
        if (arg_is_set(cmd, wait_ARG)) {
                if (strcmp("pvsonline", arg_str_value(cmd, wait_ARG, ""))) {
                        log_error("wait option invalid.");
                        goto bad;
                }

                /* TODO: lvm.conf lvmdevices_wait_settings "disabled" do nothing */
                /* TODO: lvm.conf auto_activation_settings "event_only" do nothing */
                /* TODO: if no devices file exists, what should this do?
                   do a udev-settle? do nothing and cause more event-based activations? */

                /* for each du, if du->wwid matched, wait for /run/lvm/pvs_online/du->pvid */
                dm_list_iterate_items(du, &cmd->use_devices) {
                        if (!du->dev)
                                continue;
                        if (!(dil = dm_pool_zalloc(cmd->mem, sizeof(*dil))))
                                continue;
                        dil->dev = du->dev;
                        memcpy(dil->pvid, du->pvid, ID_LEN);
                        dm_list_add(&wait_pvids, &dil->list);
                }

                log_print("Waiting for PVs online for %u matched devices file entries.",
                          dm_list_size(&wait_pvids));

                wait_sec = find_config_tree_int(cmd, devices_lvmdevices_wait_seconds_CFG, 0);
                begin = time(NULL);

                while (1) {
                        if (_all_pvids_online(cmd, &wait_pvids)) {
                                log_print("Found all PVs online");
                                goto out;
                        }

                        log_print("Waiting for PVs online for %u devices.", dm_list_size(&wait_pvids));

                        /* TODO: lvm.conf lvmdevices_wait_ids "sys_wwid=111", "sys_wwid=222" etc
                           waits for the specifically named devices even if the devices do not exist. */

                        if (!wait_sec || (time(NULL) - begin >= wait_sec)) {
                                log_print("Time out waiting for PVs online:");
                                dm_list_iterate_items(dil, &wait_pvids)
                                        log_print("Need PVID %s on %s", dil->pvid, dev_name(dil->dev));
                                break;
                        }

                        if (dm_list_size(&wait_pvids) > 10) {
                                if (interruptible_usleep(1000000)) /* 1 sec */
                                        break;
                        } else {
                                if (interruptible_usleep(500000)) /* .5 sec */
                                        break;
                        }
                }
                goto out;
        }
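The wait loop above relies on _all_pvids_online() becoming true once pvscan has created a /run/lvm/pvs_online/<pvid> file for every entry on wait_pvids. A minimal sketch of the per-PVID part of such a check, assuming the online files are simple flag files named by PVID under /run/lvm/pvs_online (the helper below is illustrative, not the actual implementation):

        #include <limits.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Return 1 if pvscan has created an online file for this PVID. */
        static int _pvid_online(const char *pvid)
        {
                char path[PATH_MAX];

                if (snprintf(path, sizeof(path), "/run/lvm/pvs_online/%s", pvid) < 0)
                        return 0;
                return access(path, F_OK) == 0;
        }

_all_pvids_online() would then presumably just walk the wait_pvids list and return 0 for the first PVID whose online file is still missing.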
device usage based on devices file
2020-06-23 21:25:41 +03:00
        /* If no options, print use_devices list */
        dm_list_iterate_items(du, &cmd->use_devices) {
                char part_buf[64] = { 0 };

                if (du->part)
                        snprintf(part_buf, 63, " PART=%d", du->part);

                log_print("Device %s IDTYPE=%s IDNAME=%s DEVNAME=%s PVID=%s%s",
                          du->dev ? dev_name(du->dev) : "none",
                          du->idtype ? idtype_to_str(du->idtype) : "none",
                          du->idname ? du->idname : "none",
                          du->devname ? du->devname : "none",
                          du->pvid ? (char *)du->pvid : "none",
                          part_buf);
        }
out:
        return ECMD_PROCESSED;
bad:
        return ECMD_FAILED;
}