/*
 * Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
 * Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */
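
/*
 * pvchange changes attributes of physical volumes: allocatability (-x),
 * tags (--addtag/--deltag), metadata area ignore (--metadataignore) and
 * the PV uuid (--uuid).  Illustrative invocations (device paths are
 * examples only):
 *
 *   pvchange -x n /dev/sdb1            # disallow allocation from the PV
 *   pvchange --addtag backup /dev/sdb1
 *   pvchange --uuid /dev/sdb1          # assign a new random uuid
 */
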
#include "tools.h"

struct pvchange_params {
	unsigned done;
	unsigned total;
};

static int _pvchange_single(struct cmd_context *cmd, struct volume_group *vg,
			    struct physical_volume *pv, struct processing_handle *handle)
{
	struct pvchange_params *params = (struct pvchange_params *) handle->custom_handle;
	const char *pv_name = pv_dev_name(pv);
	char pvid[ID_LEN + 1] __attribute__((aligned(8))) = { 0 };
	char uuid[64] __attribute__((aligned(8)));
	struct dev_use *du = NULL;
	unsigned done = 0;
	int used;

	int allocatable = arg_int_value(cmd, allocatable_ARG, 0);
	int mda_ignore = arg_int_value(cmd, metadataignore_ARG, 0);
	int tagargs = arg_is_set(cmd, addtag_ARG) + arg_is_set(cmd, deltag_ARG);

	params->total++;

	/*
	 * The primary location of this check is in vg_write(), but it needs
	 * to be copied here to prevent the pv_write() which is called before
	 * the vg_write().
	 */
	if (vg && lvmcache_has_duplicate_devs() && vg_has_duplicate_pvs(vg)) {
		if (!find_config_tree_bool(vg->cmd, devices_allow_changes_with_duplicate_pvs_CFG, NULL)) {
			log_error("Cannot update volume group %s with duplicate PV devices.",
				  vg->name);
			goto bad;
		}

		if (arg_is_set(cmd, uuid_ARG)) {
log_error ( " Resolve duplicate PV UUIDs with vgimportclone (or filters). " ) ;
goto bad ;
}
}
2002-01-29 20:23:33 +03:00
/* If in a VG, must change using volume group. */
2021-04-22 23:42:54 +03:00
if ( vg & & ! is_orphan ( pv ) ) {
2011-01-24 16:38:31 +03:00
if ( tagargs & & ! ( vg - > fid - > fmt - > features & FMT_TAGS ) ) {
2004-03-08 20:19:15 +03:00
log_error ( " Volume group containing %s does not "
" support tags " , pv_name ) ;
2015-02-12 19:37:47 +03:00
goto bad ;
2004-03-08 20:19:15 +03:00
}
2016-06-22 00:24:52 +03:00
if ( arg_is_set ( cmd , uuid_ARG ) & & lvs_in_vg_activated ( vg ) ) {
2004-01-13 21:42:05 +03:00
log_error ( " Volume group containing %s has active "
" logical volumes " , pv_name ) ;
2015-02-12 19:37:47 +03:00
goto bad ;
2004-01-13 21:42:05 +03:00
}
2015-03-10 13:25:14 +03:00
} else {
if ( tagargs ) {
log_error ( " Can't change tag on Physical Volume %s not "
" in volume group " , pv_name ) ;
goto bad ;
}
if ( ( used = is_used_pv ( pv ) ) < 0 )
goto_bad ;
if ( used & & ( arg_count ( cmd , force_ARG ) ! = DONT_PROMPT_OVERRIDE ) ) {
2016-02-25 23:12:08 +03:00
log_error ( " PV %s is used by a VG but its metadata is missing. " , pv_name ) ;
2015-03-10 13:25:14 +03:00
log_error ( " Can't change PV '%s' without -ff. " , pv_name ) ;
goto bad ;
}
2001-10-17 19:29:31 +04:00
}
2001-10-03 21:03:25 +04:00
2016-06-22 00:24:52 +03:00
if ( arg_is_set ( cmd , allocatable_ARG ) ) {
2007-11-02 17:54:40 +03:00
if ( is_orphan ( pv ) & &
2004-01-13 21:42:05 +03:00
! ( pv - > fmt - > features & FMT_ORPHAN_ALLOCATABLE ) ) {
log_error ( " Allocatability not supported by orphan "
" %s format PV %s " , pv - > fmt - > name , pv_name ) ;
2015-02-12 19:37:47 +03:00
goto bad ;
2004-01-13 21:42:05 +03:00
}
2001-10-03 21:03:25 +04:00
2004-01-13 21:42:05 +03:00
/* change allocatability for a PV */
2007-06-16 02:16:55 +04:00
if ( allocatable & & ( pv_status ( pv ) & ALLOCATABLE_PV ) ) {
2012-10-16 12:14:41 +04:00
log_warn ( " Physical volume \" %s \" is already "
" allocatable. " , pv_name ) ;
2015-02-12 19:37:47 +03:00
} else if ( ! allocatable & & ! ( pv_status ( pv ) & ALLOCATABLE_PV ) ) {
2012-10-16 12:14:41 +04:00
log_warn ( " Physical volume \" %s \" is already "
" unallocatable. " , pv_name ) ;
2015-02-12 19:37:47 +03:00
} else if ( allocatable ) {
2004-01-13 21:42:05 +03:00
log_verbose ( " Setting physical volume \" %s \" "
" allocatable " , pv_name ) ;
pv - > status | = ALLOCATABLE_PV ;
2015-02-12 19:37:47 +03:00
done = 1 ;
2004-01-13 21:42:05 +03:00
} else {
log_verbose ( " Setting physical volume \" %s \" NOT "
" allocatable " , pv_name ) ;
pv - > status & = ~ ALLOCATABLE_PV ;
2015-02-12 19:37:47 +03:00
done = 1 ;
2004-01-13 21:42:05 +03:00
}
2011-01-24 16:38:31 +03:00
}
2010-11-11 20:29:05 +03:00
2015-03-05 23:00:44 +03:00
/*
* Needed to change a property on an orphan PV .
* i . e . the global lock is only needed for orphans .
* Convert sh to ex .
*/
2016-01-16 00:18:25 +03:00
if ( is_orphan ( pv ) ) {
		if (!lock_global_convert(cmd, "ex"))
			return_ECMD_FAILED;
	}

	if (tagargs) {
		/* tag or deltag */
		if (arg_is_set(cmd, addtag_ARG) && !change_tag(cmd, NULL, NULL, pv, addtag_ARG))
			goto_bad;

		if (arg_is_set(cmd, deltag_ARG) && !change_tag(cmd, NULL, NULL, pv, deltag_ARG))
			goto_bad;

		done = 1;
	}

	if (arg_is_set(cmd, metadataignore_ARG)) {
		if (vg && (vg_mda_copies(vg) != VGMETADATACOPIES_UNMANAGED) &&
		    (arg_count(cmd, force_ARG) == PROMPT) &&
		    yes_no_prompt("Override preferred number of copies "
				  "of VG %s metadata? [y/n]: ",
				  pv_vg_name(pv)) == 'n')
			goto_bad;

		if (!pv_change_metadataignore(pv, mda_ignore))
			goto_bad;

		done = 1;
	}

	if (arg_is_set(cmd, uuid_ARG)) {
		/* --uuid: Change PV ID randomly */
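		/*
		 * Look up this PV's devices file entry (if any) by its current
		 * PVID now, so the entry can be rewritten with the new PVID
		 * after the change is written out (see the du handling below).
		 */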
		du = get_du_for_pvid(cmd, pv->dev->pvid);

		memcpy(&pv->old_id, &pv->id, sizeof(pv->id));

		if (!id_create(&pv->id)) {
			log_error("Failed to generate new random UUID for %s.",
				  pv_name);
			goto bad;
		}

		if (!id_write_format(&pv->id, uuid, sizeof(uuid)))
			goto_bad;

		log_verbose("Changing uuid of %s to %s.", pv_name, uuid);

		if (!is_orphan(pv) && (!pv_write(cmd, pv, 1))) {
			log_error("pv_write with new uuid failed "
				  "for %s.", pv_name);
			goto bad;
		}

		done = 1;
	}

	if (!done) {
		log_print_unless_silent("Physical volume %s not changed", pv_name);
		return ECMD_PROCESSED;
	}

	log_verbose("Updating physical volume \"%s\"", pv_name);
	if (vg && !is_orphan(pv)) {
		if (!vg_write(vg) || !vg_commit(vg)) {
			log_error("Failed to store physical volume \"%s\" in "
				  "volume group \"%s\"", pv_name, vg->name);
			goto bad;
		}
		backup(vg);
	} else if (!(pv_write(cmd, pv, 0))) {
		log_error("Failed to store physical volume \"%s\"",
			  pv_name);
		goto bad;
	}

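	/*
	 * If the PV has an entry in the devices file, record the new PVID in
	 * that entry, rewrite the devices file, and release the devices file
	 * lock taken for editing it.
	 */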
	if (du) {
		memcpy(pvid, &pv->id.uuid, ID_LEN);
		free(du->pvid);
		if (!(du->pvid = strdup_pvid(pvid)))
log_error ( " Failed to set pvid for devices file. " ) ;
if ( ! device_ids_write ( cmd ) )
log_warn ( " Failed to update devices file. " ) ;
unlock_devices_file ( cmd ) ;
}
log_print_unless_silent ( " Physical volume \" %s \" changed " , pv_name ) ;
2001-10-17 19:29:31 +04:00
2015-02-12 19:37:47 +03:00
params - > done + + ;
return ECMD_PROCESSED ;
bad :
log_error ( " Physical volume %s not changed " , pv_name ) ;
return ECMD_FAILED ;
2001-10-03 21:03:25 +04:00
}

int pvchange(struct cmd_context *cmd, int argc, char **argv)
{
	struct pvchange_params params = { 0 };
	struct processing_handle *handle = NULL;
	int ret;

	if (!(arg_is_set(cmd, allocatable_ARG) + arg_is_set(cmd, addtag_ARG) +
	      arg_is_set(cmd, deltag_ARG) + arg_is_set(cmd, uuid_ARG) +
	      arg_is_set(cmd, metadataignore_ARG))) {
		log_error("Please give one or more of -x, --uuid, "
			  "--addtag, --deltag or --metadataignore");
		ret = EINVALID_CMD_LINE;
		goto out;
	}

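	/*
	 * A uuid change also updates the PV's entry in the devices file
	 * (see the du handling in _pvchange_single), so mark the devices
	 * file for editing.
	 */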
	if (arg_is_set(cmd, uuid_ARG))
		cmd->edit_devices_file = 1;

	if (!(handle = init_processing_handle(cmd, NULL))) {
		log_error("Failed to initialize processing handle.");
		ret = ECMD_FAILED;
		goto out;
	}

	handle->custom_handle = &params;

	if (!(arg_is_set(cmd, all_ARG)) && !argc && !handle->internal_report_for_select) {
		log_error("Please give a physical volume path or use --select for selection.");
		ret = EINVALID_CMD_LINE;
		goto out;
	}

	if (arg_is_set(cmd, all_ARG) && argc) {
		log_error("Option --all and PhysicalVolumePath are exclusive.");
		ret = EINVALID_CMD_LINE;
		goto out;
	}

	set_pv_notify(cmd);

	/*
	 * Changing a PV uuid is the only pvchange that invalidates hints.
	 * Invalidating hints (clear_hint_file) is called at the start of
	 * the command and takes the hints lock.
	 * The global lock must always be taken first, then the hints lock
	 * (the required lock ordering.)
	 *
	 * Because of these constraints, the global lock is taken ex here
	 * for any PV uuid change, even though the global lock is technically
	 * required only for changing an orphan PV (we don't know until later
	 * if the PV is an orphan).  The VG lock is used when changing
	 * non-orphan PVs.
	 *
	 * For changes other than uuid on an orphan PV, the global lock is
	 * taken sh by process_each, then converted to ex in pvchange_single,
	 * which works because the hints lock is not held.
	 *
	 * (Eventually, perhaps always do lock_global(ex) here to simplify.)
	 */
	if (arg_is_set(cmd, uuid_ARG)) {
		if (!lock_global(cmd, "ex")) {
			ret = ECMD_FAILED;
			goto out;
		}
		clear_hint_file(cmd);
	}

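	/*
	 * Apply the change to each PV named on the command line (or selected
	 * via --select/--all); any VG holding a PV is read for update so the
	 * modified metadata can be written back.
	 */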
	ret = process_each_pv(cmd, argc, argv, NULL, 0, READ_FOR_UPDATE, handle, _pvchange_single);

	log_print_unless_silent("%d physical volume%s changed / %d physical volume%s not changed",
				params.done, params.done == 1 ? "" : "s",
				params.total - params.done, (params.total - params.done) == 1 ? "" : "s");

out:
	destroy_processing_handle(cmd, handle);

	return ret;
}