The user creates a file listing the real devices they want
the lvm tests to use, and sets LVM_TEST_DEVICE_LIST to
point at it. lvm tests can use these devices with
prepare_real_devs and get_real_devs.
Other aux functions do not work with these devices.
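For example, a minimal sketch (the file path and device
names are arbitrary placeholders, not fixed by the test
suite):

```shell
# List the real devices the lvm tests are allowed to use
# (these device names are only placeholders).
cat > /tmp/lvm_test_devs <<'EOF'
/dev/sdb
/dev/sdc
EOF

# Point the test suite at the list.
export LVM_TEST_DEVICE_LIST=/tmp/lvm_test_devs
```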
The devices file /etc/lvm/devices/system.devices
is a list of devices that lvm can use.
The option --devicesfile can specify a different file
name with a separate set of devices for lvm to use.
This option allows different applications to use
lvm on different sets of devices.
In most cases (with limited exceptions), lvm will not
read or use a device not listed in the devices file.
When the devices file is used, the filter-regex is
not used and the filter settings in lvm.conf are
ignored. filter-deviceid is used when the devices
file is enabled and rejects any device that does not
match an entry in the devices file.
Set use_devicesfile = 0 in lvm.conf or set
--devicesfile "" on the command line to disable the
use of a devices file. When disabled, lvm will see
and use any device on the system that passes the
regex filter.
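A usage sketch of how these options combine (the file name
app1.devices is made up; named devices files are expected
to live under /etc/lvm/devices/):

```shell
# Use the default devices file /etc/lvm/devices/system.devices:
pvs

# Use an alternate devices file with its own set of devices:
pvs --devicesfile app1.devices

# Disable the devices file for one command; lvm then sees any
# device on the system that passes the regex filter:
pvs --devicesfile ""
```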
A device_id, e.g. wwid or serial number from sysfs,
is a unique ID that identifies a device without
reading it. Two devices with identical content
should have different device_ids in most common
cases. The device_id is used in the devices file
and is included in VG metadata sections.
Each device_id has a device_id_type which indicates
where the device_id comes from, e.g. "sys_wwid"
means the device_id comes from the sysfs wwid file.
Others are sys_serial, mpath_uuid, loop_file, devname.
(devname is the device path which is a fallback when
no other proper device_id_type is available.)
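The sysfs sources behind these types can be inspected
directly; a sketch, assuming a SCSI disk exposed as sda
(the device name is illustrative, and not every device
provides every file):

```shell
# Source for device_id_type "sys_wwid":
cat /sys/block/sda/device/wwid

# Source for device_id_type "sys_serial":
cat /sys/block/sda/device/serial
```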
filter-deviceid permits lvm to use only devices
on the system that have a device_id matching a
devices file entry. Using the device_id, lvm can
determine the set of devices to use without reading
any devices, so the devices file will constrain lvm
in two ways:
1. it limits the devices that lvm will read.
2. it limits the devices that lvm will use.
In some uncommon cases, e.g. when devices have no
unique ID and device_id has to fall back to using
the devname, lvm may need to read all devices on the
system to determine which ones correspond to the
devices file entries. In this case, the devices file
does not limit the devices that lvm reads, but it does
limit the devices that lvm uses.
pvcreate/vgcreate/vgextend are not constrained by
the devices file, and will look outside it to find
the new PV. They assign the new PV a device_id
and add it to the devices file. It is also possible
to explicitly add new PVs to the devices file before
using them in pvcreate/etc, in which case these commands
would not need to access devices outside the devices file.
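A sketch of both workflows (device names are illustrative):

```shell
# pvcreate looks outside the devices file for the new device,
# assigns it a device_id, and adds it to the devices file:
pvcreate /dev/sdb

# Alternatively, add the device to the devices file first,
# so pvcreate does not need to access devices outside it:
lvmdevices --adddev /dev/sdc
pvcreate /dev/sdc
```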
vgimportdevices VG looks at all devices on the system
to find an existing VG and add its devices to the
devices file. The command is not limited by an
existing devices file. The command will also add
device_ids to the VG metadata if the VG does not yet
include device_ids. vgimportdevices -a imports devices
for all accessible VGs. Since vgimportdevices does not
limit itself to devices in an existing devices file, the
lvm.conf regex filter applies. Adding --foreign will
import devices for foreign VGs, but device_ids are
not added to foreign VGs. Incomplete VGs are not
imported.
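Usage sketch (the VG name vg0 is illustrative):

```shell
# Find VG "vg0" among all devices on the system, add its
# devices to the devices file, and add device_ids to its
# metadata if missing:
vgimportdevices vg0

# Import devices for all accessible VGs:
vgimportdevices -a
```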
The lvmdevices command manages the devices file.
The primary purpose is to edit the devices file,
but it will read PV headers to find/check PVIDs.
(It does not read, process or modify VG metadata.)
lvmdevices
. Displays devices file entries.
lvmdevices --check
. Checks devices file entries.
lvmdevices --update
. Updates devices file entries.
lvmdevices --adddev <devname>
. Adds a devices file entry (reads the pv header).
lvmdevices --deldev <devname>
. Removes devices file entry.
lvmdevices --addpvid <pvid>
. Reads pv header of all devices to find <pvid>,
and if found adds devices file entry.
lvmdevices --delpvid <pvid>
. Removes devices file entry.
The vgimportclone command has a new option --importdevices
that does the equivalent of vgimportdevices with the cloned
devices that are being imported. The devices are "uncloned"
(new vgname and pvids) while at the same time adding the
devices to the devices file. This allows cloned PVs to be
imported without duplicate PVs ever appearing on the system.
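Usage sketch (the VG name and device names are
illustrative):

```shell
# Unclone the PVs (new vgname and new pvids) and add them to
# the devices file in the same step, so duplicate PVs never
# appear on the system:
vgimportclone --basevgname vg_snap --importdevices /dev/sdX /dev/sdY
```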
TODO:
device_id_type for other special devices (nbd, drbd, others?)
dmeventd run commands with --devicesfile dmeventd.devices
OTHER:
allow operations with duplicate pvs if device id and size match only one dev
shortsystemid: crc of the systemid, written in the pv header
use shortsystemid for new filter and orphan PV ownership
command to set boot flag on devices file entries needed for boot
vgchange -ay option to use devices file entries with boot flag
Verify that corruption is corrected for raid levels other
than raid1. For other raid levels, attempt to corrupt the
given file pattern on each underlying device, since we don't
know which device contains the file being corrupted.
This ensures that corruption is actually introduced
when testing the other raid levels.
Verify that corruption is being corrected by checking
that the integritymismatches count is non-zero for the
raid LV; this count includes the total from all images
(since we don't know which image will have the corruption).
Each integrity image in a raid LV reports its own number
of integrity mismatches, e.g.
lvs -o integritymismatches vg/lv_rimage_0
lvs -o integritymismatches vg/lv_rimage_1
In addition to this, allow the total number of integrity
mismatches from all images to be displayed for the raid LV.
lvs -o integritymismatches vg/lv
shows the number of mismatches from both lv_rimage_0 and
lv_rimage_1.
Enhance the VDO man page with a chapter describing memory
usage and space requirements.
Remove some unneeded blank lines in the man page.
Use more precise terminology.
Correct examples, since cpool and vpool are protected names.
Filters needing I/O weren't being run because bcache
wasn't set up. Read the first 4k of the device
before doing filtering or reading on-disk structs, to
reduce reads.
It's possible for a machine with a non-4k page size
to create a PV with an mda_header at an offset other
than 4k. Fix pvck --dump to work with these other
mda offsets. pvck --repair will write a new first
mda at 4096, but lvm on machines with other page sizes
will still work with this.
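Usage sketch (the device name is illustrative):

```shell
# Dump the on-disk headers; this now works whatever offset
# the first mda_header was written at:
pvck --dump headers /dev/sda

# Repair writes a new first mda at offset 4096:
pvck --repair /dev/sda
```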
The args for pvcreate/pvremove (and vgcreate/vgextend
when applicable) were not efficiently opened, scanned,
and filtered. This change reorganizes the opening
and filtering in the following steps:
- label scan and filter all devs
. open ro
. standard label scan at the start of command
- label scan and filter dev args
. open ro
. uses full md component check
. typically the first scan and filter of pvcreate devs
- close and reopen dev args
. open rw and excl
- repeat label scan and filter dev args
. using reopened rw excl fd
- wipe and write new headers
. using reopened rw excl fd
Older versions of the blockdev tool return a failure
error code for --help, and since the script now aborts
on command failure, detect missing --getsize64 support
directly by running the command and checking whether it
returns something usable. It is unlikely for a system to
have such an old blockdev tool together with a newly
compiled lvm2.
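The detection can be sketched as a small shell function
(the function name is made up for illustration): run the
command and accept the result only if it prints a plain
number.

```shell
# Return success if blockdev supports --getsize64 on $1,
# judged by whether it prints a usable (numeric) size.
have_getsize64() {
    local size
    size=$(blockdev --getsize64 "$1" 2>/dev/null) || return 1
    case "$size" in
        ''|*[!0-9]*) return 1 ;;   # not a plain number -> unusable
    esac
    return 0
}
```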
Since 'kilobytes' can be read two ways - SI as '1000',
while programmers typically mean '1024' - switch to the
commonly accepted KiB, MiB, ....
Resolves RHBZ 1496255.
Test the case where the filesystem has been corrected via
fsck. In that case fsck returns '1' as success, and it
should be handled the same way as '0' since the fs is
correct.
Set a more secure bash failure mode for pipelines.
Avoid using unset variables.
Enhance error reporting for failing commands.
Avoid calling error via 'case..esac || error'.
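The failure mode and the fsck exit-code convention can be
sketched as follows (the fsck_ok function name is made up
for illustration):

```shell
# Fail on errors, unset variables, and pipeline failures.
set -euo pipefail

# fsck exit code 0 means clean, 1 means errors were corrected;
# both leave a correct filesystem, so treat both as success.
fsck_ok() {
    local rc=0
    fsck -n "$1" || rc=$?
    [ "$rc" -le 1 ]
}
```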
In some cases the dev size may not have been read yet
in set_pv_devices(). In this case get the dev size
before comparing the dev size with the pv size.
Restructure the pvscan code, and add new temporary files
that list pvids in a VG, used for processing PVs that
have no metadata.
The new temp files, in /run/lvm/pvs_lookup/<vgname>, allow a
proper pvscan --cache to be done on PVs that have no metadata.
pvscan --cache <dev> is only supposed to read <dev>, but when
<dev> has no metadata, this had not been possible. The
command had to fall back to scanning all devices to read all
VG metadata to get the list of all PVIDs needed to check for
a complete VG. Now, the temp file can be used in place of
reading metadata from all PVs on the system.
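Usage sketch (the device name is illustrative):

```shell
# pvscan --cache is only supposed to read the named device;
# if that PV has no metadata, the PVID list kept in
# /run/lvm/pvs_lookup/<vgname> lets the command check VG
# completeness without scanning all other devices:
pvscan --cache /dev/sdb
```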
Read the lvm headers and set dev->pvid if the device
is a PV. The difference from the label_scan_ functions
is that this does not read any VG metadata or add any
info to lvmcache.
Filtering in label_scan was controlled indirectly by
the fact that bcache was not yet set up when label_scan
first ran. The result is that filters that needed data
would not run and would return -EAGAIN, which would
result in the dev flag FILTER_AFTER_SCAN being set.
After the dev header was read for checking the label,
filters would be rechecked because of FILTER_AFTER_SCAN.
All filters would be checked this time because bcache
was now set up, and the filters needing data would
largely use data already scanned for reading the label.
This design worked but is hard to adjust for future
cases where bcache is already set up.
Replace this method (based on setting up bcache, or not)
with a new cmd flag filter_nodata_only. When this flag
is set filters that need data will not run. This allows
the same label_scan behavior when bcache has been set up.
There are no expected changes in behavior.
Stack allocation touching validated the given size against
the rlimit, and if reserved_stack was above the rlimit, it
was completely ignored - now we will always touch the stack
up to rlimit/2.
The cmd context has a 'threaded' value that used to be set
by clvmd and allowed proper memory locking management.
Reuse the same bit for dmeventd.
Since dmeventd uses a 300KiB stack per thread,
we will ignore any user settings for allocation/reserved_stack
until some better solution is found.
This avoids crashing dmeventd when the user changes this
value, and because in most cases lvm2 should work fine with
a 64K stack size, this change should not cause any problems.
Check that the type is always defined; if not, make it an
explicit internal error (although logged as debug, so caught
only with the proper lvm.conf setting).
This ensures that a NULL type can't later be dereferenced,
causing a coredump.
The DM tree keeps track of devices created while preloading
a device tree. When a failure occurs during such a preload,
it will now try to remove all created and preloaded devices.
This makes it easier to maintain device stacking, since we
do not need to check in depth for the existence of all
possible created devices during the failure.