The devices file /etc/lvm/devices/system.devices is a list of
devices that lvm can use. This is the default system devices
file, which is specified in lvm.conf devices/devicesfile.
The command option --devicesfile <filename> allows lvm to be
used with a different set of devices. This allows different
applications to use lvm on different sets of devices, e.g.
system devices do not need to be exposed to an application
using lvm on its own devices, and application devices do not
need to be exposed to the system.
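For illustration, an entry in the devices file pairs a device ID with
the device name and PVID. The values below are made up, and the exact
fields can vary by version:

  IDTYPE=sys_wwid IDNAME=naa.600a098038303877 DEVNAME=/dev/sdb PVID=4ckWDI03Zz0YHF2fVpBcO7JpfhC1bkTY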
In most cases (with limited exceptions), lvm will not read or
use a device not listed in the devices file. When the devices
file is used, the regex filter is not used, and the filter
settings in lvm.conf are ignored. filter-deviceid is used
when the devices file is enabled, and rejects any device that
does not match an entry in the devices file.
Set use_devicesfile=0 in lvm.conf or set --devicesfile ""
on the command line to disable the use of a devices file.
When disabled, lvm will see and use any device on the system
that passes the regex filter (and other standard filters.)
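For example, to disable the devices file globally in lvm.conf, or for
a single command:

  # lvm.conf
  devices {
      use_devicesfile = 0
  }

  $ pvs --devicesfile ""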
A device ID, e.g. wwid or serial number from sysfs, is a
unique ID that identifies a device. The device ID is
generally independent of the device content, and lvm can
get the device ID without reading the device.
The device ID is used in the devices file as the primary
method of identifying device entries, and is also included
in VG metadata for PVs.
Each device_id has a device_id_type which indicates where
the device_id comes from, e.g. "sys_wwid" means the device_id
comes from the sysfs wwid file. Others are sys_serial,
mpath_uuid, loop_file, md_uuid, devname. (devname is the
device path, which is a fallback when no other proper
device_id_type is available.)
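For example, the sys_wwid value typically comes from a sysfs file such
as the following (the exact path depends on the device type, and the
value shown is made up):

  $ cat /sys/block/sdb/device/wwid
  naa.600a098038303877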
filter-deviceid permits lvm to use only devices on the system
that have a device_id matching a devices file entry. Using
the device_id, lvm can determine the set of devices to use
without reading any devices, so the devices file will constrain
lvm in two ways:
1. it limits the devices that lvm will read.
2. it limits the devices that lvm will use.
In some uncommon cases, e.g. when devices have no unique ID
and device_id has to fall back to using the devname, lvm may
need to read all devices on the system to determine which
ones correspond to the devices file entries. In this case,
the devices file does not limit the devices that lvm reads,
but it does limit the devices that lvm uses.
pvcreate/vgcreate/vgextend are not constrained by the devices
file, and will look outside it to find the new PV. They assign
the new PV a device_id and add it to the devices file. It is
also possible to explicitly add new PVs to the devices file before
using them in pvcreate/etc, in which case these commands would not
need to look outside the devices file for the new device.
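An illustrative example (the device name is arbitrary):

  $ pvcreate /dev/sdc             # assigns a device_id, adds a devices file entry

or, adding the entry explicitly first:

  $ lvmdevices --adddev /dev/sdc
  $ pvcreate /dev/sdc             # the new device is found within the devices file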
vgimportdevices VG looks at all devices on the system to find an
existing VG and add its devices to the devices file. The command
is not limited by an existing devices file. The command will also
add device_ids to the VG metadata if the VG does not yet include
device_ids. vgimportdevices -a imports devices for all accessible
VGs. Since vgimportdevices does not limit itself to devices in
an existing devices file, the lvm.conf regex filter applies.
Adding --foreign will import devices for foreign VGs, but device_ids
are not added to foreign VGs. Incomplete VGs are not imported.
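Example usage (the VG name is arbitrary):

  $ vgimportdevices vg0           # import devices for one VG
  $ vgimportdevices -a            # import devices for all accessible VGs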
The lvmdevices command manages the devices file. The primary
purpose is to edit the devices file, but it will read PV headers
to find/check PVIDs. (It does not read, process or modify VG
metadata.)
lvmdevices
. Displays devices file entries.
lvmdevices --check
. Checks devices file entries.
lvmdevices --update
. Updates devices file entries.
lvmdevices --adddev <devname>
. Adds devices file entry (reads the pv header).
lvmdevices --deldev <devname>
. Removes devices file entry.
lvmdevices --addpvid <pvid>
. Reads pv header of all devices to find <pvid>,
and if found adds devices file entry.
lvmdevices --delpvid <pvid>
. Removes devices file entry.
The vgimportclone command has a new option --importdevices
that does the equivalent of vgimportdevices with the cloned
devices that are being imported. The devices are "uncloned"
(new vgname and pvids) while at the same time adding the
devices to the devices file. This allows cloned PVs to be
imported without duplicate PVs ever appearing on the system.
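A hypothetical example, where /dev/sdd1 and /dev/sde1 are the cloned
PVs and --basevgname (an existing vgimportclone option) names the new VG:

  $ vgimportclone --basevgname vg_snap --importdevices /dev/sdd1 /dev/sde1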
The command option --devices <devnames> allows a specific
list of devices to be exposed to the lvm command, overriding
the devices file.
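For example (the device list is comma-separated):

  $ pvs --devices /dev/sdb,/dev/sdc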
lvm2 normally blocks signals during protected phases where it does not
want to be interrupted. Support interruptible processing where it is
allowed, in the section between sigint_allow() ... sigint_restore(),
and let io_getevents() finish with EINTR.
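A minimal C sketch of the intended pattern; sigint_allow(),
sigint_restore() and sigint_caught() are lvm2's signal helpers, and the
wrapper function itself is made up for illustration:

  #include <libaio.h>

  /* lvm2 signal helpers, declared in lvm2's own headers */
  void sigint_allow(void);
  void sigint_restore(void);
  int sigint_caught(void);

  static int _wait_events(io_context_t ctx, struct io_event *events, long nr)
  {
          int r;

          sigint_allow();                 /* permit SIGINT during the wait */
          r = io_getevents(ctx, 1, nr, events, NULL);
          sigint_restore();               /* back to the protected phase */

          if (r == -EINTR && sigint_caught())
                  return 0;               /* interrupted: stop issuing io */

          return r;
  }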
When bcache tries to write data to a faulty device, it may run out of
caching blocks and then just busy-loop on a CPU. Guard against this by
checking whether there are already max_io (~64) errored blocks.
Call _wait_all(), which checks whether there is still some pending IO,
before sleeping. Otherwise our submitted IO operations may already have
been dispatched, and the call then waits endlessly for IO that is all
done. This can be reproduced when a device quickly returns errors on
write requests.
Add a "device index" (di) for each device, and use this
in the bcache api to the rest of lvm. This replaces the
file descriptor (fd) in the api. The rest of lvm uses
new functions bcache_set_fd(), bcache_clear_fd(), and
bcache_change_fd() to control which fd bcache uses for
io to a particular device.
. lvm opens a dev and gets an fd.
fd = open(dev);
. lvm passes fd to the bcache layer and gets a di
to use in the bcache api for the dev.
di = bcache_set_fd(fd);
. lvm uses bcache functions, passing di for the dev.
bcache_write_bytes(di, ...), etc.
. bcache translates di to fd to do io.
. lvm closes the device and clears the di/fd bcache state.
close(fd);
bcache_clear_fd(di);
In the bcache layer, a di-to-fd translation table
(int *_fd_table) is added. When bcache needs to
perform io on a di, it uses _fd_table[di].
In the following commit, lvm will make use of the new
bcache_change_fd() function to change the fd that
bcache uses for the dev, without dropping cached blocks.
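A simplified sketch of the translation table; the real bcache code
differs (e.g. it grows the table on demand), and everything beyond the
function names mentioned above is made up:

  static int *_fd_table;          /* _fd_table[di] is the fd for index di */
  static int _fd_table_size;

  int bcache_set_fd(int fd)
  {
          int di;

          for (di = 0; di < _fd_table_size; di++)
                  if (_fd_table[di] == -1) {      /* free slot */
                          _fd_table[di] = fd;
                          return di;
                  }

          return -1;      /* table full; the real code expands it */
  }

  void bcache_clear_fd(int di)
  {
          _fd_table[di] = -1;
  }

  void bcache_change_fd(int di, int fd)
  {
          /* swap the fd behind di; cached blocks keyed by di stay valid */
          _fd_table[di] = fd;
  }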
Switch the remaining zero-sized struct members to flexible arrays to be
C99 compliant.
These simple rules should apply:
- The incomplete array type must be the last element within the structure.
- There cannot be an array of structures that contain a flexible array member.
- Structures that contain a flexible array member cannot be used as a member of another structure.
- The structure must contain at least one named member in addition to the flexible array member.
Some pieces of the code should still be improved, though.
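A generic sketch of the conversion; the struct and function here are
made up for illustration:

  #include <stdlib.h>
  #include <string.h>

  struct str_data {
          unsigned len;
          char value[];           /* was: char value[0]; (GNU extension) */
  };

  static struct str_data *str_data_create(const char *s)
  {
          size_t len = strlen(s);
          /* allocate the header plus room for the flexible member */
          struct str_data *sd = malloc(sizeof(*sd) + len + 1);

          if (!sd)
                  return NULL;

          sd->len = len;
          memcpy(sd->value, s, len + 1);

          return sd;
  }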
When a larger write request was initiated, it could happen that bcache
ran out of free chunks - fix the loop that is supposed to wait until
the next free chunk becomes available again.
It's possible for a dev-cache entry to remain after all
paths for it have been removed, and other parts of the
code expect that a dev always has a name. A better fix
may be to remove a device from dev-cache after all paths
to it have been removed.
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum. When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error. The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.
Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):
lvcreate --type raidN --raidintegrity y [options]
Add an integrity layer to images of an existing raid LV:
lvconvert --raidintegrity y LV
Remove the integrity layer from images of a raid LV:
lvconvert --raidintegrity n LV
Settings
Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
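For example (VG/LV names are arbitrary):

  $ lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n lv0 -L1G vg0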
Initialization
When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV. The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized. The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)
Example: create a raid1 LV with integrity:
$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
Logical volume "rr_rimage_0_imeta" created.
Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
Logical volume "rr_rimage_1_imeta" created.
Logical volume "rr" created.
$ lvs -a foo
LV                   VG   Attr       LSize  Origin              Cpy%Sync
rr                   foo  rwi-a-r--- 1.00g                      4.93
[rr_rimage_0]        foo  gwi-aor--- 1.00g  [rr_rimage_0_iorig] 41.02
[rr_rimage_0_imeta]  foo  ewi-ao---- 12.00m
[rr_rimage_0_iorig]  foo  -wi-ao---- 1.00g
[rr_rimage_1]        foo  gwi-aor--- 1.00g  [rr_rimage_1_iorig] 39.45
[rr_rimage_1_imeta]  foo  ewi-ao---- 12.00m
[rr_rimage_1_iorig]  foo  -wi-ao---- 1.00g
[rr_rmeta_0]         foo  ewi-aor--- 4.00m
[rr_rmeta_1]         foo  ewi-aor--- 4.00m
Do this at two levels, although one would be enough to
fix the problem seen recently:
- Ignore any reported sector size other than 512 or 4096.
If either sector size (physical or logical) is reported
as 512, then use 512. If neither are reported as 512,
and one or the other is reported as 4096, then use 4096.
If neither is reported as either 512 or 4096, then use 512.
- When rounding up a limited write in bcache to be a multiple
of the sector size, check that the resulting write size is
not larger than the bcache block itself. (This shouldn't
happen if the sector size is 512 or 4096.)
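A C sketch of the selection rule described in the first point (the
function name is made up):

  static unsigned _choose_sector_size(unsigned physical, unsigned logical)
  {
          if (physical == 512 || logical == 512)
                  return 512;

          if (physical == 4096 || logical == 4096)
                  return 4096;

          return 512;     /* neither 512 nor 4096 reported */
  }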
An active md device with an end superblock causes lvm to
enable full md component detection. This was being done
within the filter loop instead of before, so the full
filtering of some devs could be missed.
Also incorporate the recently added config setting that
controls the md component detection.
If udev info, which would indicate whether a device is an MD component,
is missing for the device, then do an end-of-device read to check
whether a PV is an MD component. (This is skipped when using hints,
since we already know the devs in hints are good.)
A new config setting md_component_checks can be used to
disable the additional end-of-device MD checks, or to
always enable end-of-device MD checks.
When both hints and udev info are disabled/unavailable,
the end of PVs will now be scanned by default. If md
devices with end-of-device superblocks are not being
used, the extra I/O overhead can be avoided by setting
md_component_checks="start".
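For example:

  # lvm.conf
  devices {
      md_component_checks = "start"
  }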
udev_dev_is_md_component and udev_dev_is_mpath_component
are not used for obtaining the device list, but they still
use libudev for device info. When there are problems with
udev, these functions can get stuck. So, use the existing
obtain_device_list_from_udev config setting to also control
whether these "is component" functions are used, which gives
us a way to avoid using libudev entirely when it's causing
problems.
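For example:

  # lvm.conf
  devices {
      obtain_device_list_from_udev = 0
  }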
Save the list of PVs in /run/lvm/hints. These hints
are used to reduce scanning in a number of commands
to only the PVs on the system, or only the PVs in a
requested VG (rather than all devices on the system.)