This vdo parameter existed in the early stages of integrating vdo into lvm2,
but it was later removed from the vdoformat tool - so any non-zero value
would actually cause an error on lvcreate.
The option was never stored on disk in lvm2 metadata.
Remove this vdo parameter from the lvm2 sources.
(The parameter is still accepted on the command line through the
--vdosettings option, but it is ignored.)
(cherry picked from commit 6ff65e6755)
Instead of using the size of the 'empty header' in the vdopool, use a fixed
size of 4K for the 'wrapping' vdo-pool device.
This fixes an issue where a user tried to activate a vdo-pool after
a conversion from a vdo managed device with 'vgchange -ay' - this
command activated all LVs, including the 'vdo-pool' wrapping device,
but the converted pool used a 0-length header.
The 4K size should usually prevent other tools like 'blkid' from
recognizing such a device as anything - so it shouldn't cause any
problems with duplicate identification of devices.
(cherry picked from commit e8e6347ba3)
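A minimal standalone sketch of what the fixed header implies (not lvm2's
actual code): keeping the first 4KiB of the wrapping device zeroed is what
stops signature scanners such as 'blkid' from recognizing it.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define VDO_POOL_HEADER_SIZE 4096 /* fixed 4K header described above */

    /* Zero the header area of the wrapping vdo-pool device so no stale
     * signature remains for blkid to identify. Error handling trimmed. */
    static int zero_pool_header(const char *dev_path)
    {
            char buf[VDO_POOL_HEADER_SIZE];
            int fd, ok;

            memset(buf, 0, sizeof(buf));

            if ((fd = open(dev_path, O_WRONLY)) < 0)
                    return 0;

            ok = (write(fd, buf, sizeof(buf)) == (ssize_t) sizeof(buf));
            (void) close(fd);

            return ok;
    }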
ATM the kernel VDO target does not handle resizing of inactive VDO LVs,
so prevent users from corrupting such LVs by requiring them to be active.
(manually backported from commit c20f01a0cb)
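A minimal sketch of the guard this implies, assuming lvm2's internal helpers
lv_is_vdo(), lv_is_active() and display_lvname(); treat it as illustrative
rather than the actual backported hunk.

    /* Refuse to resize a VDO LV unless it is active, since the kernel
     * target could otherwise corrupt it. */
    if (lv_is_vdo(lv) && !lv_is_active(lv)) {
            log_error("Cannot resize inactive VDO volume %s.",
                      display_lvname(lv));
            return 0;
    }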
Enhance the checking of vdo constraints so it also handles changes of active
VDO LVs, where only the added difference is now considered.
For this, the reported informational message about used memory was also
improved to list only the RAM-consuming blocks.
(cherry picked from commit 2451bc568f)
Introduce struct vdo_pool_size_config, usable to calculate the necessary
memory size for an active VDO volume.
The function lv_vdo_pool_size_config() is able to read this
configuration out of the runtime DM table line.
(cherry picked from commit 1bed2cafe8)
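For reference, a sketch of the structure; the exact field list is an
assumption based on the description above (the pool's physical and virtual
sizes plus the two memory-related settings):

    #include <stdint.h>

    /* Assumed layout: sizes follow lvm2 conventions, cache and index
     * memory sizes are in MiB. */
    struct vdo_pool_size_config {
            uint64_t physical_size;
            uint64_t virtual_size;
            uint32_t block_map_cache_size_mb;
            uint32_t index_memory_size_mb;
    };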
When we are actually resizing a VDO device, we need to check the size only
in the non-critical section - otherwise we would be checking the
not-yet-resized device.
(cherry picked from commit 773b88e028)
Since lvm2 may create a VDO pool virtual size aligned only on the extent
size, while VDO itself is just 4K aligned, we need to support such
misalignment.
(cherry picked from commit 2e79b005c2)
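A minimal sketch of the rounding this involves, assuming VDO's fixed 4KiB
block size; the names are illustrative.

    #include <stdint.h>

    #define VDO_BLOCK_SIZE 4096ULL /* VDO's fixed block size */

    /* Round an extent-aligned size down to the nearest 4KiB boundary;
     * anything past this point is unusable by the VDO target. */
    static inline uint64_t vdo_align_down(uint64_t size_bytes)
    {
            return size_bytes & ~(VDO_BLOCK_SIZE - 1ULL);
    }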
When the volume size is extended, there is no need to flush
IO operations (nothing can be targeting the new space yet).
The VDO target is supported as a target that can safely work
under this condition.
Such support is also needed when extending the VDOPOOL size
while the pool is reaching its capacity - since this allows
work to continue without hitting the 'out-of-space' condition
that flushing all in-flight IO would cause.
(cherry picked from commit e26c21cb8d)
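At the device-mapper level this corresponds to suspending without flush; a
minimal sketch using the public libdevmapper API (error handling trimmed,
'name' is the DM device name):

    #include <libdevmapper.h>

    /* Suspend a DM device without flushing outstanding IO - the
     * behavior relied on when extending a nearly full VDO pool. */
    static int suspend_noflush(const char *name)
    {
            struct dm_task *dmt;
            int r = 0;

            if (!(dmt = dm_task_create(DM_DEVICE_SUSPEND)))
                    return 0;

            if (dm_task_set_name(dmt, name) &&
                dm_task_no_flush(dmt) &&    /* skip flushing in-flight IO */
                dm_task_run(dmt))
                    r = 1;

            dm_task_destroy(dmt);

            return r;
    }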
When lvcreate is making a VDO pool and the user has not specified a -V size,
ATM we actually run 'vdoformat' twice to get a properly 'extent'-aligned
size matching lvm2 properties - so the 2nd run of vdoformat can stay
with 'log_verbose()', and the standard printed result no longer shows
confusing info (and now also correctly uses print_unless_silent).
(cherry picked from commit fc5bc5985d)
When creating a VDO pool based on % values, lvm2 is now more clever
and avoids creating 'unsupportable' sizes of physical backend
volumes, as 16TiB is the maximum size supported by the VDO target
(which is also limited by the maximum number of supportable slabs
(8192) for a given slab size).
If the requested virtual size approaches the maximum supported size
of 4PiB, switch the header size to 0.
(cherry picked from commit e2e31d9acf)
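A sketch of the clamping these limits imply; the constant names are
illustrative, the numbers come from the message above.

    #include <stdint.h>

    #define VDO_MAX_SLABS    8192ULL       /* maximum slab count */
    #define VDO_MAX_PHYSICAL (16ULL << 40) /* 16TiB target limit */

    /* Largest physical backend size worth creating for a given slab
     * size: bounded by the slab count and the 16TiB target maximum. */
    static uint64_t max_vdo_physical_bytes(uint64_t slab_size_bytes)
    {
            uint64_t max = slab_size_bytes * VDO_MAX_SLABS;

            return (max > VDO_MAX_PHYSICAL) ? VDO_MAX_PHYSICAL : max;
    }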
Newer VDO kernel targets require a matching virtual size - this
however causes an incompatibility when lvcreate is left to format the VDO
data device and read the usable size from vdoformat.
Although this is a kernel regression and will likely get fixed,
lvm2 can actually reformat the VDO device to use a properly aligned VDO LV
size to make this problem disappear.
(cherry picked from commit a477490e81)
Add a function to check for available memory for a particular VDO
configuration - to avoid unnecessary machine swapping for configs
that will not fit into memory (possibly in a locked section).
The formula tries to estimate the RAM size the machine can use, even
with swapping, for the kernel target - while still leaving some amount
of usable RAM.
The estimation is based on the documented RAM usage of the VDO target.
If /proc/meminfo were theoretically unavailable, try the 'sysinfo()'
function instead, although this gives only free RAM without any
knowledge of how much RAM could eventually be swapped.
TODO: move _get_memory_info() into a generic lvm2 API function used
by other targets with non-trivial memory requirements.
(cherry picked from commit ebad057579)
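A hypothetical stand-in for the _get_memory_info() mentioned in the TODO,
showing the described lookup order; the real lvm2 code differs.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/sysinfo.h>

    /* Report total and available RAM in MiB. Prefer /proc/meminfo;
     * fall back to sysinfo(), which knows nothing about swap. */
    static int get_memory_info(uint64_t *total_mb, uint64_t *available_mb)
    {
            char line[128];
            uint64_t kb;
            FILE *f;
            struct sysinfo si;

            if ((f = fopen("/proc/meminfo", "r"))) {
                    *total_mb = *available_mb = 0;
                    while (fgets(line, sizeof(line), f)) {
                            if (sscanf(line, "MemTotal: %" SCNu64 " kB", &kb) == 1)
                                    *total_mb = kb / 1024;
                            else if (sscanf(line, "MemAvailable: %" SCNu64 " kB", &kb) == 1)
                                    *available_mb = kb / 1024;
                    }
                    (void) fclose(f);
                    if (*total_mb && *available_mb)
                            return 1;
            }

            if (sysinfo(&si) == 0) {
                    *total_mb = ((uint64_t) si.totalram * si.mem_unit) >> 20;
                    *available_mb = ((uint64_t) si.freeram * si.mem_unit) >> 20;
                    return 1;
            }

            return 0;
    }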
Keep a single source for most of the values printed in lvm.conf
(still needs some conversion).
Correct the maximum for logical threads to 60.
(We may refuse some older configurations which might eventually
use higher numbers - but so far let's assume no user has ever set this,
as it's been non-trivial, and it would complicate the code unnecessarily.)
Accept a maximum of 4PiB for the virtual size of a VDO LV
(lvm2 will drop the header borders to 0 in this case).
(cherry picked from commit b5c8e591ed)
Add era length validation to dm_vdo_validate_target_params()
and reuse this validator for _check_lv_segment() as well.
(cherry picked from commit 8ca2b1bc21)
When a thin-pool had queued a delete message at the time of an extension
operation, the message was 'lost', and the thin-pool kernel metadata was
left with a thin volume that no longer existed in the lvm2 metadata.
(cherry picked from commit 0937113146)
Take the devices file lock before creating a new devices file.
(This was missed by the change to preemptively create the devices
file prior to setup_devices(), which was done to improve the
error path.)
pvcreate with --uuid would segfault if a devices file entry matched
the specified pvid, but the devices file entry had no device_id, which
could happen if the entry has a devname idtype.
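A minimal sketch of the fixed matching logic, with simplified types modeled
on the description (an entry with a devname idtype may carry a pvid but a
NULL device_id):

    #include <string.h>

    struct dev_use {                /* simplified devices file entry */
            const char *pvid;       /* may match the pvid given to --uuid */
            const char *device_id;  /* may be NULL for devname idtype */
    };

    /* Return the entry's device_id for a matching pvid, or NULL.
     * The old code dereferenced device_id unconditionally and could
     * segfault; callers must now handle NULL. */
    static const char *du_device_id_for_pvid(const struct dev_use *du,
                                             const char *pvid)
    {
            if (!du->pvid || strcmp(du->pvid, pvid))
                    return NULL;

            return du->device_id;
    }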
multipath_component_detection=0 has always applied to the filter-based
component detection. Also apply this setting to the duplicate-PV
handling, which likewise eliminates multipath components (based on
duplicate PVs having the same wwid).
Read additional device wwids from sysfs to compare with the
wwids in /etc/multipath/wwids when excluding multipath
components. The wwid printed from the sysfs wwid file may not
be the wwid used in the multipath wwids file. Save the wwids
found for each device on dev->wwids to avoid repeatedly
reading and parsing the sysfs files.
Fixes commit 494372b4ee
"filter-mpath: use multipath blacklist"
to handle wwids with the initial type digits 1 and 2, used
for t10 and eui ids. Originally only type 3 (naa) was recognized.
A typo in the filename after --devicesfile should result in a
command error rather than the command falling back to using no
devices file at all. The exception is vgcreate|pvcreate, which
create a new devices file if the named file doesn't exist.
Explicit wwids from the multipath blacklist and
blacklist_exceptions sections control whether the same wwid in
/etc/multipath/wwids is recognized as a multipath component.
Other non-wwid keywords are not used and may require disabling
the use of the multipath wwids file in lvm.conf.
Change messages that refer to devices being "excluded by filters"
to say just "excluded". This avoids mistaking the word "filters"
for the lvm.conf filter setting.
dev_name(dev) returns "[unknown]" if there are no names
on dev->aliases. It's meant mainly for log messages, yet
many places assume a valid path name is returned and use
it directly. A caller that wants to use the path from
dev_name() must first check whether the dev has any paths
with dm_list_empty(&dev->aliases).
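A sketch of the caller-side pattern this implies (dev->aliases and
dm_list_empty() as found in the lvm2/libdevmapper tree; the fragment is
illustrative):

    /* Only trust dev_name() after confirming an alias exists;
     * otherwise it returns the "[unknown]" placeholder. */
    if (dm_list_empty(&dev->aliases)) {
            log_error("No usable path for device.");
            return 0;
    }

    name = dev_name(dev); /* now guaranteed to be a real path */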
Use dev_cache_get_existing() in a few common, high-level
locations where it's obvious that only existing dev-cache
entries are wanted, along with some basic checks for cases
when a device has no aliases. This can be expanded and used
in more locations (or dev_cache_get can stop creating new
entries).
lvm itself creates many situations where a struct device
has no valid paths: it activates and opens an LV,
does something with it, e.g. zeroing, and then closes
and deactivates it. (dev-cache is intended for PVs, and
the use of LVs should be moved out of dev-cache in a
future patch.)
In a certain disconnected state, a block device is present on
the system, can be opened, reports a valid size, reports the
correct device id (wwid), and matches a devices file entry.
But, reading the device can still fail. In this case,
device_ids_validate() was misinterpreting the read error as
the device having no data/label on it (and no PVID).
The validate function would then clear the PVID from the
devices file entry for the device, thinking that it was
fixing the devices file (making it consistent with the on disk
state.) Fix this by not attempting to check and correct a
devices file entry that cannot be read. Also make this case
explicit in the hints validation code (which was doing the
right thing but indirectly.)
Remove some incorrect and unnecessary checks for other entries
when adding a new device. The removed checks and corrections were
mostly redundant with what is already done by device id matching.
Other checking is reworked so the warnings are a bit different.
Improve the handling of md components that get through the
filter, like the previous improvement for multipath.
If md components get through the filter and trigger the
duplicate-PV code, then entirely eliminate any devs
that are not an md device.
If multipath component devices get through the filter and
cause lvm to see duplicate PVs, then check the wwid of the
devs and drop the component devices as if they had been
filtered. If a dm mpath device was found among the duplicates
then use that as the PV, otherwise do not use any of the
components as the PV.
"duplicate PVs" associated with multipath configs will no
longer stop commands from working.
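A minimal sketch of the resolution rule described above, with hypothetical
types (the real code operates on lvm2's device structures):

    #include <stdbool.h>
    #include <stddef.h>

    struct dup_dev {
            const char *name;
            bool is_dm_mpath;      /* device-mapper multipath device? */
            struct dup_dev *next;
    };

    /* Among duplicate PVs sharing a wwid: prefer the dm mpath device;
     * if none is present, all are components and none becomes the PV. */
    static struct dup_dev *choose_pv(struct dup_dev *duplicates)
    {
            struct dup_dev *d;

            for (d = duplicates; d; d = d->next)
                    if (d->is_dm_mpath)
                            return d;

            return NULL; /* drop all multipath components */
    }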
. error exit means that lvmdevices --update would make a change.
. remove check of PART field from --check because it isn't used.
. unlink searched_devnames file to ensure check|update will search
Remove the searched_devnames file in a couple more places:
. When hints need refreshing it's possible that a missing
devices file entry could be found by searching devices
again.
. When a devices file entry devname is first found to be
incorrect, a new search for missing entries may be
useful.
When devnames are used as device ids and devnames change,
then new devices need to be located for the PVs. If the old
devname is now used by a filtered device, this was preventing
the code from searching for the new device, so the PV was
reported as missing.