When a thin-pool had queued a delete message during an extension
operation, the message was 'lost', and the thin-pool kernel metadata
was left with a thin volume that no longer existed in the lvm2 metadata.
dev_name(dev) returns "[unknown]" if there are no names
on dev->aliases. It's meant mainly for log messages.
Many places assume a valid path name is returned, and
use it directly. A caller that wants to use the path
from dev_name() must first check if the dev has any
paths with dm_list_empty(&dev->aliases).
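Inside lvm2's device layer, the intended calling pattern is roughly the
following (a minimal sketch; struct device, dm_list_empty and dev_name
come from the existing lvm2 headers):

  /* use dev_name() as a path only when the dev has at least one alias */
  static const char *dev_path_or_null(struct device *dev)
  {
          if (dm_list_empty(&dev->aliases))
                  return NULL;    /* dev_name() would return "[unknown]" */
          return dev_name(dev);   /* a valid path name */
  }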
Use dev_cache_get_existing() in a few common, high level
locations where it's obvious that only existing dev-cache
entries are wanted, along with some basic checks for cases
when a device has no aliases. This can be expanded and used
in more locations (or dev_cache_get can stop creating new
entries.)
lvm itself creates many situations where a struct device
has no valid paths, when it activates and opens an LV,
does something with it, e.g. zeroing, and then closes
and deactivates it. (dev-cache is intended for PVs, and
the use of LVs should be moved out of dev-cache in a
future patch.)
A different target type for an LV is not an internal error,
i.e. when the target type is replaced with the 'error' type, it should
be reported as a regular warning and not cause interruption of the
command with an internal error.
With very old versions of the DM target driver we have to avoid
trying to use the newuuid setting - otherwise we get an error
during the ioctl preparation phase.
This patch fixes a regression from commit:
988ea0e94c
In a certain disconnected state, a block device is present on
the system, can be opened, reports a valid size, reports the
correct device id (wwid), and matches a devices file entry.
But, reading the device can still fail. In this case,
device_ids_validate() was misinterpreting the read error as
the device having no data/label on it (and no PVID).
The validate function would then clear the PVID from the
devices file entry for the device, thinking that it was
fixing the devices file (making it consistent with the on disk
state.) Fix this by not attempting to check and correct a
devices file entry that cannot be read. Also make this case
explicit in the hints validation code (which was doing the
right thing but indirectly.)
When compiled and used with:
CFLAGS="-fsanitize=address -g -O0"
ASAN_OPTIONS=strict_string_checks=1:detect_stack_use_after_return=1:check_initialization_order=1:strict_init_order=1
we have a few reported issues - they were not normally spotted, since
we were still accessing our own memory - but out of buffer range.
TODO: there is still something to enhance with handling of #orphan vgids
Event-based autoactivation is now the only method that lvm
provides for autoactivation.
Setting lvm.conf event_activation=0 can still be used to disable
event-based autoactivation commands, but doing so will no longer
enable static autoactivation.
Removes some incorrect and unnecessary checks for other entries
when adding a new device. The removed checks and corrections were
mostly redundant with what is already done by device id matching.
Other checking is reworked so the warnings are a bit different.
When a device has a wwid (from sysfs), but lvm ignores the wwid,
e.g. because it contains an unreliable "QEMU" value, then lvm
falls back to using IDTYPE=devname (the device name) as the
device id. If the device name changes after reboot, then lvm
automatically searches for the PV on other devices to find the
new device name and correct system.devices. When searching for
the PV, lvm skips looking at devices that would use other id types,
e.g. if a device would use a wwid and not a devname, then it
skips checking it. However, it failed to account for the fact
that a device may have a wwid that was ignored, in which case it
should be checked.
. error exit means that lvmdevices --update would make a change.
. remove check of PART field from --check because it isn't used.
. unlink searched_devnames file to ensure check|update will search
The approach to duplicate VGIDs has been that it is not possible
or not allowed, so the behavior has been undefined. The actual
result was unpredictable and/or broken, and generally unhelpful.
Improve this by recognizing the problem, displaying the VGs,
and printing a warning to fix the problem. Beyond this,
using VGs with duplicate VGIDs remains undefined, but should
work well enough to correct the problem with vgchange -u.
It's possible to create this condition without too much difficulty
by cloning PVs, followed by an incomplete attempt at making the two
VGs unique (vgrename and pvchange -u, but missing vgchange -u.)
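For illustration, the complete sequence to make a cloned VG unique
would be (device and VG names here are examples):

  vgrename <vgid_of_clone> vg_clone   # unique VG name
  pvchange -u /dev/sdX                # new UUIDs for the cloned PVs
  vgchange -u vg_clone                # new VGID - the step often missed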
After a vg_write, this function was used to attempt to
make lvmcache data match the new state written to disk.
It was not updated correctly in many or most cases,
and the resulting lvmcache is not actually used after
vg_write, making the update unnecessary.
This reverts commit bd2baeaaa6.
This commit broke vgrename because vgrename relies on old bugs
in lvmcache_update_vg_from_write and lvmcache_update_vgname
which need to be fixed first.
The approach to duplicate VGIDs has been that it is not possible
or not allowed, so the behavior has been undefined. The actual
result was unpredictable and/or broken, and generally unhelpful.
Improve this by recognizing the problem, displaying the VGs,
and printing a warning to fix the problem. Beyond this,
using VGs with duplicate VGIDs remains undefined, but should
work well enough to correct the problem with vgchange -u.
It's possible to create this condition without too much difficulty
by cloning PVs, followed by an incomplete attempt at making the two
VGs unique (vgrename and pvchange -u, but missing vgchange -u.)
Since we check for present DM devices, cache the result for
further use when checking the presence of such devices.
lvm2 uses the cached result for label scan, but also when
it tries to activate or deactivate an LV - however only the simple
target 'striped' is reasonably supported.
Use disable_dm_devs to be able to control when lv_info()
gets cached or uncached results.
TODO: support more types, however this is getting very complicated.
Description stolen from linux d/b/rbd.c L3:
rbd.c -- Export ceph rados objects as a Linux block device
16 partitions seem to make sense according to L90:
#define RBD_SINGLE_MAJOR_PART_SHIFT 4
Running *scan -vvvvvvdddddd yields
#filters/filter-type.c:28 /dev/rbd1p5: Skipping: Unrecognised LVM device type 252
#filters/filter-persistent.c:131 filter caching bad /dev/rbd1p5
right now, and adding
types = ["rbd", 252]
to /e/l/lvm.conf (with the matching "252 rbd" in /p/devices) works as a
per-machine fix:
rbd1 252:16 0 1T 1 disk
|-rbd1p1 252:17 0 243M 1 part
|-rbd1p2 252:18 0 1K 1 part
`-rbd1p5 252:21 0 1023.8G 1 part
`-dev01--vg-root 253:0 0 1023.8G 0 lvm
but rbd is supported by upstream so it'd be nice to have it work OOB
Improve handling of md components that get through the
filter, like the previous improvement for multipath.
If md components get through the filter and trigger
duplicate PV code, then eliminate any devs entirely
that are not an md device.
If multipath component devices get through the filter and
cause lvm to see duplicate PVs, then check the wwid of the
devs and drop the component devices as if they had been
filtered. If a dm mpath device was found among the duplicates
then use that as the PV, otherwise do not use any of the
components as the PV.
"duplicate PVs" associated with multipath configs will no
longer stop commands from working.
Remove the searched_devnames file in a couple more places:
. When hints need refreshing it's possible that a missing
devices file entry could be found by searching devices
again.
. When a devices file entry devname is first found to be
incorrect, a new search for missing entries may be
useful.
When devnames are used as device ids and devnames change,
then new devices need to be located for the PVs. If the old
devname is now used by a filtered device, this was preventing
the code from searching for the new device, so the PV was
reported as missing.
If the optimized label scan fails (using online files),
then clear the device state prior to falling back to the
standard label_scan. This avoids printing output about
unexpected state.
Copy another optimization from pvscan -aay to vgchange -aay.
When using the optimized label scan for only one VG, acquire the
VG lock prior to the scan. This allows vg_read to then skip the
repeated label scan that normally happens after locking the vg.
Include the device name in the /run/lvm/pvs_online/pvid files.
Commands using the pvid file can use the devname to more quickly
find the correct device, vs finding the device using the
major:minor number. If the devname in the pvid file is missing
or incorrect, fall back to using the devno.
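The lookup order can be sketched like this (the helper and its
signature are illustrative, not the actual lvm2 function; the real
pvid file layout may differ):

  #include <sys/stat.h>

  /* Prefer the devname recorded in the pvs_online file; if it is
   * missing or no longer matches the recorded devno, the caller
   * falls back to resolving the major:minor number. */
  static int pv_dev_from_devname(const char *devname, dev_t devno)
  {
          struct stat st;

          if (!devname || !devname[0])
                  return 0;               /* no devname recorded */
          if (stat(devname, &st) || st.st_rdev != devno)
                  return 0;               /* stale name: use devno */
          return 1;                       /* devname is usable */
  }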
For completeness and consistency, adjust the behavior
for some variations of:
vgchange -aay --autoactivation event [vgname]
The current standard use is with a VG name arg, and the
command is only called when all pvs_online files exist.
This is the optimal case, in which only pvs_online devs
are read. This remains the same.
Clean up behaviors for some other unexpected uses of the
command:
. With no VG name arg, the command activates any VGs
that are complete according to pvs_online. If no
pvs_online files exist, it does nothing.
. If a VG name is used but no PVs online files exist for
the VG, or the PVs online files are incomplete, then
consider there could be a problem with the pvs_online
files, and fall back to a full label scan prior to
attempting the activation.
Part of the optimization to avoid a full dev_cache_scan requires
translating major:minor numbers to a device name. If this devno
translation fails, then fall back to doing a full dev_cache_scan
which is slower but certain to provide the info. This preserves
the most important part of the label scanning optimization in the
vgchange aay (avoiding dev_cache_scan is a relatively small part
of the optimized activation compared to label scanning.)
Port another optimization from pvscan -aay to vgchange -aay:
"pvscan: only add device args to dev cache"
This optimization avoids doing a full dev_cache_scan, and
instead populates dev-cache with only the devices in the
VG being activated.
This involves shifting the use of pvs_online files from
the hints interface up to the higher level label_scan
interface. This specialized label_scan is structured
around creating a list of devices from the pvs_online
files. Previously, a list of all devices was created
first, and then reduced based on the pvs_online files.
The initial step of listing all devices was slow when
thousands of devices were present on the system.
This optimization extends the previous optimization that
used pvs_online files to limit the devices that were
actually scanned (i.e. reading to identify the device):
"vgchange -aay: optimize device scan using pvs_online files"
The information in /run/lvm/pvs_online/<pvid> files can
be used to build a list of devices for a given VG.
The pvscan -aay command has long used this information to
activate a VG while scanning only devices in that VG, which
is an important optimization for autoactivation.
This patch implements the same thing through the existing
device hints interface, so that the optimization can be
applied elsewhere. A future patch will take advantage of
this optimization in vgchange -aay, which is now used in
place of pvscan -aay for event activation.
When a device id is set for a device, using an idtype other
than devname, it means that sysfs has been used with the device
to match the device id. So, checking for a sysfs entry for the
device in filter-sysfs is redundant. For any other cases not
covered by this (e.g. devname ids), have filter-sysfs simply
stat /sys/dev/block/major:minor to test if the device exists
in sysfs.
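The existence test reduces to a single stat() of the sysfs node,
roughly like this (a sketch of the technique, not the exact filter
code):

  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/sysmacros.h>

  /* Does the kernel know this device?  One stat() replaces the
   * old filter-sysfs scanning. */
  static int dev_in_sysfs(dev_t devno)
  {
          char path[64];
          struct stat st;

          snprintf(path, sizeof(path), "/sys/dev/block/%u:%u",
                   major(devno), minor(devno));
          return stat(path, &st) == 0;
  }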
The extensive processing done by filter-sysfs init is removed.
It was taking an immense amount of time with many devices, e.g.
. 1024 PVs in 520 VGs
. 520 concurrent vgchange -ay <vgname> commands
. vgchange scans only PVs in the named VG (based on pvs_online
files from a pending patch)
A large number of the vgchange commands were taking over 1 min,
and nearly half of that time was used by filter-sysfs init.
With this patch, the vgchange commands take about half the time.
When scanning the configured /dev dir, avoid entering
directories on a different filesystem.
This minimizes the risk that we will block on e.g. entering
a directory with a mount point.
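One way to detect the filesystem boundary is to compare st_dev of
each subdirectory against st_dev of the /dev root (a sketch of the
technique, not the exact lvm2 code):

  #include <sys/stat.h>

  /* Skip subdirectories living on a different filesystem than the
   * top of the /dev dir (e.g. a mount point we could block on). */
  static int _same_fs(dev_t devdir_fs, const char *path)
  {
          struct stat st;

          if (stat(path, &st))
                  return 0;
          return st.st_dev == devdir_fs;  /* filesystem id matches */
  }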
Resolve the event_activation configuration option just once.
Do not print debug_devs about 'bad' filtering when the
filter has already printed the reason for skipping.
Do not trace more than once about backup being disabled.
No debug message when an unlinked file does not exist in pvscan.
Reporting non-PVs / "all devices" is only done by
pvs -a or pvdisplay -a, so avoid the work of managing
a list of all devices in process_each_pv.
In the case when it's needed, use the results of
label_scan which already determines which devs
are not PVs.
Just setting lvm.conf level=N should not send messages to
syslog (now the journal by default.)
Sending messages to syslog should require setting lvm.conf
log { syslog=1 level=N }.
Configure via lvm.conf log/journal or command line --journal.
Possible values:
"command" records command information.
"output" records default command output.
"debug" records full command debugging.
Multiple values can be set in lvm.conf as an array.
One value can be set with --journal, which is added to the
values set in lvm.conf.
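For example (a sketch based on the values above), in lvm.conf:

  log {
          journal = ["command", "output"]
  }

with an extra value added for a single command via --journal debug.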
pvscan --cache <dev>
. read only dev
. create online file for dev
pvscan --listvg <dev>
. read only dev
. list VG using dev
pvscan --listlvs <dev>
. read only dev
. list VG using dev
. list LVs using dev
pvscan --cache --listvg [--checkcomplete] <dev>
. read only dev
. create online file for dev
. list VG using dev
. [check online files and report if VG is complete]
pvscan --cache --listlvs [--checkcomplete] <dev>
. read only dev
. create online file for dev
. list VG using dev
. list LVs using dev
. [check online files and report if VG is complete]
. [check online files and report if LVs are complete]
[--vgonline]
can be used with --checkcomplete, to enable use of a vg online
file. This results in only the first pvscan command to see
the complete VG to report 'VG complete', and others will report
'VG finished'. This allows the caller to easily run a single
activation of the VG.
[--udevoutput]
can be used with --cache --listvg --checkcomplete, to enable
an output mode that prints LVM_VG_NAME_COMPLETE='vgname' that
a udev rule can import, and prevents other output from the
command (other output causes udev to ignore the command.)
The list of complete LVs is meant to be passed to lvchange -aay,
or the complete VG used with vgchange -aay.
When --checkcomplete is used, lvm assumes that the output
will be used to trigger event-based autoactivation, so pvscan
does nothing if event_activation=0 and --checkcomplete is used.
Example of listlvs
------------------
$ lvs -a vg -olvname,devices
LV Devices
lv_a /dev/loop0(0)
lv_ab /dev/loop0(1),/dev/loop1(1)
lv_abc /dev/loop0(3),/dev/loop1(3),/dev/loop2(1)
lv_b /dev/loop1(0)
lv_c /dev/loop2(0)
$ pvscan --cache --listlvs --checkcomplete /dev/loop0
pvscan[35680] PV /dev/loop0 online, VG vg incomplete (need 2).
VG vg incomplete
LV vg/lv_a complete
LV vg/lv_ab incomplete
LV vg/lv_abc incomplete
$ pvscan --cache --listlvs --checkcomplete /dev/loop1
pvscan[35681] PV /dev/loop1 online, VG vg incomplete (need 1).
VG vg incomplete
LV vg/lv_b complete
LV vg/lv_ab complete
LV vg/lv_abc incomplete
$ pvscan --cache --listlvs --checkcomplete /dev/loop2
pvscan[35682] PV /dev/loop2 online, VG vg is complete.
VG vg complete
LV vg/lv_c complete
LV vg/lv_abc complete
Example of listvg
-----------------
$ pvscan --cache --listvg --checkcomplete /dev/loop0
pvscan[35684] PV /dev/loop0 online, VG vg incomplete (need 2).
VG vg incomplete
$ pvscan --cache --listvg --checkcomplete /dev/loop1
pvscan[35685] PV /dev/loop1 online, VG vg incomplete (need 1).
VG vg incomplete
$ pvscan --cache --listvg --checkcomplete /dev/loop2
pvscan[35686] PV /dev/loop2 online, VG vg is complete.
VG vg complete
The new system_id_source="appmachineid" will cause
lvm to use an lvm-specific derivation of the machine-id,
instead of the machine-id directly. This is now
recommended in place of using machineid.
Do not store the full path with each archived name; this reduces
memory usage if the directory has thousands of entries. Just add
the 'dir' path when needed.
Also emit an info message to the user if the total size of archived
files for a VG exceeds 128MiB, or there are more than 8192 files.
TODO: Consider whether adding a new 'lvm.conf archive{option}' to support
trimming these wild archive sizes could make the situation better.
We already support retain_min && retain_days - but if a user is
generating too many and too large archives within minutes - maybe
archiving should be disabled by the user - as it's not producing
anything largely usable and just slows down the command ??
If we add 'retain_max & retain_max_size' the conditions will go against
each other and we will need to choose priorities.
Consider a missing config tree from a VG read to be an internal error,
since we do not want to 'regenerate' it in an expensive parsing way.
Also, if there is any failure when recreating the committed VG, make it
a 'vg_write' error.
Corrupt metadata text (with good mda header) was being handled
in the label_scan phase, but not in the vg_read phase. This
was sufficient because metadata areas would always be read and
checksummed during label_scan (metadata parsing was skipped
previously as an optimization.)
This changed with the optimization in
commit 61a6f9905e
"metadata: optimize reading metadata copies in scan"
Now, some metadata areas will not be read and checksummed
at all during the label_scan phase, only during the vg_read
phase. This means that bad metadata text may first be detected
in the vg_read phase. So, add equivalent bad metadata handling
to the vg_read path to match the label_scan path.
While in the lockless scanning phase, we can avoid reading and checking
matching metadata copies if we already know them from another PV,
and just rely on matching metadata header information.
These copies will be examined later during the locked metadata read/write
access.
This patch may postpone discovering some read failures to the locked phase.
When creating lvm2 metadata for a VG, lvm2 allocates a buffer,
and if the buffer is not big enough, the buffer is 'reallocated' bigger,
and the whole metadata creation is repeated until the metadata fits.
We can use the 'previous' metadata size as a hint to reduce the looping
here.
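The pattern is a classic grow-and-retry loop; seeding the initial size
from the previous export avoids most iterations (a simplified sketch;
export_vg_to_buffer() is a hypothetical stand-in for the real metadata
export):

  #include <stdlib.h>

  struct volume_group;        /* opaque for this sketch */

  /* hypothetical exporter: returns 0 when buf is too small */
  int export_vg_to_buffer(const struct volume_group *vg, char *buf, size_t size);

  static size_t _last_size;   /* hint: size that fit last time */

  static char *export_metadata(const struct volume_group *vg)
  {
          size_t size = _last_size ? _last_size : 16384;
          char *buf;

          for (;;) {
                  if (!(buf = malloc(size)))
                          return NULL;
                  if (export_vg_to_buffer(vg, buf, size)) {
                          _last_size = size;  /* remember for next time */
                          return buf;
                  }
                  free(buf);                  /* too small: grow, retry */
                  size *= 2;
          }
  }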
Preserve the computed crc32 checksum from the first written PV, just like
we preserve the generated metadata.
Also there is no need to call crc32 twice on a wrapping buffer with 2
calculations; the result must always be the same as a single crc32 check.
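lvm2 has its own crc32 routine, but the property holds for any
running-CRC interface; with zlib shown here purely for illustration:

  #include <zlib.h>

  /* The metadata area is a ring buffer, so one metadata copy may be
   * stored as two chunks.  A single running crc32 over both chunks
   * equals one crc32 over the concatenated data. */
  static uLong crc_wrapped(const Bytef *p1, uInt l1,
                           const Bytef *p2, uInt l2)
  {
          uLong crc = crc32(0L, Z_NULL, 0);

          crc = crc32(crc, p1, l1);
          return crc32(crc, p2, l2);
  }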
sysconf() may also return -1, although rather theoretically.
Default to 4K should such a case happen.
Also call it just once in the function and keep the result as a static
variable.
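A sketch of the cached lookup with the 4K fallback:

  #include <unistd.h>

  static long _pagesize(void)
  {
          static long pagesize;            /* computed only once */

          if (!pagesize) {
                  pagesize = sysconf(_SC_PAGESIZE);
                  if (pagesize < 0)
                          pagesize = 4096; /* the theoretical -1 case */
          }
          return pagesize;
  }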
Ensure only a non-NULL 'du' pointer is dereferenced, although the comment
at the last assignment of the 'du' pointer already suggests the NULL case
should not happen.
So this is just being explicit.
The error path in _lockd_retrive_vg_pv_list() did not zero the released
path, causing a possible double-free later in the code.
Fix it by using one single function for freeing the lock_pvs structure.
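Centralizing the cleanup in one function that also clears the pointers
removes the double-free window (a sketch; the struct layout here is
simplified, not the actual lvmlockd definition):

  #include <stdlib.h>

  struct lock_pvs {               /* simplified for this sketch */
          char **paths;
          unsigned num;
  };

  /* Single cleanup used by both success and error paths: frees
   * everything and zeroes the pointers, so a repeated call (the
   * old double-free scenario) becomes a harmless no-op. */
  static void _free_lock_pvs(struct lock_pvs *lp)
  {
          unsigned i;

          if (!lp)
                  return;
          for (i = 0; i < lp->num; i++)
                  free(lp->paths[i]);
          free(lp->paths);
          lp->paths = NULL;
          lp->num = 0;
  }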
When cache creation fails on the table reload path, implement a more
advanced revert solution that tries to restore the state of the LVM
metadata to how it looked before the actual caching started.
Loading a table line with invalid MQ/SMQ policy settings causes immediate
rejection - to prevent such failures, automatically filter out invalid
settings before the line is uploaded by lvm2.
For an invalid setting, issue a warning informing the user how to remove
it.
This solution keeps running cached LVs that might have
been created in the past with invalid settings that were actually
unused due to another code bug.
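The filtering amounts to keeping only key/value pairs from a known-good
list before building the table line (a sketch; the whitelist shows only
a few real mq/smq tunables and is not complete):

  #include <string.h>

  static const char *const _valid_settings[] = {
          "migration_threshold",
          "sequential_threshold",
          "random_threshold",
  };

  static int _cache_setting_is_valid(const char *key)
  {
          unsigned i;

          for (i = 0; i < sizeof(_valid_settings) / sizeof(_valid_settings[0]); i++)
                  if (!strcmp(key, _valid_settings[i]))
                          return 1;
          return 0;       /* warn the user and drop this setting */
  }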
New versions of the kvdo module expose statistics at a new location:
/sys/block/dm-XXX/vdo/statistics/...
Enhance lvm2 to access this location first.
Also, if the statistics info is missing, make it 'debug' level info,
so it does not fail the 'lvs' command.
Since VDO always returns 'zero' on unprovisioned reads,
and every provisioned block is always 'zeroed' on partial writes,
we can avoid 'zeroing' of such LVs.
Some device id types can only be used with specific device major
numbers, so use this restriction to avoid some comparisons.
This is more efficient, and can avoid some incorrect matches.
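E.g. an id of type md_uuid can only belong to an md device, so a non-md
major number rules the match out immediately (a sketch; this mapping is
one example of the restriction):

  #include <sys/sysmacros.h>

  #define MD_MAJOR 9      /* md devices, from the kernel's major.h */

  /* Quick reject: an md_uuid device id can never match a device
   * whose major number is not the md major. */
  static int md_uuid_id_possible(dev_t devno)
  {
          return major(devno) == MD_MAJOR;
  }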
pvid and vgid are sometimes a null-terminated string, and
other times a 'struct id', and the two types were often
cast between each other. When a struct id was cast to a char
pointer, the resulting string would not necessarily be null
terminated. Casting a null-terminated string id to a
struct id is fine, but is still avoided when possible.
A struct id is: int8_t uuid[ID_LEN]
A string id is: char pvid[ID_LEN + 1]
A convention is introduced to help distinguish them:
- variables and struct fields named "pvid" or "vgid"
should be null-terminated strings.
- variables and struct fields named "pv_id" or "vg_id"
should be struct id's.
- examples:
char pvid[ID_LEN + 1];
char vgid[ID_LEN + 1];
struct id pv_id;
struct id vg_id;
Function names also attempt to follow this convention.
Avoid casting between the two types as much as possible,
with limited exceptions when known to be safe and clearly
commented.
Avoid using variations of strcpy and strcmp, and instead
use memcpy/memcmp with ID_LEN (with similar limited
exceptions possible.)
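Under this convention, converting a struct id to a usable string is an
explicit copy plus termination rather than a cast (a minimal sketch;
the helper name is illustrative, ID_LEN and struct id mirror the lvm2
definitions):

  #include <stdint.h>
  #include <string.h>

  #define ID_LEN 32

  struct id {
          int8_t uuid[ID_LEN];
  };

  /* struct id -> null-terminated string id: an explicit copy plus
   * termination, instead of casting the struct to a char pointer. */
  static void id_to_pvid_str(const struct id *pv_id, char pvid[ID_LEN + 1])
  {
          memcpy(pvid, pv_id->uuid, ID_LEN);
          pvid[ID_LEN] = 0;
  }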