Instead of parsing the whole /proc/kallsyms, use a faster variant
based on the logic of the modprobe tool.
lvm2 here wants to know whether a particular DM cache policy is
present in the kernel. However, since the cache policy does not have
any kernel module parameters and it can be built into the kernel,
there is no /sys/module directory in such a case and we would need to
call modprobe every time we want to detect it.
The old solution tried to look for a particular kernel symbol
(and likely not in the right way, as smq_exit might actually be omitted).
The new version checks MODULES_PATH/`uname -r`/modules.builtin for
whether the cache policy module is present, instead of the
CPU-expensive parsing of kallsyms.
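A minimal self-contained sketch of that lookup ('_module_is_builtin' is
a hypothetical helper, not the actual lvm2 function; the real modprobe
logic additionally treats '-' and '_' in module names as equivalent):

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

/* Return 1 if the module (e.g. "dm-cache-smq") is listed in
 * /lib/modules/`uname -r`/modules.builtin, 0 if not, -1 on error. */
static int _module_is_builtin(const char *name)
{
	struct utsname uts;
	char path[PATH_MAX], line[PATH_MAX];
	size_t len = strlen(name);
	FILE *f;
	int found = 0;

	if (uname(&uts) < 0)
		return -1;

	snprintf(path, sizeof(path), "/lib/modules/%s/modules.builtin",
		 uts.release);

	if (!(f = fopen(path, "r")))
		return -1;

	/* Each line is a path like "kernel/drivers/md/dm-cache-smq.ko":
	 * compare the basename without the ".ko" suffix. */
	while (fgets(line, sizeof(line), f)) {
		char *base = strrchr(line, '/');

		base = base ? base + 1 : line;
		if (!strncmp(base, name, len) &&
		    (!strcmp(base + len, ".ko\n") || !strcmp(base + len, ".ko"))) {
			found = 1;
			break;
		}
	}

	fclose(f);
	return found;
}

int main(void)
{
	printf("dm-cache-smq built-in: %d\n", _module_is_builtin("dm-cache-smq"));
	return 0;
}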
Reuse the existing report/headings config setting to make it possible
to change the type of headings to display:
0 - no headings
1 - column name abbreviations (default and original functionality)
2 - full column names (equal to the exact names that -o|--options
also accepts to set report output)
Also, add the '--headings none|abbrev|full|0|1|2' command line option
so the heading type can be selected directly for each LVM reporting
command.
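For illustration, a hypothetical session showing the difference (output
shortened; 'LV'/'LSize' and 'lv_name'/'lv_size' are the abbreviated and
full headings for the name and size columns):

# lvs --headings abbrev vg -o name,size
  LV    LSize
  lvol1 4.00m
# lvs --headings full vg -o name,size
  lv_name lv_size
  lvol1   4.00m

The same choice can be made persistent with the report/headings setting
in lvm.conf (0, 1 or 2, as listed above).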
We incorrectly marked the pv_major and pv_minor fields as being of
string type, even though the values were already correctly handled as
integers internally. This confused -S|--select, which then tried to
compare string values instead of integers.
Reported here: https://github.com/lvmteam/lvm2/issues/122
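With the fields typed correctly, numeric comparisons in selection
criteria now behave as expected, e.g. (hypothetical device numbers):

# pvs -o pv_name,pv_major,pv_minor -S 'pv_minor > 16'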
If we are executing 'report_for_selection' to do an internal report
just for the selection itself (not for display), we can call it more
than once. In that case, we are reusing the same selection handle
(e.g. inside 'process_each_lv_in_vg').
The selection handle has 'report_type' field which is a union of all
report types needed for the report based on selection fields we actually
use.
The 'report_type' is further refined based on checks and rules inside
'_get_final_report_type', which 'report_for_selection' calls. This
final report type then unambiguously identifies the proper branch to
take in 'report_all_in_{pv,vg,lv}', which is called next.
If the 'report_for_selection' is called more than once with the same
selection handle, we need to make sure that we always restore the report
type to its initial value, so all the rules inside 'report_for_selection'
are applied correctly next time.
This patch fixes the missing restoration of the 'report_type' value in
'selection_handle' that is reused for recurring 'report_for_selection'
calls.
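A minimal self-contained sketch of the fix (all names are simplified
stand-ins for the lvm2 structures and functions named above; the
narrowing itself is illustrative only):

#include <stdio.h>

typedef unsigned report_type_t;

struct selection_handle {
	report_type_t report_type;	/* union of report types for all selection fields */
};

/* Stand-in for lvm2's _get_final_report_type: narrows the type in place. */
static void _get_final_report_type(struct selection_handle *sh)
{
	sh->report_type &= 0x0fu;	/* illustrative narrowing only */
}

static int report_for_selection(struct selection_handle *sh)
{
	report_type_t saved_report_type = sh->report_type;

	_get_final_report_type(sh);
	/* ... the final type picks the branch in report_all_in_{pv,vg,lv} ... */

	/* The fix: restore the initial value so a recurring call with the
	 * same selection handle (e.g. from process_each_lv_in_vg) applies
	 * all the rules from scratch. */
	sh->report_type = saved_report_type;

	return 1;
}

int main(void)
{
	struct selection_handle sh = { .report_type = 0xffu };

	report_for_selection(&sh);
	report_for_selection(&sh);	/* correct only because report_type was restored */
	printf("report_type=0x%x\n", sh.report_type);

	return 0;
}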
An example scenario where this failed was selecting an LV for
removal with "lvremove --select" while using a field in the selection
that required an extra DM info or DM status call for the LV (that is,
"Logical Volume Device Info Fields" and "Logical Volume Device Status Fields"
as visible in 'lvs -S help').
When we are using -S|--select for non-reporting tools while using command log
reporting (log/report_command_log=1 setting), we need to create an internal
processing handle to handle the selection itself. In this case, the internal
processing handle to execute the selection (to process the -S|--select) has
a parent handle (that is processing the actual non-reporting command).
When this parent handle exists, we can't destroy the command log report
in destroy_processing_handle as there's still the parent processing to
finish. The parent processing may still generate logs which need to be
reported in the command log report. If the command log report was
destroyed prematurely together with destroying the internal processing
handle for -S|--select, then any subsequent log request from processing
the actual command (and hence an attempt to access the command log report)
ended up with a segfault.
See also: https://bugzilla.redhat.com/show_bug.cgi?id=2175220
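A minimal self-contained sketch of the ownership rule (all names are
simplified stand-ins for lvm2's processing handle code):

#include <stdlib.h>

struct report { int dummy; };		/* stand-in for the command log report */

struct processing_handle {
	struct processing_handle *parent; /* set for the internal -S|--select handle */
	struct report *cmd_log_report;	  /* command log report shared with parent */
};

static void destroy_processing_handle(struct processing_handle *handle)
{
	if (!handle)
		return;

	/* The fix: only a top-level handle (no parent) may destroy the
	 * command log report - the parent's processing still needs to
	 * log into it. */
	if (!handle->parent)
		free(handle->cmd_log_report);

	free(handle);
}

int main(void)
{
	struct processing_handle *parent = calloc(1, sizeof(*parent));
	struct processing_handle *child = calloc(1, sizeof(*child));

	if (!parent || !child)
		return 1;

	parent->cmd_log_report = calloc(1, sizeof(struct report));
	child->parent = parent;
	child->cmd_log_report = parent->cmd_log_report;

	destroy_processing_handle(child);  /* the report stays alive for the parent */
	destroy_processing_handle(parent); /* the report is destroyed here */

	return 0;
}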
Since 67722b3123, we have a new mechanism
to run the autoactivation from udev. With this change, we also replaced
the way the LVM autoactivation service is instantiated - instead of
setting the SYSTEMD_WANTS udev variable (which systemd read and then
instantiated the service), we're now directly instantiating the
transient 'lvm-activate-<vgname>' service by calling systemd-run.
As such, we don't need to bother with setting the SYSTEMD_READY variable
for foreign devices anymore (in this case, MD and loop devices on top of
which there's a PV).
Before, we set the SYSTEMD_READY variable to make sure that the SYSTEMD_WANTS
is applied correctly - the service instantiation was edge-triggered by
flipping the SYSTEMD_READY from 0 to 1 and at the same time having the
SYSTEMD_WANTS variable set to the service name to instantiate. We're
using systemd-run now so this condition does not apply anymore.
Also, it was not completely correct to set SYSTEMD_READY for foreign
devices because there might be cases where this could cause issues,
see also https://github.com/lvmteam/lvm2/issues/94.
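Conceptually, the udev rule now directly runs something like this
(options abbreviated here; see the udev rules shipped with lvm2 for the
exact invocation):

  systemd-run --no-block --unit lvm-activate-<vgname> lvm vgchange -aay <vgname>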
When activating an origin and its thick snapshots, ensure the
origin LV's udev processing is finished first and only after this
reactivate its snapshots, so that udev can scan them afterwards.
This should fix the problem for users who use the UUID of such a device
in their fstab and occasionally got the snapshot mounted instead of the
origin LV.
Enhance the checking of VDO constraints so it also handles changes of
active VDO LVs, where only the added difference is now considered.
For this, the reported informational message about used memory was
also improved to list only the RAM-consuming blocks.
When executing process_each_lv_in_vg, we process live LVs first and
after that, we process any historical LVs. In case we have just removed
an LV, which also means we have just made it "historical" and so it
appears as a fresh item in the vg->historical_lvs list, we have to skip
it when we get to processing historical LVs inside the same
process_each_lv_in_vg call.
The simplest approach here, without introducing another LV list, is to
simply mark such historical LVs as "fresh" directly in struct
historical_logical_volume when we have just removed the original LV
and created the historical LV for it. Then, we just need to check the
flag when processing historical LVs and skip it if it is "fresh".
When we read historical LVs out of metadata, they are marked as
"not fresh" and so they can be processed as usual.
This was mainly an issue in conjunction with -S|--select use:
# lvmconfig --type diff
metadata {
	record_lvs_history=1
}
(In this example, a thin pool with lvol1 thin LV and lvol2 and lvol3 snapshots.)
# lvs -H vg -o name,pool_lv,full_ancestors,full_descendants
  LV    Pool FAncestors  FDescendants
  lvol1 pool             lvol2,lvol3
  lvol2 pool lvol1       lvol3
  lvol3 pool lvol2,lvol1
  pool
# lvremove -S 'name=lvol2'
  Logical volume "lvol2" successfully removed.
  Historical logical volume "lvol2" successfully removed.
...here, the historical LV lvol2 should not have been removed because
we have just removed its original non-historical lvol2 and the fresh
historical lvol2 must not be included in the same processing spree.