New systemd services for startup:
lvm-devices-wait.service
Used in place of systemd-udev-settle, this service waits
for udev+pvscan to process PVs listed in system.devices.
It runs the command "lvmdevices --wait pvsonline".
This only waits for PVs that can be matched to a device in
sysfs, so it only waits for devices attached to the system.
It waits specifically for the /run/lvm/pvs_online/<pvid>
files to be created by pvscan. It quits waiting after a
configurable number of seconds. This service gives the
first activation service a chance to activate VGs from
PVs that are available immediately at startup. If this
service quits waiting before all the expected pvid files
appear, then the VG associated with those PVs will most
likely be activated by the -last service rather than the
initial -main service. If those PVs are even slower to
complete processing than the -last service, then the VG
will be activated by event activation whenever they are
finally complete.
lvm-activate-vgs-main.service
Calls "vgchange -aay", after lvm-devices-wait, to activate
complete VGs. It only considers PVs that have been
processed by udev+pvscan and have pvs_online files.
This is expected to activate VGs from basic devices
(not virtual device types) that are present at startup.
lvm-activate-vgs-last.service
Calls "vgchange -aay", after multipathd has started, to
activate VGs that became available after virtual device
services were started, e.g. VGs on multipath devices.
Like -main, it only looks at PVs that have been processed
by pvscan.
This vgchange in the -last service enables event activation
by creating the /run/lvm/event-activation-on file. Event
activation will activate any further VGs that appear on the
system (or complete udev processing) after the -last service.
In the case of event activation, the udev rule will run
vgchange -aay <vgname> via a transient service
lvm-activate-<vgname>.service. This vgchange only scans
PVs in the VG being activated, also based on the pvs_online
files from pvscan.
When there are many VGs that need activation during system
startup, the two fixed services can activate them all much
faster than activating each VG individually via events.
lvm.conf auto_activation_settings can be used to configure
the behavior (default ["service_and_event", "pvscan_hints"]);
see the example after the list of values below.
"service_and_event" - the behavior described above, where
activation services are used first, and event activation
is used afterward.
"service_only" - only lvm-activate-vgs-* are used, and
no event-based activation occurs after the services finish.
(Equivalent to setting lvm.conf event_activation=0.)
"event_only" - the lvm-activate-vgs* services are skipped,
and all VGs are activated individually with event-based
activation.
"pvscan_hints" - the vgchange autoactivation commands
use pvs_online files created by pvscan. This optimization
limits the devices scanned by the vgchange command to only
PVs that have been processed by pvscan.
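An illustrative lvm.conf snippet (a sketch; the setting name and
values come from the description above, and the enclosing config
section is not spelled out here):
auto_activation_settings = [ "service_and_event", "pvscan_hints" ]
Replacing the list with [ "service_only" ] keeps only the fixed
services, equivalent to event_activation=0 as noted above.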
As we are not using 'enable-compat' for anything, remove this section.
Also remove the duplicated check for blkid.
Move the setting of some AC_ARG_ENABLE defaults into the macro
so it is always defined.
Enhance the logic for checking the supported systemd version,
while doing only a single check for the systemd package.
For version checking use the PKG_CHECK_EXISTS() macro.
Also use one pkg check for blkid.
Avoid checking the thin/cache_check version when the tools are
not present on the system.
Automatically handle the new settings
--disable-systemd-journal
--disable-appmachineid
Both settings check for the presence of the appropriate header files.
When the headers are present, the build automatically tries to
enable the features (adding a systemd dependency).
The user can disable them at any time to drop the systemd dependency.
Also add a --with-default-use-devices-file configure option to
automatically select the default value for this option.
For the moment the upstream default stays 0.
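An illustrative configure invocation using the new options (the
value syntax for --with-default-use-devices-file is an assumption here):
./configure --disable-systemd-journal --disable-appmachineid --with-default-use-devices-file=1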
The new system_id_source="appmachineid" will cause
lvm to use an lvm-specific derivation of the machine-id,
instead of the machine-id directly. This is now
recommended in place of using machineid.
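A minimal lvm.conf sketch (assuming the setting belongs in the
global section, as with other system_id_source values):
global {
    system_id_source = "appmachineid"
}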
Add a 'vdoimport' tool to support easy conversion of existing
VDO-manager-managed VDO volumes into lvm2-managed VDO LVs.
When the converted physical volume is already a logical volume,
the conversion happens within the VG itself, only validating
extent_size so that the virtual size of the VDO logical volume
can be expressed in extents.
Example of basic simple usage:
vdoimport --name vg/vdolv /dev/mapper/vdophysicalvolume
Alongside the existing DLM and sanlock locking schemes, this patch
introduces a new locking scheme: In-Drive-Mutex (IDM).
With IDM support in the drive, the locks reside in the drive itself,
so the locking lease is maintained in a central place: the drive
firmware. This can be seen as a typical client-server model: every
host (or node) in the cluster sends a request to the drive firmware
to lease a mutex, and the firmware acts as an arbitrator that grants
the mutex to one requester and can reject other applicants while the
mutex is held. To satisfy LVM activation in its different modes, IDM
supports two locking modes: exclusive and shareable.
Every IDM is identified by two IDs: a host ID and a resource ID.
The resource ID is a unique identifier for the resource being
protected; in the lvmlockd integration it is built from the VG UUID
and the LV UUID. For the global lock the resource ID bytes are all
zeros, and for the VG lock the LV UUID part is set to zero. Every
host can generate a random UUID and use it as the host ID in the
SCSI commands; this ID establishes the ownership of a mutex.
To make it easy to issue the IDM commands to the drive, a daemon
program named the IDM lock manager is created, as with other locking
schemes (e.g. sanlock). The detailed IDM SCSI commands are
encapsulated in the daemon, and lvmlockd uses wrapper APIs to
communicate with the daemon program.
This patch introduces the IDM locking wrapper layer; it forwards
locking requests from lvmlockd to the IDM lock manager and returns
the results received from the drives.
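From the user's perspective, IDM is expected to be used like the
existing lvmlockd lock types; a hedged sketch (the lock-type name
"idm" and the exact commands are assumptions modeled on the
sanlock/dlm usage, and the device name is illustrative):
vgcreate --shared --lock-type idm vg /dev/sda
vgchange --lock-start vg
lvcreate -n lv -L10G vg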
One thing worth mentioning is the IDM's LVB. IDM supports an LVB of
at most 7 bytes when stored in the drive; the most significant byte
of the 8 bytes is reserved for control bits. For this reason, the
patch maps a timestamp in microseconds onto the cached LVB:
essentially, if the timestamp was updated by another node, the local
LVB is invalid. When the timestamp is stored into the drive's LVB, a
time-going-backwards issue is possible, caused by limited time
precision or missing synchronization across nodes. So the IDM wrapper
fixes up the timestamp by incrementing the latest value by 1 and
writing it back to the drive.
Currently the LVB is used to track VG changes; its purpose was to
notify lvmetad to invalidate its cache when any metadata had been
altered, but since lvmetad is no longer used for caching metadata,
the LVB does not really do anything. The LVB functionality could
become useful again in the future, so enable it for IDM in the
first place.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
When a --with-... option is used as --without-..., it gets
assigned the value 'no', so support that better where we can.
Also remove 'shared' from the help text, as it is not supported.
Not all libc implementations (e.g. musl, uclibc, dietlibc) support
full runtime symbol version resolution the way glibc does.
Add support for not generating symbol versions when compiling
against them.
Additionally, libdevmapper.so was broken when compiled against
uclibc: the runtime linker caused dm_task_get_info_base() to be
called recursively, leading to a segmentation fault.
Introduce a --with-symvers=STYLE option, which allows choosing
between gnu and disabled symbol versioning. By default gnu symbol
versioning is used.
The __GNUC__ check is now replaced with GNU_SYMVER.
Additionally, the ld version script is only included for the gnu
option, which slightly reduces the output size.
Providing --without-symvers to the configure script when building
against uclibc fixes the segmentation fault described above, since
libdevmapper.so then no longer contains several versions of the
same symbol.
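Illustrative configure invocations (flag names as described above):
./configure --with-symvers=gnu    # default: gnu symbol versioning
./configure --without-symvers     # disable symbol versioning, e.g. for uclibc or musl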
Based on:
https://patchwork.kernel.org/project/dm-devel/patch/20180831144817.31207-1-m.niestroj@grinn-global.com/
Suggested-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Run lvmlockctl_kill_command with fork and exec to avoid the use of
a shell; the code is largely copied from lib/misc/lvm-exec.c.
Require lvmlockctl_kill_command to be a full path.
Use 'lvm config' instead of 'lvmconfig' to avoid the need for LVM_DIR.
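An illustrative lvm.conf entry (the script path is hypothetical;
it must be a full path as required above):
lvmlockctl_kill_command = "/usr/sbin/my_vg_kill_script"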
LVM2 is distributed under GPLv2 only. The readline library changed its
license long ago to GPLv3. Given that those licenses are incompatible
and you follow the FSF in their interpretation that dynamically linking
creates a derivative work, distributing LVM2 linked against a current
readline version might be legally problematic.
Add support for the BSD-licensed editline library as an
alternative to readline.
Link: https://thrysoee.dk/editline
Enable compilation of vdo and writecache support as internally
supported segment types by default.
To disable them use:
--with-vdo=none
--with-writecache=none
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum. When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error. The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.
Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):
lvcreate --type raidN --raidintegrity y [options]
Add an integrity layer to images of an existing raid LV:
lvconvert --raidintegrity y LV
Remove the integrity layer from images of a raid LV:
lvconvert --raidintegrity n LV
Settings
Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
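For example (VG/LV names and size are illustrative, flags as listed above):
lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n lv -L1G vg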
Initialization
When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV. The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized. The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)
Example: create a raid1 LV with integrity:
$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
Logical volume "rr_rimage_0_imeta" created.
Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
Logical volume "rr_rimage_1_imeta" created.
Logical volume "rr" created.
$ lvs -a foo
LV                  VG  Attr        LSize Origin              Cpy%Sync
rr                  foo rwi-a-r---  1.00g                         4.93
[rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig]    41.02
[rr_rimage_0_imeta] foo ewi-ao---- 12.00m
[rr_rimage_0_iorig] foo -wi-ao----  1.00g
[rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig]    39.45
[rr_rimage_1_imeta] foo ewi-ao---- 12.00m
[rr_rimage_1_iorig] foo -wi-ao----  1.00g
[rr_rmeta_0]        foo ewi-aor---  4.00m
[rr_rmeta_1]        foo ewi-aor---  4.00m
Update configure and make the code compilable if prlimit() is not
present. Since the code is suspicious, do not yet bother replacing
it with set/getrlimit().
When lvextend extends an LV that is active with a shared
lock, use this as a signal that other hosts may also have
the LV active, with gfs2 mounted, and should have the LV
refreshed to reflect the new size. Use the libdlmcontrol
run api, which uses dlm_controld/corosync to run an
lvchange --refresh command on other cluster nodes.
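For example (a sketch; the VG/LV name and size are illustrative),
running on one node:
lvextend -L+1G vg/lv
should cause dlm_controld to run lvchange --refresh vg/lv on the
other cluster nodes so they see the new size.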
Ensure configure.h is always the first included header.
Maybe we could eventually introduce the gcc -include option, but
for now this makes better use of dependency tracking.
Also move _REENTRANT and _GNU_SOURCE into configure.h so they do
not need to be present in various source files.
This ensures consistent compilation of headers like stdio.h, which
may otherwise produce different declarations.
. When using default settings, this commit should change
nothing. The first PE continues to be placed at 1 MiB,
resulting in a metadata area size of 1020 KiB (for
4K page sizes; slightly smaller for larger page sizes);
see the example after this list.
. When default_data_alignment is disabled in lvm.conf,
align pe_start at 1 MiB, based on a default metadata area
size that adapts to the page size. Previously, disabling
this option would result in mda_size that was too small
for common use, and produced a 64 KiB aligned pe_start.
. Customized pe_start and mda_size values continue to be
set as before in lvm.conf and command line.
. Remove the configure option for setting default_data_alignment
at build time.
. Improve alignment related option descriptions.
. Add section about alignment to pvcreate man page.
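The resulting layout can be checked with pvs (a sketch; the device
name is illustrative and the field names may vary by version). With
default settings on a machine with 4 KiB pages this should report a
1st PE of 1.00m and a metadata area of about 1020.00k, per the
description above:
pvcreate /dev/sdb
pvs -o pv_name,pe_start,pv_mda_size /dev/sdb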
Previously, DEFAULT_PVMETADATASIZE was 255 sectors.
However, the fact that the config setting named
"default_data_alignment" has a default value of 1 (MiB)
meant that DEFAULT_PVMETADATASIZE was having no effect.
The metadata area size is the space between the start of
the metadata area (page size offset from the start of the
device) and the first PE (1 MiB by default due to
default_data_alignment 1.) The result is a 1020 KiB metadata
area on machines with 4KiB page size (1024 KiB - 4 KiB),
and smaller on machines with larger page size.
If default_data_alignment was set to 0 (disabled), then
DEFAULT_PVMETADATASIZE 255 would take effect, and produce a
metadata area that was 188 KiB and pe_start of 192 KiB.
This was too small for common use.
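The old numbers follow from simple arithmetic (a sketch, using the
4 KiB page-size offset and 64 KiB alignment mentioned above):
255 sectors * 512 bytes = 127.5 KiB
4 KiB offset + 127.5 KiB = 131.5 KiB, rounded up to 64 KiB alignment = 192 KiB pe_start
192 KiB - 4 KiB = 188 KiB metadata area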
This is fixed by making the default metadata area size a
computed value that matches the value produced by
default_data_alignment.
The pvscan systemd service for autoactivation was
mistakenly dropped along with the lvmetad related
services.
The activation generator program now looks at the new
lvm.conf setting "event_activation" (default 1) to
switch between event activation and direct activation.
Previously, the old use_lvmetad setting was used to
switch between event and direct activation.
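A minimal lvm.conf sketch to force direct activation only (the
setting name and default come from above; placing it in the global
section is an assumption here):
global {
    event_activation = 0
}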
Native disk scanning is now both reduced and
async/parallel, which makes it comparable in
performance to (and often faster than) lvm
using lvmetad.
Autoactivation now uses local temp files to record
online PVs, and no longer requires lvmetad.
There should be no apparent command-level change
in behavior.
Checks whether VDO support is enabled.
Detects the presence of the 'vdoformat' tool, which is required
to format a VDO pool.
At the moment the VDO build is NOT automatically enabled
('none' is the default).
To enable the build of LVM with VDO support use:
configure --with-vdo=internal
TODO: A future version may switch to linking a small VDO library
for formatting (which would require linking and a package dependency).
Introduce a VDO plugin for monitoring VDO devices.
This plugin can also be used by other users, as it checks for the
UUID prefix 'LVM-' and runs lvm actions only on those devices.
Non-LVM devices are only monitored, and warnings are logged
when the usage threshold reaches 80%.